00:00:00.001 Started by upstream project "autotest-per-patch" build number 126198
00:00:00.001 originally caused by:
00:00:00.001 Started by upstream project "jbp-per-patch" build number 23959
00:00:00.001 originally caused by:
00:00:00.001 Started by user sys_sgci
00:00:00.032 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy
00:00:03.079 The recommended git tool is: git
00:00:03.080 using credential 00000000-0000-0000-0000-000000000002
00:00:03.082 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:03.092 Fetching changes from the remote Git repository
00:00:03.096 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:03.107 Using shallow fetch with depth 1
00:00:03.107 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:03.107 > git --version # timeout=10
00:00:03.115 > git --version # 'git version 2.39.2'
00:00:03.115 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:03.125 Setting http proxy: proxy-dmz.intel.com:911
00:00:03.125 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/changes/75/21875/23 # timeout=5
00:00:07.720 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:07.730 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:07.740 Checking out Revision 642aedf8bba2e584685fe6e0b1310032564b5451 (FETCH_HEAD)
00:00:07.740 > git config core.sparsecheckout # timeout=10
00:00:07.748 > git read-tree -mu HEAD # timeout=10
00:00:07.767 > git checkout -f 642aedf8bba2e584685fe6e0b1310032564b5451 # timeout=5
00:00:07.797 Commit message: "jenkins/jjb-config: Remove SPDK_TEST_RELEASE_BUILD from packaging job"
00:00:07.797 > git rev-list --no-walk d49304e16352441ae7eebb2419125dd094201f3e # timeout=10
00:00:07.910 [Pipeline] Start of Pipeline
00:00:07.926 [Pipeline] library
00:00:07.928 Loading library shm_lib@master
00:00:07.928 Library shm_lib@master is cached. Copying from home.
00:00:07.950 [Pipeline] node
00:00:07.959 Running on WFP8 in /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:00:07.960 [Pipeline] {
00:00:07.977 [Pipeline] catchError
00:00:07.979 [Pipeline] {
00:00:07.994 [Pipeline] wrap
00:00:08.005 [Pipeline] {
00:00:08.010 [Pipeline] stage
00:00:08.011 [Pipeline] { (Prologue)
00:00:08.187 [Pipeline] sh
00:00:08.465 + logger -p user.info -t JENKINS-CI
00:00:08.482 [Pipeline] echo
00:00:08.483 Node: WFP8
00:00:08.491 [Pipeline] sh
00:00:08.782 [Pipeline] setCustomBuildProperty
00:00:08.793 [Pipeline] echo
00:00:08.795 Cleanup processes
00:00:08.803 [Pipeline] sh
00:00:09.121 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:09.121 3474714 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:09.135 [Pipeline] sh
00:00:09.454 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:09.454 ++ grep -v 'sudo pgrep'
00:00:09.454 ++ awk '{print $1}'
00:00:09.454 + sudo kill -9
00:00:09.454 + true
00:00:09.466 [Pipeline] cleanWs
00:00:09.474 [WS-CLEANUP] Deleting project workspace...
00:00:09.474 [WS-CLEANUP] Deferred wipeout is used...
00:00:09.479 [WS-CLEANUP] done
00:00:09.483 [Pipeline] setCustomBuildProperty
00:00:09.497 [Pipeline] sh
00:00:09.769 + sudo git config --global --replace-all safe.directory '*'
00:00:09.847 [Pipeline] httpRequest
00:00:09.870 [Pipeline] echo
00:00:09.871 Sorcerer 10.211.164.101 is alive
00:00:09.880 [Pipeline] httpRequest
00:00:09.884 HttpMethod: GET
00:00:09.885 URL: http://10.211.164.101/packages/jbp_642aedf8bba2e584685fe6e0b1310032564b5451.tar.gz
00:00:09.885 Sending request to url: http://10.211.164.101/packages/jbp_642aedf8bba2e584685fe6e0b1310032564b5451.tar.gz
00:00:09.906 Response Code: HTTP/1.1 200 OK
00:00:09.907 Success: Status code 200 is in the accepted range: 200,404
00:00:09.908 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_642aedf8bba2e584685fe6e0b1310032564b5451.tar.gz
00:00:33.615 [Pipeline] sh
00:00:33.894 + tar --no-same-owner -xf jbp_642aedf8bba2e584685fe6e0b1310032564b5451.tar.gz
00:00:33.909 [Pipeline] httpRequest
00:00:33.925 [Pipeline] echo
00:00:33.926 Sorcerer 10.211.164.101 is alive
00:00:33.934 [Pipeline] httpRequest
00:00:33.939 HttpMethod: GET
00:00:33.939 URL: http://10.211.164.101/packages/spdk_a95bbf2336179ce1093307c872b1debc25193da2.tar.gz
00:00:33.939 Sending request to url: http://10.211.164.101/packages/spdk_a95bbf2336179ce1093307c872b1debc25193da2.tar.gz
00:00:33.943 Response Code: HTTP/1.1 200 OK
00:00:33.944 Success: Status code 200 is in the accepted range: 200,404
00:00:33.944 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_a95bbf2336179ce1093307c872b1debc25193da2.tar.gz
00:02:36.253 [Pipeline] sh
00:02:36.540 + tar --no-same-owner -xf spdk_a95bbf2336179ce1093307c872b1debc25193da2.tar.gz
00:02:39.090 [Pipeline] sh
00:02:39.373 + git -C spdk log --oneline -n5
00:02:39.373 a95bbf233 blob: set parent_id properly on spdk_bs_blob_set_external_parent.
00:02:39.373 248c547d0 nvmf/tcp: add option for selecting a sock impl
00:02:39.373 2d30d9f83 accel: introduce tasks in sequence limit
00:02:39.373 2728651ee accel: adjust task per ch define name
00:02:39.373 e7cce062d Examples/Perf: correct the calculation of total bandwidth
00:02:39.385 [Pipeline] }
00:02:39.401 [Pipeline] // stage
00:02:39.410 [Pipeline] stage
00:02:39.412 [Pipeline] { (Prepare)
00:02:39.429 [Pipeline] writeFile
00:02:39.445 [Pipeline] sh
00:02:39.726 + logger -p user.info -t JENKINS-CI
00:02:39.738 [Pipeline] sh
00:02:40.019 + logger -p user.info -t JENKINS-CI
00:02:40.031 [Pipeline] sh
00:02:40.317 + cat autorun-spdk.conf
00:02:40.317 SPDK_RUN_FUNCTIONAL_TEST=1
00:02:40.317 SPDK_TEST_NVMF=1
00:02:40.317 SPDK_TEST_NVME_CLI=1
00:02:40.317 SPDK_TEST_NVMF_TRANSPORT=tcp
00:02:40.317 SPDK_TEST_NVMF_NICS=e810
00:02:40.317 SPDK_TEST_VFIOUSER=1
00:02:40.317 SPDK_RUN_UBSAN=1
00:02:40.317 NET_TYPE=phy
00:02:40.325 RUN_NIGHTLY=0
00:02:40.330 [Pipeline] readFile
00:02:40.353 [Pipeline] withEnv
00:02:40.355 [Pipeline] {
00:02:40.367 [Pipeline] sh
00:02:40.652 + set -ex
00:02:40.652 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]]
00:02:40.652 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:02:40.652 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:02:40.652 ++ SPDK_TEST_NVMF=1
00:02:40.652 ++ SPDK_TEST_NVME_CLI=1
00:02:40.652 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:02:40.652 ++ SPDK_TEST_NVMF_NICS=e810
00:02:40.652 ++ SPDK_TEST_VFIOUSER=1
00:02:40.652 ++ SPDK_RUN_UBSAN=1
00:02:40.652 ++ NET_TYPE=phy
00:02:40.652 ++ RUN_NIGHTLY=0
00:02:40.652 + case $SPDK_TEST_NVMF_NICS in
00:02:40.652 + DRIVERS=ice
00:02:40.652 + [[ tcp == \r\d\m\a ]]
00:02:40.652 + [[ -n ice ]]
00:02:40.652 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4
00:02:40.652 rmmod: ERROR: Module mlx4_ib is not currently loaded
00:02:40.652 rmmod: ERROR: Module mlx5_ib is not currently loaded
00:02:40.652 rmmod: ERROR: Module irdma is not currently loaded
00:02:40.652 rmmod: ERROR: Module i40iw is not currently loaded
00:02:40.652 rmmod: ERROR: Module iw_cxgb4 is not currently loaded
00:02:40.652 + true
00:02:40.652 + for D in $DRIVERS
00:02:40.652 + sudo modprobe ice
00:02:40.652 + exit 0
00:02:40.662 [Pipeline] }
00:02:40.678 [Pipeline] // withEnv
00:02:40.684 [Pipeline] }
00:02:40.703 [Pipeline] // stage
00:02:40.713 [Pipeline] catchError
00:02:40.715 [Pipeline] {
00:02:40.731 [Pipeline] timeout
00:02:40.732 Timeout set to expire in 50 min
00:02:40.734 [Pipeline] {
00:02:40.749 [Pipeline] stage
00:02:40.752 [Pipeline] { (Tests)
00:02:40.768 [Pipeline] sh
00:02:41.050 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:02:41.050 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:02:41.050 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest
00:02:41.050 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]]
00:02:41.050 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:02:41.050 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output
00:02:41.050 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]]
00:02:41.050 + [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]]
00:02:41.050 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output
00:02:41.050 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]]
00:02:41.050 + [[ nvmf-tcp-phy-autotest == pkgdep-* ]]
00:02:41.050 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:02:41.050 + source /etc/os-release
00:02:41.050 ++ NAME='Fedora Linux'
00:02:41.050 ++ VERSION='38 (Cloud Edition)'
00:02:41.050 ++ ID=fedora
00:02:41.050 ++ VERSION_ID=38
00:02:41.050 ++ VERSION_CODENAME=
00:02:41.050 ++ PLATFORM_ID=platform:f38
00:02:41.050 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)'
00:02:41.050 ++ ANSI_COLOR='0;38;2;60;110;180'
00:02:41.050 ++ LOGO=fedora-logo-icon
00:02:41.050 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38
00:02:41.050 ++ HOME_URL=https://fedoraproject.org/
00:02:41.050 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/
00:02:41.050 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:02:41.050 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:02:41.050 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:02:41.050 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38
00:02:41.050 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:02:41.050 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38
00:02:41.050 ++ SUPPORT_END=2024-05-14
00:02:41.050 ++ VARIANT='Cloud Edition'
00:02:41.050 ++ VARIANT_ID=cloud
00:02:41.050 + uname -a
00:02:41.050 Linux spdk-wfp-08 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux
00:02:41.050 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status
00:02:43.636 Hugepages
00:02:43.636 node hugesize free / total
00:02:43.636 node0 1048576kB 0 / 0
00:02:43.636 node0 2048kB 0 / 0
00:02:43.636 node1 1048576kB 0 / 0
00:02:43.636 node1 2048kB 0 / 0
00:02:43.636
00:02:43.636 Type BDF Vendor Device NUMA Driver Device Block devices
00:02:43.636 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - -
00:02:43.636 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - -
00:02:43.636 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - -
00:02:43.636 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - -
00:02:43.636 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - -
00:02:43.636 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - -
00:02:43.636 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - -
00:02:43.636 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - -
00:02:43.636 NVMe 0000:5e:00.0 8086 0a54 0 nvme nvme0 nvme0n1
00:02:43.636 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - -
00:02:43.636 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - -
00:02:43.636 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - -
00:02:43.636 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - -
00:02:43.636 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - -
00:02:43.636 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - -
00:02:43.636 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - -
00:02:43.636 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - -
00:02:43.636 + rm -f /tmp/spdk-ld-path
00:02:43.636 + source autorun-spdk.conf
00:02:43.636 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:02:43.636 ++ SPDK_TEST_NVMF=1
00:02:43.636 ++ SPDK_TEST_NVME_CLI=1
00:02:43.636 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:02:43.636 ++ SPDK_TEST_NVMF_NICS=e810
00:02:43.636 ++ SPDK_TEST_VFIOUSER=1
00:02:43.636 ++ SPDK_RUN_UBSAN=1
00:02:43.636 ++ NET_TYPE=phy
00:02:43.636 ++ RUN_NIGHTLY=0
00:02:43.636 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:02:43.636 + [[ -n '' ]]
00:02:43.636 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:02:43.636 + for M in /var/spdk/build-*-manifest.txt
00:02:43.636 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:02:43.636 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:02:43.636 + for M in /var/spdk/build-*-manifest.txt
00:02:43.636 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:02:43.636 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:02:43.636 ++ uname
00:02:43.636 + [[ Linux == \L\i\n\u\x ]]
00:02:43.636 + sudo dmesg -T
00:02:43.636 + sudo dmesg --clear
00:02:43.636 + dmesg_pid=3476154
00:02:43.636 + [[ Fedora Linux == FreeBSD ]]
00:02:43.636 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:02:43.636 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:02:43.636 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:02:43.636 + [[ -x /usr/src/fio-static/fio ]]
00:02:43.636 + sudo dmesg -Tw
00:02:43.636 + export FIO_BIN=/usr/src/fio-static/fio
00:02:43.636 + FIO_BIN=/usr/src/fio-static/fio
00:02:43.636 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]]
00:02:43.636 + [[ ! -v VFIO_QEMU_BIN ]]
00:02:43.636 + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:02:43.636 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:02:43.636 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:02:43.636 + [[ -e /usr/local/qemu/vanilla-latest ]]
00:02:43.636 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:02:43.636 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:02:43.636 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:02:43.636 Test configuration:
00:02:43.636 SPDK_RUN_FUNCTIONAL_TEST=1
00:02:43.636 SPDK_TEST_NVMF=1
00:02:43.636 SPDK_TEST_NVME_CLI=1
00:02:43.636 SPDK_TEST_NVMF_TRANSPORT=tcp
00:02:43.636 SPDK_TEST_NVMF_NICS=e810
00:02:43.636 SPDK_TEST_VFIOUSER=1
00:02:43.636 SPDK_RUN_UBSAN=1
00:02:43.636 NET_TYPE=phy
00:02:43.636 RUN_NIGHTLY=0
15:44:12 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
15:44:12 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]]
15:44:12 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
15:44:12 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
15:44:12 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
15:44:12 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
15:44:12 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
15:44:12 -- paths/export.sh@5 -- $ export PATH
15:44:12 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
15:44:12 -- common/autobuild_common.sh@443 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output
15:44:12 -- common/autobuild_common.sh@444 -- $ date +%s
15:44:12 -- common/autobuild_common.sh@444 -- $ mktemp -dt spdk_1721051052.XXXXXX
15:44:12 -- common/autobuild_common.sh@444 -- $ SPDK_WORKSPACE=/tmp/spdk_1721051052.L9WU69
15:44:12 -- common/autobuild_common.sh@446 -- $ [[ -n '' ]]
15:44:12 -- common/autobuild_common.sh@450 -- $ '[' -n '' ']'
15:44:12 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/'
15:44:12 -- common/autobuild_common.sh@457 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp'
15:44:12 -- common/autobuild_common.sh@459 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs'
15:44:12 -- common/autobuild_common.sh@460 -- $ get_config_params
15:44:12 -- common/autotest_common.sh@396 -- $ xtrace_disable
15:44:12 -- common/autotest_common.sh@10 -- $ set +x
15:44:12 -- common/autobuild_common.sh@460 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user'
15:44:12 -- common/autobuild_common.sh@462 -- $ start_monitor_resources
15:44:12 -- pm/common@17 -- $ local monitor
15:44:12 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
15:44:12 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
15:44:12 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
15:44:12 -- pm/common@21 -- $ date +%s
15:44:12 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
15:44:12 -- pm/common@21 -- $ date +%s
15:44:12 -- pm/common@25 -- $ sleep 1
15:44:12 -- pm/common@21 -- $ date +%s
15:44:12 -- pm/common@21 -- $ date +%s
15:44:12 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721051052
15:44:12 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721051052
15:44:12 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721051052
15:44:12 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721051052
00:02:43.636 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721051052_collect-vmstat.pm.log
00:02:43.636 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721051052_collect-cpu-load.pm.log
00:02:43.636 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721051052_collect-cpu-temp.pm.log
00:02:43.636 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721051052_collect-bmc-pm.bmc.pm.log
00:02:44.571 15:44:13 -- common/autobuild_common.sh@463 -- $ trap stop_monitor_resources EXIT
00:02:44.571 15:44:13 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
00:02:44.571 15:44:13 -- spdk/autobuild.sh@12 -- $ umask 022
00:02:44.571 15:44:13 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:02:44.571 15:44:13 -- spdk/autobuild.sh@16 -- $ date -u
00:02:44.571 Mon Jul 15 01:44:13 PM UTC 2024
00:02:44.571 15:44:13 -- spdk/autobuild.sh@17 -- $ git describe --tags
00:02:44.571 v24.09-pre-209-ga95bbf233
00:02:44.571 15:44:13 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']'
00:02:44.571 15:44:13 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
00:02:44.571 15:44:13 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
00:02:44.571 15:44:13 -- common/autotest_common.sh@1099 -- $ '[' 3 -le 1 ']'
00:02:44.571 15:44:13 -- common/autotest_common.sh@1105 -- $ xtrace_disable
00:02:44.571 15:44:13 -- common/autotest_common.sh@10 -- $ set +x
00:02:44.571 ************************************
00:02:44.571 START TEST ubsan
00:02:44.571 ************************************
00:02:44.571 15:44:13 ubsan -- common/autotest_common.sh@1123 -- $ echo 'using ubsan'
00:02:44.571 using ubsan
00:02:44.571
00:02:44.571 real 0m0.001s
00:02:44.571 user 0m0.001s
00:02:44.571 sys 0m0.000s
00:02:44.571 15:44:13 ubsan -- common/autotest_common.sh@1124 -- $ xtrace_disable
00:02:44.571 15:44:13 ubsan -- common/autotest_common.sh@10 -- $ set +x
00:02:44.571 ************************************
00:02:44.571 END TEST ubsan
00:02:44.571 ************************************
00:02:44.571 15:44:13 -- common/autotest_common.sh@1142 -- $ return 0
00:02:44.571 15:44:13 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']'
00:02:44.571 15:44:13 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in
00:02:44.571 15:44:13 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]]
00:02:44.571 15:44:13 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]]
00:02:44.571 15:44:13 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]]
00:02:44.571 15:44:13 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]]
00:02:44.571 15:44:13 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]]
00:02:44.571 15:44:13 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]]
00:02:44.571 15:44:13 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-shared
00:02:44.830 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk
00:02:44.830 Using default DPDK in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build
00:02:45.089 Using 'verbs' RDMA provider
00:02:58.242 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done.
00:03:08.222 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done.
00:03:08.481 Creating mk/config.mk...done.
00:03:08.481 Creating mk/cc.flags.mk...done.
00:03:08.481 Type 'make' to build.
15:44:37 -- spdk/autobuild.sh@69 -- $ run_test make make -j96
15:44:37 -- common/autotest_common.sh@1099 -- $ '[' 3 -le 1 ']'
15:44:37 -- common/autotest_common.sh@1105 -- $ xtrace_disable
15:44:37 -- common/autotest_common.sh@10 -- $ set +x
00:03:08.481 ************************************
00:03:08.481 START TEST make
00:03:08.481 ************************************
00:03:08.481 15:44:37 make -- common/autotest_common.sh@1123 -- $ make -j96
00:03:08.740 make[1]: Nothing to be done for 'all'.
00:03:10.113 The Meson build system
00:03:10.113 Version: 1.3.1
00:03:10.113 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user
00:03:10.113 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
00:03:10.113 Build type: native build
00:03:10.113 Project name: libvfio-user
00:03:10.113 Project version: 0.0.1
00:03:10.113 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)")
00:03:10.113 C linker for the host machine: cc ld.bfd 2.39-16
00:03:10.113 Host machine cpu family: x86_64
00:03:10.113 Host machine cpu: x86_64
00:03:10.113 Run-time dependency threads found: YES
00:03:10.113 Library dl found: YES
00:03:10.113 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0
00:03:10.113 Run-time dependency json-c found: YES 0.17
00:03:10.113 Run-time dependency cmocka found: YES 1.1.7
00:03:10.113 Program pytest-3 found: NO
00:03:10.113 Program flake8 found: NO
00:03:10.113 Program misspell-fixer found: NO
00:03:10.113 Program restructuredtext-lint found: NO
00:03:10.113 Program valgrind found: YES (/usr/bin/valgrind)
00:03:10.113 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:03:10.113 Compiler for C supports arguments -Wmissing-declarations: YES
00:03:10.113 Compiler for C supports arguments -Wwrite-strings: YES
00:03:10.113 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup.
00:03:10.113 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-lspci.sh)
00:03:10.113 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-linkage.sh)
00:03:10.113 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup.
00:03:10.113 Build targets in project: 8
00:03:10.113 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions:
00:03:10.113 * 0.57.0: {'exclude_suites arg in add_test_setup'}
00:03:10.113
00:03:10.113 libvfio-user 0.0.1
00:03:10.113
00:03:10.113 User defined options
00:03:10.113 buildtype : debug
00:03:10.113 default_library: shared
00:03:10.113 libdir : /usr/local/lib
00:03:10.113
00:03:10.113 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:03:10.713 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug'
00:03:10.713 [1/37] Compiling C object samples/lspci.p/lspci.c.o
00:03:10.713 [2/37] Compiling C object samples/null.p/null.c.o
00:03:10.713 [3/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o
00:03:10.713 [4/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o
00:03:10.713 [5/37] Compiling C object samples/client.p/.._lib_tran.c.o
00:03:10.713 [6/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o
00:03:10.713 [7/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o
00:03:10.713 [8/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o
00:03:10.713 [9/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o
00:03:10.713 [10/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o
00:03:10.713 [11/37] Compiling C object samples/client.p/.._lib_migration.c.o
00:03:10.713 [12/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o
00:03:10.713 [13/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o
00:03:10.713 [14/37] Compiling C object test/unit_tests.p/mocks.c.o
00:03:10.713 [15/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o
00:03:10.713 [16/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o
00:03:10.713 [17/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o
00:03:10.713 [18/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o
00:03:10.713 [19/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o
00:03:10.713 [20/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o
00:03:10.713 [21/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o
00:03:10.713 [22/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o
00:03:10.713 [23/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o
00:03:10.713 [24/37] Compiling C object samples/server.p/server.c.o
00:03:10.713 [25/37] Compiling C object test/unit_tests.p/unit-tests.c.o
00:03:10.713 [26/37] Compiling C object samples/client.p/client.c.o
00:03:10.713 [27/37] Linking target samples/client
00:03:10.713 [28/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o
00:03:10.713 [29/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o
00:03:10.971 [30/37] Linking target lib/libvfio-user.so.0.0.1
00:03:10.971 [31/37] Linking target test/unit_tests
00:03:10.971 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols
00:03:10.971 [33/37] Linking target samples/null
00:03:10.971 [34/37] Linking target samples/server
00:03:10.971 [35/37] Linking target samples/gpio-pci-idio-16
00:03:10.971 [36/37] Linking target samples/lspci
00:03:10.971 [37/37] Linking target samples/shadow_ioeventfd_server
00:03:10.972 INFO: autodetecting backend as ninja
00:03:10.972 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
00:03:10.972 DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
00:03:11.251 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug'
00:03:11.251 ninja: no work to do.
00:03:16.501 The Meson build system
00:03:16.501 Version: 1.3.1
00:03:16.501 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk
00:03:16.501 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp
00:03:16.501 Build type: native build
00:03:16.501 Program cat found: YES (/usr/bin/cat)
00:03:16.501 Project name: DPDK
00:03:16.501 Project version: 24.03.0
00:03:16.501 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)")
00:03:16.501 C linker for the host machine: cc ld.bfd 2.39-16
00:03:16.501 Host machine cpu family: x86_64
00:03:16.501 Host machine cpu: x86_64
00:03:16.501 Message: ## Building in Developer Mode ##
00:03:16.501 Program pkg-config found: YES (/usr/bin/pkg-config)
00:03:16.501 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh)
00:03:16.501 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh)
00:03:16.501 Program python3 found: YES (/usr/bin/python3)
00:03:16.501 Program cat found: YES (/usr/bin/cat)
00:03:16.501 Compiler for C supports arguments -march=native: YES
00:03:16.501 Checking for size of "void *" : 8
00:03:16.501 Checking for size of "void *" : 8 (cached)
00:03:16.501 Compiler for C supports link arguments -Wl,--undefined-version: NO
00:03:16.501 Library m found: YES
00:03:16.501 Library numa found: YES
00:03:16.501 Has header "numaif.h" : YES
00:03:16.501 Library fdt found: NO
00:03:16.501 Library execinfo found: NO
00:03:16.501 Has header "execinfo.h" : YES
00:03:16.501 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0
00:03:16.501 Run-time dependency libarchive found: NO (tried pkgconfig)
00:03:16.501 Run-time dependency libbsd found: NO (tried pkgconfig)
00:03:16.501 Run-time dependency jansson found: NO (tried pkgconfig)
00:03:16.501 Run-time dependency openssl found: YES 3.0.9
00:03:16.501 Run-time dependency libpcap found: YES 1.10.4
00:03:16.501 Has header "pcap.h" with dependency libpcap: YES
00:03:16.501 Compiler for C supports arguments -Wcast-qual: YES
00:03:16.501 Compiler for C supports arguments -Wdeprecated: YES
00:03:16.501 Compiler for C supports arguments -Wformat: YES
00:03:16.501 Compiler for C supports arguments -Wformat-nonliteral: NO
00:03:16.501 Compiler for C supports arguments -Wformat-security: NO
00:03:16.501 Compiler for C supports arguments -Wmissing-declarations: YES
00:03:16.501 Compiler for C supports arguments -Wmissing-prototypes: YES
00:03:16.501 Compiler for C supports arguments -Wnested-externs: YES
00:03:16.501 Compiler for C supports arguments -Wold-style-definition: YES
00:03:16.501 Compiler for C supports arguments -Wpointer-arith: YES
00:03:16.501 Compiler for C supports arguments -Wsign-compare: YES
00:03:16.501 Compiler for C supports arguments -Wstrict-prototypes: YES
00:03:16.501 Compiler for C supports arguments -Wundef: YES
00:03:16.501 Compiler for C supports arguments -Wwrite-strings: YES
00:03:16.501 Compiler for C supports arguments -Wno-address-of-packed-member: YES
00:03:16.501 Compiler for C supports arguments -Wno-packed-not-aligned: YES
00:03:16.501 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:03:16.501 Compiler for C supports arguments -Wno-zero-length-bounds: YES
00:03:16.501 Program objdump found: YES (/usr/bin/objdump)
00:03:16.501 Compiler for C supports arguments -mavx512f: YES
00:03:16.501 Checking if "AVX512 checking" compiles: YES
00:03:16.501 Fetching value of define "__SSE4_2__" : 1
00:03:16.501 Fetching value of define "__AES__" : 1
00:03:16.501 Fetching value of define "__AVX__" : 1
00:03:16.501 Fetching value of define "__AVX2__" : 1
00:03:16.501 Fetching value of define "__AVX512BW__" : 1
00:03:16.501 Fetching value of define "__AVX512CD__" : 1
00:03:16.501 Fetching value of define "__AVX512DQ__" : 1
00:03:16.501 Fetching value of define "__AVX512F__" : 1
00:03:16.501 Fetching value of define "__AVX512VL__" : 1
00:03:16.501 Fetching value of define "__PCLMUL__" : 1
00:03:16.501 Fetching value of define "__RDRND__" : 1
00:03:16.501 Fetching value of define "__RDSEED__" : 1
00:03:16.501 Fetching value of define "__VPCLMULQDQ__" : (undefined)
00:03:16.501 Fetching value of define "__znver1__" : (undefined)
00:03:16.501 Fetching value of define "__znver2__" : (undefined)
00:03:16.501 Fetching value of define "__znver3__" : (undefined)
00:03:16.501 Fetching value of define "__znver4__" : (undefined)
00:03:16.501 Compiler for C supports arguments -Wno-format-truncation: YES
00:03:16.501 Message: lib/log: Defining dependency "log"
00:03:16.501 Message: lib/kvargs: Defining dependency "kvargs"
00:03:16.501 Message: lib/telemetry: Defining dependency "telemetry"
00:03:16.501 Checking for function "getentropy" : NO
00:03:16.501 Message: lib/eal: Defining dependency "eal"
00:03:16.501 Message: lib/ring: Defining dependency "ring"
00:03:16.501 Message: lib/rcu: Defining dependency "rcu"
00:03:16.501 Message: lib/mempool: Defining dependency "mempool"
00:03:16.501 Message: lib/mbuf: Defining dependency "mbuf"
00:03:16.501 Fetching value of define "__PCLMUL__" : 1 (cached)
00:03:16.501 Fetching value of define "__AVX512F__" : 1 (cached)
00:03:16.501 Fetching value of define "__AVX512BW__" : 1 (cached)
00:03:16.501 Fetching value of define "__AVX512DQ__" : 1 (cached)
00:03:16.501 Fetching value of define "__AVX512VL__" : 1 (cached)
00:03:16.501 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached)
00:03:16.501 Compiler for C supports arguments -mpclmul: YES
00:03:16.501 Compiler for C supports arguments -maes: YES
00:03:16.501 Compiler for C supports arguments -mavx512f: YES (cached)
00:03:16.501 Compiler for C supports arguments -mavx512bw: YES
00:03:16.501 Compiler for C supports arguments -mavx512dq: YES
00:03:16.501 Compiler for C supports arguments -mavx512vl: YES
00:03:16.501 Compiler for C supports arguments -mvpclmulqdq: YES
00:03:16.501 Compiler for C supports arguments -mavx2: YES
00:03:16.501 Compiler for C supports arguments -mavx: YES
00:03:16.501 Message: lib/net: Defining dependency "net"
00:03:16.501 Message: lib/meter: Defining dependency "meter"
00:03:16.501 Message: lib/ethdev: Defining dependency "ethdev"
00:03:16.501 Message: lib/pci: Defining dependency "pci"
00:03:16.501 Message: lib/cmdline: Defining dependency "cmdline"
00:03:16.501 Message: lib/hash: Defining dependency "hash"
00:03:16.501 Message: lib/timer: Defining dependency "timer"
00:03:16.501 Message: lib/compressdev: Defining dependency "compressdev"
00:03:16.501 Message: lib/cryptodev: Defining dependency "cryptodev"
00:03:16.501 Message: lib/dmadev: Defining dependency "dmadev"
00:03:16.501 Compiler for C supports arguments -Wno-cast-qual: YES
00:03:16.501 Message: lib/power: Defining dependency "power"
00:03:16.501 Message: lib/reorder: Defining dependency "reorder"
00:03:16.501 Message: lib/security: Defining dependency "security"
00:03:16.501 Has header "linux/userfaultfd.h" : YES
00:03:16.501 Has header "linux/vduse.h" : YES
00:03:16.501 Message: lib/vhost: Defining dependency "vhost"
00:03:16.501 Compiler for C supports arguments -Wno-format-truncation: YES (cached)
00:03:16.501 Message: drivers/bus/pci: Defining dependency "bus_pci"
00:03:16.501 Message: drivers/bus/vdev: Defining dependency "bus_vdev"
00:03:16.501 Message: drivers/mempool/ring: Defining dependency "mempool_ring"
00:03:16.501 Message: Disabling raw/* drivers: missing internal dependency "rawdev"
00:03:16.501 Message: Disabling regex/* drivers: missing internal dependency "regexdev"
00:03:16.501 Message: Disabling ml/* drivers: missing internal dependency "mldev"
00:03:16.501 Message: Disabling event/* drivers: missing internal dependency "eventdev"
00:03:16.501 Message: Disabling baseband/* drivers: missing internal dependency "bbdev"
00:03:16.501 Message: Disabling gpu/* drivers: missing internal dependency "gpudev"
00:03:16.501 Program doxygen found: YES (/usr/bin/doxygen)
00:03:16.501 Configuring doxy-api-html.conf using configuration
00:03:16.501 Configuring doxy-api-man.conf using configuration
00:03:16.501 Program mandb found: YES (/usr/bin/mandb)
00:03:16.501 Program sphinx-build found: NO
00:03:16.501 Configuring rte_build_config.h using configuration
00:03:16.501 Message:
00:03:16.501 =================
00:03:16.501 Applications Enabled
00:03:16.501 =================
00:03:16.501
00:03:16.501 apps:
00:03:16.501
00:03:16.501
00:03:16.501 Message:
00:03:16.501 =================
00:03:16.501 Libraries Enabled
00:03:16.501 =================
00:03:16.501
00:03:16.501 libs:
00:03:16.501 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf,
00:03:16.501 net, meter, ethdev, pci, cmdline, hash, timer, compressdev,
00:03:16.501 cryptodev, dmadev, power, reorder, security, vhost,
00:03:16.501
00:03:16.501 Message:
00:03:16.501 ===============
00:03:16.501 Drivers Enabled
00:03:16.501 ===============
00:03:16.501
00:03:16.501 common:
00:03:16.501
00:03:16.501 bus:
00:03:16.501 pci, vdev,
00:03:16.501 mempool:
00:03:16.501 ring,
00:03:16.501 dma:
00:03:16.501
00:03:16.501 net:
00:03:16.501
00:03:16.501 crypto:
00:03:16.501
00:03:16.501 compress:
00:03:16.501
00:03:16.501 vdpa:
00:03:16.501
00:03:16.501
00:03:16.501 Message:
00:03:16.501 =================
00:03:16.501 Content Skipped
00:03:16.501 =================
00:03:16.501
00:03:16.501 apps:
00:03:16.501 dumpcap: explicitly disabled via build config
00:03:16.501 graph: explicitly disabled via build config
00:03:16.501 pdump: explicitly disabled via build config
00:03:16.501 proc-info: explicitly disabled via build config
00:03:16.501 test-acl: explicitly disabled via build config
00:03:16.501 test-bbdev: explicitly disabled via build config
00:03:16.501 test-cmdline: explicitly disabled via build config
00:03:16.501 test-compress-perf: explicitly disabled via build config
00:03:16.501 test-crypto-perf: explicitly disabled via build config
00:03:16.501 test-dma-perf: explicitly disabled via build config
00:03:16.501 test-eventdev: explicitly disabled via build config
00:03:16.501 test-fib: explicitly disabled via build config
00:03:16.501 test-flow-perf: explicitly disabled via build config
00:03:16.501 test-gpudev: explicitly disabled via build config
00:03:16.501 test-mldev: explicitly disabled via build config
00:03:16.501 test-pipeline: explicitly disabled via build config
00:03:16.501 test-pmd: explicitly disabled via build config
00:03:16.501 test-regex: explicitly disabled via build config
00:03:16.501 test-sad: explicitly disabled via build config
00:03:16.501 test-security-perf: explicitly disabled via build config
00:03:16.501
00:03:16.501 libs:
00:03:16.501 argparse: explicitly disabled via build config
00:03:16.501 metrics: explicitly disabled via build config
00:03:16.501 acl: explicitly disabled via build config
00:03:16.502 bbdev: explicitly disabled via build config
00:03:16.502 bitratestats: explicitly disabled via build config
00:03:16.502 bpf: explicitly disabled via build config
00:03:16.502 cfgfile: explicitly disabled via build config
00:03:16.502 distributor: explicitly disabled via build config
00:03:16.502 efd: explicitly disabled via build config
00:03:16.502 eventdev: explicitly disabled via build config
00:03:16.502 dispatcher: explicitly disabled via build config
00:03:16.502 gpudev: explicitly disabled via build config
00:03:16.502 gro: explicitly disabled via build config
00:03:16.502 gso: explicitly disabled via build config
00:03:16.502 ip_frag: explicitly disabled via build config
00:03:16.502 jobstats: explicitly disabled via build config
00:03:16.502 latencystats: explicitly disabled via build config
00:03:16.502 lpm: explicitly disabled via build config
00:03:16.502 member: explicitly disabled via build config
00:03:16.502 pcapng: explicitly disabled via build config
00:03:16.502 rawdev: explicitly disabled via build config
00:03:16.502 regexdev: explicitly disabled via build config
00:03:16.502 mldev: explicitly disabled via build config
00:03:16.502 rib: explicitly disabled via build config
00:03:16.502 sched: explicitly disabled via build config
00:03:16.502 stack: explicitly disabled via build config
00:03:16.502 ipsec: explicitly disabled via build config
00:03:16.502 pdcp: explicitly disabled via build config
00:03:16.502 fib: explicitly disabled via build config
00:03:16.502 port: explicitly disabled via build config
00:03:16.502 pdump: explicitly disabled via build config
00:03:16.502 table: explicitly disabled via build config
00:03:16.502 pipeline: explicitly disabled via build config
00:03:16.502 graph: explicitly disabled via build config
00:03:16.502 node: explicitly disabled via build config
00:03:16.502
00:03:16.502 drivers:
00:03:16.502 common/cpt: not in enabled drivers build config
00:03:16.502 common/dpaax: not in enabled drivers build config
00:03:16.502 common/iavf: not in enabled drivers build config
00:03:16.502 common/idpf: not in enabled drivers build config
00:03:16.502 common/ionic: not in enabled drivers build config
00:03:16.502 common/mvep: not in enabled drivers build config
00:03:16.502 common/octeontx: not in enabled drivers build config
00:03:16.502 bus/auxiliary: not in enabled drivers build config
00:03:16.502 bus/cdx: not in enabled drivers build config
00:03:16.502 bus/dpaa: not in enabled drivers build config
00:03:16.502 bus/fslmc: not in enabled drivers build config
00:03:16.502 bus/ifpga: not in enabled drivers build config
00:03:16.502 bus/platform: not in enabled drivers build config
00:03:16.502 bus/uacce: not in enabled drivers build config
00:03:16.502 bus/vmbus: not in enabled drivers build config
00:03:16.502 common/cnxk: not in enabled drivers build config
00:03:16.502 common/mlx5: not in enabled drivers build config
00:03:16.502 common/nfp: not in enabled drivers build config
00:03:16.502 common/nitrox: not in enabled drivers build config
00:03:16.502 common/qat: not in enabled drivers build config
00:03:16.502 common/sfc_efx: not in enabled drivers build config
00:03:16.502 mempool/bucket: not in enabled drivers build config
00:03:16.502 mempool/cnxk: not in enabled drivers build config
00:03:16.502 mempool/dpaa: not in enabled drivers build config
00:03:16.502 mempool/dpaa2: not in enabled drivers build config
00:03:16.502 mempool/octeontx: not in enabled drivers build config
00:03:16.502 mempool/stack: not in enabled drivers build config
00:03:16.502 dma/cnxk: not in enabled drivers build config
00:03:16.502 dma/dpaa: not in enabled drivers build config
00:03:16.502 dma/dpaa2: not in enabled drivers build config
00:03:16.502 dma/hisilicon: not in enabled drivers build config
00:03:16.502 dma/idxd: not in enabled drivers build config
00:03:16.502 dma/ioat: not in enabled drivers build config
00:03:16.502 dma/skeleton: not in enabled drivers build config
00:03:16.502 net/af_packet: not in enabled drivers build config
00:03:16.502 net/af_xdp: not in enabled drivers build config
00:03:16.502 net/ark: not in enabled drivers build config
00:03:16.502 net/atlantic: not in enabled drivers build config
00:03:16.502 net/avp: not in enabled drivers build config
00:03:16.502 net/axgbe: not in enabled drivers build config
00:03:16.502 net/bnx2x: not in enabled drivers build config
00:03:16.502 net/bnxt: not in enabled drivers build config
00:03:16.502 net/bonding: not in enabled drivers build config
00:03:16.502 net/cnxk: not in enabled drivers build config
00:03:16.502 net/cpfl: not in enabled drivers build config
00:03:16.502 net/cxgbe: not in enabled drivers build config
00:03:16.502 net/dpaa: not in enabled drivers build config
00:03:16.502 net/dpaa2: not in enabled drivers build config
00:03:16.502 net/e1000: not in enabled drivers build config
00:03:16.502 net/ena: not in enabled drivers build config
00:03:16.502 net/enetc: not in enabled drivers build config
00:03:16.502 net/enetfec: not in enabled drivers build config
00:03:16.502 net/enic: not in enabled drivers build config
00:03:16.502 net/failsafe: not in enabled drivers build config
00:03:16.502 net/fm10k: not in enabled drivers build config
00:03:16.502 net/gve: not in enabled drivers build config
00:03:16.502 net/hinic: not in enabled drivers build config
00:03:16.502 net/hns3: not in enabled drivers build config
00:03:16.502 net/i40e: not in enabled drivers build config
00:03:16.502 net/iavf: not in enabled drivers build config
00:03:16.502 net/ice: not in enabled drivers build config
00:03:16.502 net/idpf: not in enabled drivers build config
00:03:16.502 net/igc: not in enabled drivers build config
00:03:16.502 net/ionic: not in enabled drivers build config
00:03:16.502 net/ipn3ke: not in enabled drivers build config
00:03:16.502 net/ixgbe: not in enabled drivers build config
00:03:16.502 net/mana: not in enabled drivers build config
00:03:16.502 net/memif: not in enabled drivers build config
00:03:16.502 net/mlx4: not in enabled drivers build config
00:03:16.502 net/mlx5: not in enabled drivers build config
00:03:16.502 net/mvneta: not in enabled drivers build config
00:03:16.502 net/mvpp2: not in enabled drivers build config
00:03:16.502 net/netvsc: not in enabled drivers build config
00:03:16.502 net/nfb: not in enabled drivers build config
00:03:16.502 net/nfp: not in enabled drivers build config
00:03:16.502 net/ngbe: not in enabled drivers build config
00:03:16.502 net/null: not in enabled drivers build config
00:03:16.502 net/octeontx: not in enabled drivers build config
00:03:16.502 net/octeon_ep: not in enabled drivers build config
00:03:16.502 net/pcap: not in enabled drivers build config
00:03:16.502 net/pfe: not in enabled drivers build config
00:03:16.502 net/qede: not in enabled drivers build config
00:03:16.502 net/ring: not in enabled drivers build config
00:03:16.502 net/sfc: not in enabled drivers build config
00:03:16.502 net/softnic: not in enabled drivers build config
00:03:16.502 net/tap: not in enabled drivers build config
00:03:16.502 net/thunderx: not in enabled drivers build config
00:03:16.502 net/txgbe: not in enabled drivers build config
00:03:16.502 net/vdev_netvsc: not in enabled drivers build config
00:03:16.502 net/vhost: not in enabled drivers build config
00:03:16.502 net/virtio: not in enabled drivers build config
00:03:16.502 net/vmxnet3: not in enabled drivers build config
00:03:16.502 raw/*: missing internal dependency, "rawdev"
00:03:16.502 crypto/armv8: not in enabled drivers build config
00:03:16.502 crypto/bcmfs: not in enabled drivers build config
00:03:16.502 crypto/caam_jr: not in enabled drivers build config
00:03:16.502 crypto/ccp: not in enabled drivers build config
00:03:16.502 crypto/cnxk: not in enabled drivers build config
00:03:16.502 crypto/dpaa_sec: not in enabled drivers build config
00:03:16.502 crypto/dpaa2_sec: not in enabled drivers build config
00:03:16.502 crypto/ipsec_mb: not in enabled drivers build config
00:03:16.502 crypto/mlx5: not in enabled drivers build config
00:03:16.502 crypto/mvsam: not in enabled drivers build config
00:03:16.502 crypto/nitrox: not in enabled drivers build config
00:03:16.502 crypto/null: not in enabled drivers build config
00:03:16.502 crypto/octeontx: not in enabled drivers build config
00:03:16.502 crypto/openssl: not in enabled drivers build config
00:03:16.502 crypto/scheduler: not in enabled drivers build config
00:03:16.502 crypto/uadk: not in enabled drivers build config
00:03:16.502 crypto/virtio: not in enabled drivers build config
00:03:16.502 compress/isal: not in enabled drivers build config
00:03:16.502 compress/mlx5: not in enabled drivers build config
00:03:16.502 compress/nitrox: not in enabled drivers build config
00:03:16.502 compress/octeontx: not in enabled drivers build config
00:03:16.502 compress/zlib: not in enabled drivers build config
00:03:16.502 regex/*: missing internal dependency, "regexdev"
00:03:16.502 ml/*: missing internal dependency, "mldev"
00:03:16.502 vdpa/ifc: not in enabled drivers build config
00:03:16.502 vdpa/mlx5: not in enabled drivers build config
00:03:16.502 vdpa/nfp: not in enabled drivers build config
00:03:16.502 vdpa/sfc: not in enabled drivers build config
00:03:16.502 event/*: missing internal dependency, "eventdev"
00:03:16.502 baseband/*: missing internal dependency, "bbdev"
00:03:16.502 gpu/*: missing internal dependency, "gpudev"
00:03:16.502
00:03:16.502
00:03:16.502 Build targets in project: 85
00:03:16.502
00:03:16.502 DPDK 24.03.0
00:03:16.502
00:03:16.502 User defined options
00:03:16.502 buildtype : debug
00:03:16.502 default_library : shared
00:03:16.502 libdir : lib
00:03:16.502 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build
00:03:16.502 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror
00:03:16.502 c_link_args :
00:03:16.502 cpu_instruction_set: native
00:03:16.502 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test
00:03:16.502 disable_libs : acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table
00:03:16.502 enable_docs : false
00:03:16.502 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring
00:03:16.502 enable_kmods : false
00:03:16.502 max_lcores : 128
00:03:16.502 tests : false
00:03:16.502
00:03:16.502 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:03:16.502 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp'
00:03:16.774 [1/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o
00:03:16.774 [2/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o
00:03:16.774 [3/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o
00:03:16.774 [4/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o
00:03:16.774 [5/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o
00:03:16.774 [6/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o
00:03:16.774 [7/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o
00:03:16.774 [8/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o
00:03:16.774 [9/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o
00:03:16.774 [10/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o
00:03:16.774 [11/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o
00:03:16.774 [12/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o
00:03:16.774 [13/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o
00:03:16.774 [14/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o
00:03:16.774 [15/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o
00:03:16.774 [16/268] Linking static target lib/librte_kvargs.a
00:03:16.774 [17/268] Compiling C object lib/librte_log.a.p/log_log.c.o
00:03:16.774 [18/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o
00:03:16.774 [19/268] Linking static target lib/librte_log.a
00:03:17.030 [20/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o
00:03:17.030 [21/268] Linking static target lib/librte_pci.a
00:03:17.030 [22/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o
00:03:17.030 [23/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o
00:03:17.030 [24/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o
00:03:17.030 [25/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o
00:03:17.030 [26/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o
00:03:17.288 [27/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o
00:03:17.288 [28/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o
00:03:17.288 [29/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o
00:03:17.288 [30/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o
00:03:17.288 [31/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o
00:03:17.288 [32/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o
00:03:17.288 [33/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o
00:03:17.288 [34/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o
00:03:17.288 [35/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o
00:03:17.288 [36/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o
00:03:17.288 [37/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o
00:03:17.288 [38/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o
00:03:17.288 [39/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o
00:03:17.288 [40/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o
00:03:17.288 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o
00:03:17.288 [42/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o
00:03:17.288 [43/268] Linking static target lib/net/libnet_crc_avx512_lib.a
00:03:17.288 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o
00:03:17.288 [45/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o
00:03:17.288 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o
00:03:17.288 [47/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o
00:03:17.288 [48/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o
00:03:17.288 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o
00:03:17.288 [50/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o
00:03:17.288 [51/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o
00:03:17.288 [52/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o
00:03:17.288 [53/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o
00:03:17.288 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o
00:03:17.288 [55/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o
00:03:17.288 [56/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o
00:03:17.288 [57/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o
00:03:17.288 [58/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o
00:03:17.288 [59/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o
00:03:17.288 [60/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o
00:03:17.288 [61/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o
00:03:17.288 [62/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o
00:03:17.288 [63/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o
00:03:17.288 [64/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o
00:03:17.288 [65/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o
00:03:17.288 [66/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o
00:03:17.288 [67/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o
00:03:17.288 [68/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o
00:03:17.288 [69/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o
00:03:17.288 [70/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o
00:03:17.288 [71/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o
00:03:17.288 [72/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o
00:03:17.288 [73/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o
00:03:17.288 [74/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o
00:03:17.288 [75/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o
00:03:17.288 [76/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o
00:03:17.288 [77/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o
00:03:17.288 [78/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o
00:03:17.288 [79/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o
00:03:17.288 [80/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o
00:03:17.288 [81/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o
00:03:17.288 [82/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o
00:03:17.288 [83/268] Linking static target lib/librte_telemetry.a
00:03:17.288 [84/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o
00:03:17.288 [85/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o
00:03:17.288 [86/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o
00:03:17.288 [87/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o
00:03:17.288 [88/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o
00:03:17.288 [89/268] Linking static target lib/librte_meter.a
00:03:17.288 [90/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output)
00:03:17.288 [91/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o
00:03:17.288 [92/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o
00:03:17.288 [93/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o
00:03:17.288 [94/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o
00:03:17.288 [95/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o
00:03:17.288 [96/268] Linking static target lib/librte_ring.a
00:03:17.288 [97/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o
00:03:17.288 [98/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o
00:03:17.288 [99/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o
00:03:17.288 [100/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o
00:03:17.288 [101/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o
00:03:17.288 [102/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o
00:03:17.288 [103/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o
00:03:17.288 [104/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output)
00:03:17.288 [105/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o
00:03:17.288 [106/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o
00:03:17.547 [107/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o
00:03:17.547 [108/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o
00:03:17.547 [109/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o
00:03:17.547 [110/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o
00:03:17.547 [111/268] Linking static target lib/librte_net.a
00:03:17.547 [112/268] Linking static target lib/librte_rcu.a
00:03:17.547 [113/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o
00:03:17.547 [114/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o
00:03:17.547 [115/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o
00:03:17.547 [116/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o
00:03:17.547 [117/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o
00:03:17.547 [118/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o
00:03:17.547 [119/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o
00:03:17.547 [120/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o
00:03:17.547 [121/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o
00:03:17.547 [122/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o
00:03:17.547 [123/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o
00:03:17.547 [124/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o
00:03:17.547 [125/268] Linking static target lib/librte_mempool.a
00:03:17.547 [126/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o
00:03:17.547 [127/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o
00:03:17.547 [128/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o
00:03:17.547 [129/268] Linking static target lib/librte_eal.a
00:03:17.547 [130/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o
00:03:17.547 [131/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o
00:03:17.547 [132/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output)
00:03:17.547 [133/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o
00:03:17.547 [134/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o
00:03:17.547 [135/268] Linking static target lib/librte_cmdline.a
00:03:17.547 [136/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output)
00:03:17.547 [137/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o
00:03:17.547 [138/268] Linking target lib/librte_log.so.24.1
00:03:17.547 [139/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o
00:03:17.547 [140/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o
00:03:17.547 [141/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output)
00:03:17.547 [142/268] Linking static target lib/librte_mbuf.a
00:03:17.547 [143/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o
00:03:17.547 [144/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o
00:03:17.547 [145/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o
00:03:17.547 [146/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o
00:03:17.547 [147/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o
00:03:17.547 [148/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output)
00:03:17.805 [149/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o
00:03:17.805 [150/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o
00:03:17.805 [151/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o
00:03:17.805 [152/268] Compiling C object 
lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:03:17.805 [153/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:03:17.805 [154/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:03:17.805 [155/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:03:17.805 [156/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:03:17.805 [157/268] Linking static target lib/librte_timer.a 00:03:17.805 [158/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:03:17.805 [159/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:03:17.805 [160/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:03:17.805 [161/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:03:17.805 [162/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:03:17.805 [163/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:03:17.805 [164/268] Linking static target lib/librte_compressdev.a 00:03:17.805 [165/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:03:17.805 [166/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:03:17.805 [167/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:03:17.805 [168/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:03:17.805 [169/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:03:17.805 [170/268] Linking target lib/librte_kvargs.so.24.1 00:03:17.805 [171/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:03:17.805 [172/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:03:17.805 [173/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:03:17.805 [174/268] Linking target lib/librte_telemetry.so.24.1 00:03:17.805 [175/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:03:17.805 [176/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:03:17.805 [177/268] Linking static target lib/librte_reorder.a 00:03:17.805 [178/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:03:17.805 [179/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:03:17.805 [180/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:03:17.805 [181/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:03:17.805 [182/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:03:17.805 [183/268] Linking static target lib/librte_dmadev.a 00:03:17.805 [184/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:03:17.805 [185/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:03:17.805 [186/268] Linking static target lib/librte_power.a 00:03:17.805 [187/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:03:17.805 [188/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:03:18.064 [189/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:03:18.064 [190/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:03:18.064 [191/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:03:18.064 [192/268] Compiling C object 
lib/librte_security.a.p/security_rte_security.c.o 00:03:18.064 [193/268] Linking static target lib/librte_security.a 00:03:18.064 [194/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:03:18.064 [195/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:03:18.064 [196/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:18.064 [197/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:18.064 [198/268] Linking static target drivers/librte_bus_vdev.a 00:03:18.064 [199/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:18.064 [200/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:03:18.064 [201/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:18.064 [202/268] Linking static target drivers/librte_mempool_ring.a 00:03:18.064 [203/268] Linking static target lib/librte_hash.a 00:03:18.064 [204/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:03:18.064 [205/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:03:18.064 [206/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:03:18.064 [207/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:18.064 [208/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:18.064 [209/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:03:18.064 [210/268] Linking static target drivers/librte_bus_pci.a 00:03:18.064 [211/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:03:18.322 [212/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:03:18.322 [213/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:03:18.322 [214/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:03:18.322 [215/268] Linking static target lib/librte_cryptodev.a 00:03:18.322 [216/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:03:18.322 [217/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:03:18.322 [218/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:18.322 [219/268] Linking static target lib/librte_ethdev.a 00:03:18.322 [220/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:18.613 [221/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:18.613 [222/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:03:18.613 [223/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:03:18.613 [224/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:03:18.613 [225/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:03:18.872 [226/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:03:18.872 [227/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:03:19.441 [228/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:03:19.700 [229/268] Linking 
static target lib/librte_vhost.a 00:03:19.959 [230/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:21.337 [231/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:03:26.615 [232/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:27.183 [233/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:03:27.183 [234/268] Linking target lib/librte_eal.so.24.1 00:03:27.183 [235/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:03:27.183 [236/268] Linking target lib/librte_pci.so.24.1 00:03:27.183 [237/268] Linking target lib/librte_dmadev.so.24.1 00:03:27.183 [238/268] Linking target drivers/librte_bus_vdev.so.24.1 00:03:27.183 [239/268] Linking target lib/librte_ring.so.24.1 00:03:27.183 [240/268] Linking target lib/librte_meter.so.24.1 00:03:27.183 [241/268] Linking target lib/librte_timer.so.24.1 00:03:27.442 [242/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:03:27.442 [243/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:03:27.442 [244/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:03:27.442 [245/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:03:27.442 [246/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:03:27.442 [247/268] Linking target drivers/librte_bus_pci.so.24.1 00:03:27.442 [248/268] Linking target lib/librte_rcu.so.24.1 00:03:27.442 [249/268] Linking target lib/librte_mempool.so.24.1 00:03:27.701 [250/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:03:27.701 [251/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:03:27.701 [252/268] Linking target drivers/librte_mempool_ring.so.24.1 00:03:27.701 [253/268] Linking target lib/librte_mbuf.so.24.1 00:03:27.701 [254/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:03:27.701 [255/268] Linking target lib/librte_net.so.24.1 00:03:27.701 [256/268] Linking target lib/librte_reorder.so.24.1 00:03:27.701 [257/268] Linking target lib/librte_compressdev.so.24.1 00:03:27.701 [258/268] Linking target lib/librte_cryptodev.so.24.1 00:03:27.961 [259/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:03:27.961 [260/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:03:27.961 [261/268] Linking target lib/librte_hash.so.24.1 00:03:27.961 [262/268] Linking target lib/librte_cmdline.so.24.1 00:03:27.961 [263/268] Linking target lib/librte_security.so.24.1 00:03:27.961 [264/268] Linking target lib/librte_ethdev.so.24.1 00:03:28.219 [265/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:03:28.219 [266/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:03:28.219 [267/268] Linking target lib/librte_vhost.so.24.1 00:03:28.219 [268/268] Linking target lib/librte_power.so.24.1 00:03:28.219 INFO: autodetecting backend as ninja 00:03:28.219 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp -j 96 00:03:29.154 CC lib/ut_mock/mock.o 00:03:29.154 CC lib/ut/ut.o 00:03:29.154 CC lib/log/log.o 00:03:29.154 CC lib/log/log_flags.o 
00:03:29.154 CC lib/log/log_deprecated.o 00:03:29.154 LIB libspdk_ut_mock.a 00:03:29.154 LIB libspdk_ut.a 00:03:29.412 LIB libspdk_log.a 00:03:29.412 SO libspdk_ut_mock.so.6.0 00:03:29.412 SO libspdk_ut.so.2.0 00:03:29.412 SO libspdk_log.so.7.0 00:03:29.412 SYMLINK libspdk_ut_mock.so 00:03:29.412 SYMLINK libspdk_ut.so 00:03:29.412 SYMLINK libspdk_log.so 00:03:29.670 CC lib/ioat/ioat.o 00:03:29.670 CXX lib/trace_parser/trace.o 00:03:29.670 CC lib/util/base64.o 00:03:29.670 CC lib/util/bit_array.o 00:03:29.670 CC lib/util/cpuset.o 00:03:29.670 CC lib/util/crc16.o 00:03:29.670 CC lib/util/crc32.o 00:03:29.670 CC lib/util/crc32c.o 00:03:29.670 CC lib/util/crc64.o 00:03:29.670 CC lib/util/crc32_ieee.o 00:03:29.670 CC lib/util/fd.o 00:03:29.670 CC lib/util/file.o 00:03:29.670 CC lib/util/dif.o 00:03:29.670 CC lib/util/hexlify.o 00:03:29.670 CC lib/util/iov.o 00:03:29.670 CC lib/util/math.o 00:03:29.670 CC lib/util/pipe.o 00:03:29.670 CC lib/util/strerror_tls.o 00:03:29.670 CC lib/dma/dma.o 00:03:29.670 CC lib/util/string.o 00:03:29.670 CC lib/util/uuid.o 00:03:29.670 CC lib/util/fd_group.o 00:03:29.670 CC lib/util/xor.o 00:03:29.670 CC lib/util/zipf.o 00:03:29.929 CC lib/vfio_user/host/vfio_user_pci.o 00:03:29.929 CC lib/vfio_user/host/vfio_user.o 00:03:29.929 LIB libspdk_dma.a 00:03:29.929 LIB libspdk_ioat.a 00:03:29.929 SO libspdk_dma.so.4.0 00:03:29.929 SO libspdk_ioat.so.7.0 00:03:29.929 SYMLINK libspdk_dma.so 00:03:29.930 SYMLINK libspdk_ioat.so 00:03:29.930 LIB libspdk_vfio_user.a 00:03:30.188 SO libspdk_vfio_user.so.5.0 00:03:30.188 LIB libspdk_util.a 00:03:30.188 SYMLINK libspdk_vfio_user.so 00:03:30.188 SO libspdk_util.so.9.1 00:03:30.188 SYMLINK libspdk_util.so 00:03:30.448 LIB libspdk_trace_parser.a 00:03:30.448 SO libspdk_trace_parser.so.5.0 00:03:30.448 SYMLINK libspdk_trace_parser.so 00:03:30.448 CC lib/conf/conf.o 00:03:30.747 CC lib/json/json_parse.o 00:03:30.747 CC lib/json/json_util.o 00:03:30.747 CC lib/json/json_write.o 00:03:30.747 CC lib/vmd/vmd.o 00:03:30.747 CC lib/vmd/led.o 00:03:30.747 CC lib/env_dpdk/env.o 00:03:30.747 CC lib/env_dpdk/pci.o 00:03:30.747 CC lib/env_dpdk/memory.o 00:03:30.747 CC lib/env_dpdk/init.o 00:03:30.747 CC lib/env_dpdk/pci_virtio.o 00:03:30.747 CC lib/env_dpdk/threads.o 00:03:30.747 CC lib/env_dpdk/pci_ioat.o 00:03:30.747 CC lib/idxd/idxd.o 00:03:30.747 CC lib/rdma_utils/rdma_utils.o 00:03:30.747 CC lib/env_dpdk/pci_vmd.o 00:03:30.747 CC lib/idxd/idxd_user.o 00:03:30.747 CC lib/env_dpdk/pci_event.o 00:03:30.747 CC lib/rdma_provider/common.o 00:03:30.747 CC lib/idxd/idxd_kernel.o 00:03:30.747 CC lib/env_dpdk/pci_idxd.o 00:03:30.747 CC lib/rdma_provider/rdma_provider_verbs.o 00:03:30.747 CC lib/env_dpdk/sigbus_handler.o 00:03:30.747 CC lib/env_dpdk/pci_dpdk.o 00:03:30.747 CC lib/env_dpdk/pci_dpdk_2207.o 00:03:30.747 CC lib/env_dpdk/pci_dpdk_2211.o 00:03:30.747 LIB libspdk_conf.a 00:03:30.747 LIB libspdk_rdma_provider.a 00:03:30.747 SO libspdk_conf.so.6.0 00:03:30.747 SO libspdk_rdma_provider.so.6.0 00:03:30.747 LIB libspdk_rdma_utils.a 00:03:30.747 LIB libspdk_json.a 00:03:31.030 SYMLINK libspdk_conf.so 00:03:31.030 SO libspdk_rdma_utils.so.1.0 00:03:31.030 SO libspdk_json.so.6.0 00:03:31.030 SYMLINK libspdk_rdma_provider.so 00:03:31.030 SYMLINK libspdk_rdma_utils.so 00:03:31.030 SYMLINK libspdk_json.so 00:03:31.030 LIB libspdk_idxd.a 00:03:31.030 SO libspdk_idxd.so.12.0 00:03:31.030 LIB libspdk_vmd.a 00:03:31.030 SO libspdk_vmd.so.6.0 00:03:31.288 SYMLINK libspdk_idxd.so 00:03:31.288 SYMLINK libspdk_vmd.so 00:03:31.288 CC 
lib/jsonrpc/jsonrpc_server.o 00:03:31.288 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:03:31.288 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:03:31.288 CC lib/jsonrpc/jsonrpc_client.o 00:03:31.547 LIB libspdk_jsonrpc.a 00:03:31.547 SO libspdk_jsonrpc.so.6.0 00:03:31.547 SYMLINK libspdk_jsonrpc.so 00:03:31.547 LIB libspdk_env_dpdk.a 00:03:31.806 SO libspdk_env_dpdk.so.14.1 00:03:31.806 SYMLINK libspdk_env_dpdk.so 00:03:31.806 CC lib/rpc/rpc.o 00:03:32.063 LIB libspdk_rpc.a 00:03:32.063 SO libspdk_rpc.so.6.0 00:03:32.063 SYMLINK libspdk_rpc.so 00:03:32.320 CC lib/notify/notify.o 00:03:32.320 CC lib/notify/notify_rpc.o 00:03:32.320 CC lib/trace/trace.o 00:03:32.320 CC lib/trace/trace_flags.o 00:03:32.320 CC lib/trace/trace_rpc.o 00:03:32.578 CC lib/keyring/keyring.o 00:03:32.579 CC lib/keyring/keyring_rpc.o 00:03:32.579 LIB libspdk_notify.a 00:03:32.579 SO libspdk_notify.so.6.0 00:03:32.579 SYMLINK libspdk_notify.so 00:03:32.579 LIB libspdk_keyring.a 00:03:32.579 LIB libspdk_trace.a 00:03:32.579 SO libspdk_keyring.so.1.0 00:03:32.579 SO libspdk_trace.so.10.0 00:03:32.836 SYMLINK libspdk_keyring.so 00:03:32.836 SYMLINK libspdk_trace.so 00:03:33.093 CC lib/thread/thread.o 00:03:33.093 CC lib/thread/iobuf.o 00:03:33.093 CC lib/sock/sock.o 00:03:33.093 CC lib/sock/sock_rpc.o 00:03:33.351 LIB libspdk_sock.a 00:03:33.351 SO libspdk_sock.so.10.0 00:03:33.351 SYMLINK libspdk_sock.so 00:03:33.608 CC lib/nvme/nvme_ctrlr_cmd.o 00:03:33.608 CC lib/nvme/nvme_ctrlr.o 00:03:33.608 CC lib/nvme/nvme_fabric.o 00:03:33.608 CC lib/nvme/nvme_ns_cmd.o 00:03:33.865 CC lib/nvme/nvme_ns.o 00:03:33.865 CC lib/nvme/nvme_pcie_common.o 00:03:33.865 CC lib/nvme/nvme_pcie.o 00:03:33.865 CC lib/nvme/nvme_qpair.o 00:03:33.865 CC lib/nvme/nvme.o 00:03:33.865 CC lib/nvme/nvme_quirks.o 00:03:33.865 CC lib/nvme/nvme_transport.o 00:03:33.865 CC lib/nvme/nvme_discovery.o 00:03:33.865 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:03:33.865 CC lib/nvme/nvme_opal.o 00:03:33.865 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:03:33.865 CC lib/nvme/nvme_tcp.o 00:03:33.865 CC lib/nvme/nvme_io_msg.o 00:03:33.865 CC lib/nvme/nvme_poll_group.o 00:03:33.865 CC lib/nvme/nvme_zns.o 00:03:33.865 CC lib/nvme/nvme_stubs.o 00:03:33.865 CC lib/nvme/nvme_auth.o 00:03:33.865 CC lib/nvme/nvme_cuse.o 00:03:33.865 CC lib/nvme/nvme_rdma.o 00:03:33.865 CC lib/nvme/nvme_vfio_user.o 00:03:34.124 LIB libspdk_thread.a 00:03:34.124 SO libspdk_thread.so.10.1 00:03:34.124 SYMLINK libspdk_thread.so 00:03:34.383 CC lib/vfu_tgt/tgt_endpoint.o 00:03:34.383 CC lib/vfu_tgt/tgt_rpc.o 00:03:34.383 CC lib/blob/blobstore.o 00:03:34.383 CC lib/init/subsystem_rpc.o 00:03:34.383 CC lib/blob/request.o 00:03:34.383 CC lib/init/json_config.o 00:03:34.383 CC lib/accel/accel_rpc.o 00:03:34.383 CC lib/accel/accel_sw.o 00:03:34.383 CC lib/init/subsystem.o 00:03:34.383 CC lib/virtio/virtio.o 00:03:34.383 CC lib/accel/accel.o 00:03:34.383 CC lib/blob/zeroes.o 00:03:34.383 CC lib/blob/blob_bs_dev.o 00:03:34.383 CC lib/virtio/virtio_vhost_user.o 00:03:34.383 CC lib/init/rpc.o 00:03:34.383 CC lib/virtio/virtio_vfio_user.o 00:03:34.383 CC lib/virtio/virtio_pci.o 00:03:34.657 LIB libspdk_init.a 00:03:34.657 LIB libspdk_vfu_tgt.a 00:03:34.657 LIB libspdk_virtio.a 00:03:34.657 SO libspdk_init.so.5.0 00:03:34.657 SO libspdk_vfu_tgt.so.3.0 00:03:34.657 SO libspdk_virtio.so.7.0 00:03:34.913 SYMLINK libspdk_init.so 00:03:34.913 SYMLINK libspdk_vfu_tgt.so 00:03:34.913 SYMLINK libspdk_virtio.so 00:03:35.170 CC lib/event/app.o 00:03:35.170 CC lib/event/reactor.o 00:03:35.170 CC lib/event/log_rpc.o 00:03:35.170 CC 
lib/event/app_rpc.o 00:03:35.170 CC lib/event/scheduler_static.o 00:03:35.170 LIB libspdk_accel.a 00:03:35.170 SO libspdk_accel.so.15.1 00:03:35.170 SYMLINK libspdk_accel.so 00:03:35.429 LIB libspdk_nvme.a 00:03:35.429 LIB libspdk_event.a 00:03:35.429 SO libspdk_nvme.so.13.1 00:03:35.429 SO libspdk_event.so.14.0 00:03:35.429 SYMLINK libspdk_event.so 00:03:35.686 CC lib/bdev/bdev.o 00:03:35.686 CC lib/bdev/bdev_rpc.o 00:03:35.686 CC lib/bdev/bdev_zone.o 00:03:35.686 CC lib/bdev/scsi_nvme.o 00:03:35.686 CC lib/bdev/part.o 00:03:35.686 SYMLINK libspdk_nvme.so 00:03:36.622 LIB libspdk_blob.a 00:03:36.622 SO libspdk_blob.so.11.0 00:03:36.622 SYMLINK libspdk_blob.so 00:03:36.881 CC lib/lvol/lvol.o 00:03:36.881 CC lib/blobfs/blobfs.o 00:03:36.881 CC lib/blobfs/tree.o 00:03:37.449 LIB libspdk_bdev.a 00:03:37.449 SO libspdk_bdev.so.15.1 00:03:37.449 SYMLINK libspdk_bdev.so 00:03:37.449 LIB libspdk_blobfs.a 00:03:37.449 SO libspdk_blobfs.so.10.0 00:03:37.709 LIB libspdk_lvol.a 00:03:37.709 SYMLINK libspdk_blobfs.so 00:03:37.709 SO libspdk_lvol.so.10.0 00:03:37.709 SYMLINK libspdk_lvol.so 00:03:37.709 CC lib/nbd/nbd.o 00:03:37.709 CC lib/nbd/nbd_rpc.o 00:03:37.709 CC lib/scsi/dev.o 00:03:37.709 CC lib/nvmf/ctrlr.o 00:03:37.709 CC lib/scsi/port.o 00:03:37.709 CC lib/nvmf/ctrlr_discovery.o 00:03:37.709 CC lib/scsi/lun.o 00:03:37.709 CC lib/nvmf/ctrlr_bdev.o 00:03:37.709 CC lib/nvmf/subsystem.o 00:03:37.709 CC lib/scsi/scsi.o 00:03:37.709 CC lib/nvmf/nvmf.o 00:03:37.709 CC lib/scsi/scsi_bdev.o 00:03:37.709 CC lib/scsi/scsi_rpc.o 00:03:37.709 CC lib/nvmf/nvmf_rpc.o 00:03:37.709 CC lib/scsi/scsi_pr.o 00:03:37.709 CC lib/ublk/ublk.o 00:03:37.709 CC lib/nvmf/transport.o 00:03:37.709 CC lib/ublk/ublk_rpc.o 00:03:37.709 CC lib/nvmf/tcp.o 00:03:37.709 CC lib/scsi/task.o 00:03:37.709 CC lib/nvmf/stubs.o 00:03:37.709 CC lib/nvmf/mdns_server.o 00:03:37.709 CC lib/ftl/ftl_core.o 00:03:37.709 CC lib/nvmf/vfio_user.o 00:03:37.709 CC lib/ftl/ftl_layout.o 00:03:37.709 CC lib/ftl/ftl_init.o 00:03:37.709 CC lib/nvmf/rdma.o 00:03:37.709 CC lib/nvmf/auth.o 00:03:37.709 CC lib/ftl/ftl_debug.o 00:03:37.709 CC lib/ftl/ftl_io.o 00:03:37.709 CC lib/ftl/ftl_sb.o 00:03:37.709 CC lib/ftl/ftl_l2p.o 00:03:37.709 CC lib/ftl/ftl_l2p_flat.o 00:03:37.709 CC lib/ftl/ftl_nv_cache.o 00:03:37.709 CC lib/ftl/ftl_band.o 00:03:37.709 CC lib/ftl/ftl_band_ops.o 00:03:37.709 CC lib/ftl/ftl_writer.o 00:03:37.709 CC lib/ftl/ftl_rq.o 00:03:37.709 CC lib/ftl/ftl_reloc.o 00:03:37.709 CC lib/ftl/ftl_l2p_cache.o 00:03:37.709 CC lib/ftl/mngt/ftl_mngt.o 00:03:37.709 CC lib/ftl/ftl_p2l.o 00:03:37.709 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:03:37.709 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:03:37.709 CC lib/ftl/mngt/ftl_mngt_startup.o 00:03:37.709 CC lib/ftl/mngt/ftl_mngt_md.o 00:03:37.709 CC lib/ftl/mngt/ftl_mngt_misc.o 00:03:37.709 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:03:37.709 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:03:37.709 CC lib/ftl/mngt/ftl_mngt_band.o 00:03:37.709 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:03:37.709 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:03:37.709 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:03:37.709 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:03:37.709 CC lib/ftl/utils/ftl_conf.o 00:03:37.709 CC lib/ftl/utils/ftl_md.o 00:03:37.709 CC lib/ftl/utils/ftl_mempool.o 00:03:37.709 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:03:37.709 CC lib/ftl/utils/ftl_property.o 00:03:37.709 CC lib/ftl/utils/ftl_bitmap.o 00:03:37.709 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:03:37.709 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:03:37.709 CC lib/ftl/upgrade/ftl_band_upgrade.o 
00:03:37.709 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:03:37.709 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:03:37.709 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:03:37.709 CC lib/ftl/upgrade/ftl_sb_v3.o 00:03:37.709 CC lib/ftl/nvc/ftl_nvc_dev.o 00:03:37.709 CC lib/ftl/upgrade/ftl_sb_v5.o 00:03:37.709 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:03:37.709 CC lib/ftl/base/ftl_base_dev.o 00:03:37.709 CC lib/ftl/base/ftl_base_bdev.o 00:03:37.709 CC lib/ftl/ftl_trace.o 00:03:38.275 LIB libspdk_scsi.a 00:03:38.275 SO libspdk_scsi.so.9.0 00:03:38.533 SYMLINK libspdk_scsi.so 00:03:38.533 LIB libspdk_nbd.a 00:03:38.533 LIB libspdk_ublk.a 00:03:38.533 SO libspdk_nbd.so.7.0 00:03:38.533 SO libspdk_ublk.so.3.0 00:03:38.533 SYMLINK libspdk_nbd.so 00:03:38.533 SYMLINK libspdk_ublk.so 00:03:38.791 LIB libspdk_ftl.a 00:03:38.791 CC lib/iscsi/conn.o 00:03:38.791 CC lib/iscsi/iscsi.o 00:03:38.791 CC lib/iscsi/init_grp.o 00:03:38.791 CC lib/iscsi/md5.o 00:03:38.791 CC lib/vhost/vhost.o 00:03:38.791 CC lib/iscsi/param.o 00:03:38.791 CC lib/iscsi/portal_grp.o 00:03:38.791 CC lib/vhost/vhost_rpc.o 00:03:38.791 CC lib/iscsi/tgt_node.o 00:03:38.791 CC lib/vhost/vhost_blk.o 00:03:38.791 CC lib/vhost/vhost_scsi.o 00:03:38.791 CC lib/iscsi/iscsi_subsystem.o 00:03:38.791 CC lib/iscsi/iscsi_rpc.o 00:03:38.791 CC lib/vhost/rte_vhost_user.o 00:03:38.791 CC lib/iscsi/task.o 00:03:38.791 SO libspdk_ftl.so.9.0 00:03:39.049 SYMLINK libspdk_ftl.so 00:03:39.307 LIB libspdk_nvmf.a 00:03:39.566 SO libspdk_nvmf.so.19.0 00:03:39.566 LIB libspdk_vhost.a 00:03:39.566 SO libspdk_vhost.so.8.0 00:03:39.566 SYMLINK libspdk_nvmf.so 00:03:39.566 SYMLINK libspdk_vhost.so 00:03:39.824 LIB libspdk_iscsi.a 00:03:39.824 SO libspdk_iscsi.so.8.0 00:03:39.824 SYMLINK libspdk_iscsi.so 00:03:40.391 CC module/vfu_device/vfu_virtio.o 00:03:40.391 CC module/vfu_device/vfu_virtio_blk.o 00:03:40.391 CC module/vfu_device/vfu_virtio_rpc.o 00:03:40.391 CC module/vfu_device/vfu_virtio_scsi.o 00:03:40.391 CC module/env_dpdk/env_dpdk_rpc.o 00:03:40.391 CC module/sock/posix/posix.o 00:03:40.391 CC module/accel/iaa/accel_iaa_rpc.o 00:03:40.391 CC module/accel/iaa/accel_iaa.o 00:03:40.391 CC module/keyring/linux/keyring.o 00:03:40.649 CC module/accel/ioat/accel_ioat_rpc.o 00:03:40.649 CC module/keyring/linux/keyring_rpc.o 00:03:40.649 CC module/accel/ioat/accel_ioat.o 00:03:40.649 CC module/keyring/file/keyring.o 00:03:40.649 CC module/keyring/file/keyring_rpc.o 00:03:40.649 CC module/scheduler/dynamic/scheduler_dynamic.o 00:03:40.649 CC module/blob/bdev/blob_bdev.o 00:03:40.649 LIB libspdk_env_dpdk_rpc.a 00:03:40.649 CC module/accel/error/accel_error.o 00:03:40.649 CC module/accel/error/accel_error_rpc.o 00:03:40.649 CC module/accel/dsa/accel_dsa.o 00:03:40.649 CC module/scheduler/gscheduler/gscheduler.o 00:03:40.649 CC module/accel/dsa/accel_dsa_rpc.o 00:03:40.649 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:03:40.649 SO libspdk_env_dpdk_rpc.so.6.0 00:03:40.649 SYMLINK libspdk_env_dpdk_rpc.so 00:03:40.649 LIB libspdk_keyring_linux.a 00:03:40.649 LIB libspdk_keyring_file.a 00:03:40.649 SO libspdk_keyring_linux.so.1.0 00:03:40.649 LIB libspdk_scheduler_dpdk_governor.a 00:03:40.649 LIB libspdk_scheduler_gscheduler.a 00:03:40.649 LIB libspdk_accel_error.a 00:03:40.649 LIB libspdk_scheduler_dynamic.a 00:03:40.649 LIB libspdk_accel_iaa.a 00:03:40.649 SO libspdk_keyring_file.so.1.0 00:03:40.649 LIB libspdk_accel_ioat.a 00:03:40.649 SO libspdk_scheduler_dpdk_governor.so.4.0 00:03:40.649 SO libspdk_scheduler_gscheduler.so.4.0 00:03:40.649 SO libspdk_accel_error.so.2.0 
00:03:40.649 SO libspdk_scheduler_dynamic.so.4.0 00:03:40.649 SYMLINK libspdk_keyring_linux.so 00:03:40.649 SO libspdk_accel_ioat.so.6.0 00:03:40.650 SO libspdk_accel_iaa.so.3.0 00:03:40.650 LIB libspdk_blob_bdev.a 00:03:40.650 SYMLINK libspdk_keyring_file.so 00:03:40.650 LIB libspdk_accel_dsa.a 00:03:40.908 SYMLINK libspdk_scheduler_dpdk_governor.so 00:03:40.908 SYMLINK libspdk_scheduler_gscheduler.so 00:03:40.908 SO libspdk_blob_bdev.so.11.0 00:03:40.908 SO libspdk_accel_dsa.so.5.0 00:03:40.908 SYMLINK libspdk_accel_ioat.so 00:03:40.908 SYMLINK libspdk_scheduler_dynamic.so 00:03:40.908 SYMLINK libspdk_accel_error.so 00:03:40.908 SYMLINK libspdk_accel_iaa.so 00:03:40.908 SYMLINK libspdk_blob_bdev.so 00:03:40.908 SYMLINK libspdk_accel_dsa.so 00:03:40.908 LIB libspdk_vfu_device.a 00:03:40.908 SO libspdk_vfu_device.so.3.0 00:03:40.908 SYMLINK libspdk_vfu_device.so 00:03:41.167 LIB libspdk_sock_posix.a 00:03:41.167 SO libspdk_sock_posix.so.6.0 00:03:41.167 SYMLINK libspdk_sock_posix.so 00:03:41.167 CC module/bdev/gpt/gpt.o 00:03:41.167 CC module/bdev/raid/bdev_raid.o 00:03:41.167 CC module/bdev/raid/bdev_raid_sb.o 00:03:41.167 CC module/bdev/gpt/vbdev_gpt.o 00:03:41.167 CC module/bdev/raid/bdev_raid_rpc.o 00:03:41.167 CC module/bdev/raid/raid0.o 00:03:41.167 CC module/bdev/ftl/bdev_ftl.o 00:03:41.167 CC module/bdev/raid/raid1.o 00:03:41.167 CC module/bdev/raid/concat.o 00:03:41.167 CC module/bdev/ftl/bdev_ftl_rpc.o 00:03:41.167 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:03:41.167 CC module/blobfs/bdev/blobfs_bdev.o 00:03:41.167 CC module/bdev/error/vbdev_error_rpc.o 00:03:41.167 CC module/bdev/error/vbdev_error.o 00:03:41.167 CC module/bdev/delay/vbdev_delay_rpc.o 00:03:41.167 CC module/bdev/delay/vbdev_delay.o 00:03:41.167 CC module/bdev/nvme/bdev_nvme.o 00:03:41.167 CC module/bdev/nvme/nvme_rpc.o 00:03:41.167 CC module/bdev/nvme/bdev_nvme_rpc.o 00:03:41.167 CC module/bdev/nvme/bdev_mdns_client.o 00:03:41.167 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:03:41.167 CC module/bdev/nvme/vbdev_opal_rpc.o 00:03:41.167 CC module/bdev/nvme/vbdev_opal.o 00:03:41.167 CC module/bdev/zone_block/vbdev_zone_block.o 00:03:41.167 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:03:41.167 CC module/bdev/split/vbdev_split.o 00:03:41.167 CC module/bdev/passthru/vbdev_passthru.o 00:03:41.167 CC module/bdev/split/vbdev_split_rpc.o 00:03:41.167 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:03:41.167 CC module/bdev/aio/bdev_aio_rpc.o 00:03:41.167 CC module/bdev/lvol/vbdev_lvol.o 00:03:41.167 CC module/bdev/aio/bdev_aio.o 00:03:41.167 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:03:41.167 CC module/bdev/null/bdev_null.o 00:03:41.167 CC module/bdev/null/bdev_null_rpc.o 00:03:41.425 CC module/bdev/malloc/bdev_malloc.o 00:03:41.425 CC module/bdev/malloc/bdev_malloc_rpc.o 00:03:41.425 CC module/bdev/iscsi/bdev_iscsi.o 00:03:41.425 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:03:41.425 CC module/bdev/virtio/bdev_virtio_scsi.o 00:03:41.425 CC module/bdev/virtio/bdev_virtio_blk.o 00:03:41.425 CC module/bdev/virtio/bdev_virtio_rpc.o 00:03:41.425 LIB libspdk_blobfs_bdev.a 00:03:41.425 LIB libspdk_bdev_split.a 00:03:41.683 SO libspdk_blobfs_bdev.so.6.0 00:03:41.683 LIB libspdk_bdev_null.a 00:03:41.683 SO libspdk_bdev_split.so.6.0 00:03:41.683 LIB libspdk_bdev_gpt.a 00:03:41.683 LIB libspdk_bdev_error.a 00:03:41.683 LIB libspdk_bdev_ftl.a 00:03:41.683 SO libspdk_bdev_null.so.6.0 00:03:41.683 SO libspdk_bdev_error.so.6.0 00:03:41.683 LIB libspdk_bdev_aio.a 00:03:41.683 SO libspdk_bdev_gpt.so.6.0 00:03:41.683 LIB 
libspdk_bdev_zone_block.a 00:03:41.683 SYMLINK libspdk_blobfs_bdev.so 00:03:41.683 LIB libspdk_bdev_passthru.a 00:03:41.683 SYMLINK libspdk_bdev_split.so 00:03:41.683 SO libspdk_bdev_ftl.so.6.0 00:03:41.683 LIB libspdk_bdev_malloc.a 00:03:41.683 SO libspdk_bdev_zone_block.so.6.0 00:03:41.683 SO libspdk_bdev_aio.so.6.0 00:03:41.683 LIB libspdk_bdev_iscsi.a 00:03:41.683 SO libspdk_bdev_passthru.so.6.0 00:03:41.683 SYMLINK libspdk_bdev_null.so 00:03:41.683 SYMLINK libspdk_bdev_gpt.so 00:03:41.683 SYMLINK libspdk_bdev_error.so 00:03:41.683 LIB libspdk_bdev_delay.a 00:03:41.683 SO libspdk_bdev_malloc.so.6.0 00:03:41.683 SYMLINK libspdk_bdev_ftl.so 00:03:41.683 SO libspdk_bdev_iscsi.so.6.0 00:03:41.683 SYMLINK libspdk_bdev_zone_block.so 00:03:41.683 SO libspdk_bdev_delay.so.6.0 00:03:41.683 SYMLINK libspdk_bdev_passthru.so 00:03:41.683 SYMLINK libspdk_bdev_aio.so 00:03:41.683 SYMLINK libspdk_bdev_malloc.so 00:03:41.683 SYMLINK libspdk_bdev_iscsi.so 00:03:41.683 LIB libspdk_bdev_lvol.a 00:03:41.683 SYMLINK libspdk_bdev_delay.so 00:03:41.683 LIB libspdk_bdev_virtio.a 00:03:41.683 SO libspdk_bdev_lvol.so.6.0 00:03:41.941 SO libspdk_bdev_virtio.so.6.0 00:03:41.941 SYMLINK libspdk_bdev_lvol.so 00:03:41.941 SYMLINK libspdk_bdev_virtio.so 00:03:42.200 LIB libspdk_bdev_raid.a 00:03:42.200 SO libspdk_bdev_raid.so.6.0 00:03:42.200 SYMLINK libspdk_bdev_raid.so 00:03:42.765 LIB libspdk_bdev_nvme.a 00:03:43.024 SO libspdk_bdev_nvme.so.7.0 00:03:43.024 SYMLINK libspdk_bdev_nvme.so 00:03:43.589 CC module/event/subsystems/iobuf/iobuf.o 00:03:43.589 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:03:43.589 CC module/event/subsystems/sock/sock.o 00:03:43.589 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:03:43.589 CC module/event/subsystems/vmd/vmd.o 00:03:43.589 CC module/event/subsystems/vmd/vmd_rpc.o 00:03:43.589 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:03:43.589 CC module/event/subsystems/keyring/keyring.o 00:03:43.589 CC module/event/subsystems/scheduler/scheduler.o 00:03:43.589 LIB libspdk_event_sock.a 00:03:43.589 LIB libspdk_event_iobuf.a 00:03:43.848 LIB libspdk_event_vfu_tgt.a 00:03:43.848 LIB libspdk_event_keyring.a 00:03:43.848 SO libspdk_event_sock.so.5.0 00:03:43.848 LIB libspdk_event_vhost_blk.a 00:03:43.848 SO libspdk_event_keyring.so.1.0 00:03:43.848 LIB libspdk_event_vmd.a 00:03:43.848 SO libspdk_event_iobuf.so.3.0 00:03:43.848 LIB libspdk_event_scheduler.a 00:03:43.848 SO libspdk_event_vfu_tgt.so.3.0 00:03:43.848 SO libspdk_event_scheduler.so.4.0 00:03:43.848 SO libspdk_event_vhost_blk.so.3.0 00:03:43.848 SYMLINK libspdk_event_sock.so 00:03:43.848 SO libspdk_event_vmd.so.6.0 00:03:43.848 SYMLINK libspdk_event_keyring.so 00:03:43.848 SYMLINK libspdk_event_iobuf.so 00:03:43.848 SYMLINK libspdk_event_vfu_tgt.so 00:03:43.848 SYMLINK libspdk_event_vhost_blk.so 00:03:43.848 SYMLINK libspdk_event_scheduler.so 00:03:43.848 SYMLINK libspdk_event_vmd.so 00:03:44.106 CC module/event/subsystems/accel/accel.o 00:03:44.364 LIB libspdk_event_accel.a 00:03:44.364 SO libspdk_event_accel.so.6.0 00:03:44.364 SYMLINK libspdk_event_accel.so 00:03:44.621 CC module/event/subsystems/bdev/bdev.o 00:03:44.920 LIB libspdk_event_bdev.a 00:03:44.920 SO libspdk_event_bdev.so.6.0 00:03:44.920 SYMLINK libspdk_event_bdev.so 00:03:45.200 CC module/event/subsystems/nbd/nbd.o 00:03:45.200 CC module/event/subsystems/scsi/scsi.o 00:03:45.200 CC module/event/subsystems/ublk/ublk.o 00:03:45.200 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:03:45.200 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:03:45.200 LIB 
libspdk_event_nbd.a 00:03:45.459 LIB libspdk_event_scsi.a 00:03:45.459 SO libspdk_event_nbd.so.6.0 00:03:45.459 LIB libspdk_event_ublk.a 00:03:45.459 SO libspdk_event_scsi.so.6.0 00:03:45.459 SO libspdk_event_ublk.so.3.0 00:03:45.459 LIB libspdk_event_nvmf.a 00:03:45.459 SYMLINK libspdk_event_nbd.so 00:03:45.459 SO libspdk_event_nvmf.so.6.0 00:03:45.459 SYMLINK libspdk_event_scsi.so 00:03:45.459 SYMLINK libspdk_event_ublk.so 00:03:45.459 SYMLINK libspdk_event_nvmf.so 00:03:45.717 CC module/event/subsystems/iscsi/iscsi.o 00:03:45.717 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:03:45.975 LIB libspdk_event_iscsi.a 00:03:45.975 LIB libspdk_event_vhost_scsi.a 00:03:45.975 SO libspdk_event_iscsi.so.6.0 00:03:45.975 SO libspdk_event_vhost_scsi.so.3.0 00:03:45.975 SYMLINK libspdk_event_iscsi.so 00:03:45.975 SYMLINK libspdk_event_vhost_scsi.so 00:03:46.232 SO libspdk.so.6.0 00:03:46.232 SYMLINK libspdk.so 00:03:46.505 CC test/rpc_client/rpc_client_test.o 00:03:46.505 TEST_HEADER include/spdk/accel.h 00:03:46.505 TEST_HEADER include/spdk/accel_module.h 00:03:46.505 TEST_HEADER include/spdk/assert.h 00:03:46.505 TEST_HEADER include/spdk/barrier.h 00:03:46.505 TEST_HEADER include/spdk/bdev_module.h 00:03:46.505 TEST_HEADER include/spdk/base64.h 00:03:46.505 TEST_HEADER include/spdk/bdev.h 00:03:46.505 TEST_HEADER include/spdk/bit_pool.h 00:03:46.505 TEST_HEADER include/spdk/bdev_zone.h 00:03:46.506 TEST_HEADER include/spdk/blob_bdev.h 00:03:46.506 TEST_HEADER include/spdk/bit_array.h 00:03:46.506 TEST_HEADER include/spdk/blobfs_bdev.h 00:03:46.506 TEST_HEADER include/spdk/blob.h 00:03:46.506 TEST_HEADER include/spdk/blobfs.h 00:03:46.506 TEST_HEADER include/spdk/conf.h 00:03:46.506 TEST_HEADER include/spdk/config.h 00:03:46.506 TEST_HEADER include/spdk/cpuset.h 00:03:46.506 TEST_HEADER include/spdk/crc16.h 00:03:46.506 CXX app/trace/trace.o 00:03:46.506 TEST_HEADER include/spdk/crc32.h 00:03:46.506 TEST_HEADER include/spdk/crc64.h 00:03:46.506 CC app/trace_record/trace_record.o 00:03:46.506 TEST_HEADER include/spdk/dif.h 00:03:46.506 TEST_HEADER include/spdk/dma.h 00:03:46.506 CC app/spdk_nvme_discover/discovery_aer.o 00:03:46.506 TEST_HEADER include/spdk/endian.h 00:03:46.506 TEST_HEADER include/spdk/env_dpdk.h 00:03:46.506 TEST_HEADER include/spdk/env.h 00:03:46.506 CC app/spdk_lspci/spdk_lspci.o 00:03:46.506 TEST_HEADER include/spdk/event.h 00:03:46.506 TEST_HEADER include/spdk/fd_group.h 00:03:46.506 CC app/spdk_nvme_identify/identify.o 00:03:46.506 TEST_HEADER include/spdk/fd.h 00:03:46.506 TEST_HEADER include/spdk/file.h 00:03:46.506 TEST_HEADER include/spdk/hexlify.h 00:03:46.506 TEST_HEADER include/spdk/ftl.h 00:03:46.506 TEST_HEADER include/spdk/gpt_spec.h 00:03:46.506 TEST_HEADER include/spdk/histogram_data.h 00:03:46.506 TEST_HEADER include/spdk/idxd.h 00:03:46.506 CC app/spdk_nvme_perf/perf.o 00:03:46.506 TEST_HEADER include/spdk/idxd_spec.h 00:03:46.506 TEST_HEADER include/spdk/init.h 00:03:46.506 TEST_HEADER include/spdk/ioat.h 00:03:46.506 TEST_HEADER include/spdk/iscsi_spec.h 00:03:46.506 TEST_HEADER include/spdk/ioat_spec.h 00:03:46.506 TEST_HEADER include/spdk/json.h 00:03:46.506 CC app/spdk_top/spdk_top.o 00:03:46.506 TEST_HEADER include/spdk/jsonrpc.h 00:03:46.506 TEST_HEADER include/spdk/keyring.h 00:03:46.506 TEST_HEADER include/spdk/keyring_module.h 00:03:46.506 TEST_HEADER include/spdk/likely.h 00:03:46.506 TEST_HEADER include/spdk/log.h 00:03:46.506 TEST_HEADER include/spdk/lvol.h 00:03:46.506 TEST_HEADER include/spdk/memory.h 00:03:46.506 TEST_HEADER 
include/spdk/nbd.h 00:03:46.506 TEST_HEADER include/spdk/mmio.h 00:03:46.506 TEST_HEADER include/spdk/nvme_intel.h 00:03:46.506 TEST_HEADER include/spdk/nvme.h 00:03:46.506 TEST_HEADER include/spdk/notify.h 00:03:46.506 TEST_HEADER include/spdk/nvme_ocssd.h 00:03:46.506 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:03:46.506 TEST_HEADER include/spdk/nvme_spec.h 00:03:46.506 TEST_HEADER include/spdk/nvmf_cmd.h 00:03:46.506 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:03:46.506 TEST_HEADER include/spdk/nvmf.h 00:03:46.506 TEST_HEADER include/spdk/nvme_zns.h 00:03:46.506 TEST_HEADER include/spdk/nvmf_transport.h 00:03:46.506 TEST_HEADER include/spdk/nvmf_spec.h 00:03:46.506 TEST_HEADER include/spdk/opal_spec.h 00:03:46.506 TEST_HEADER include/spdk/opal.h 00:03:46.506 TEST_HEADER include/spdk/pci_ids.h 00:03:46.506 TEST_HEADER include/spdk/pipe.h 00:03:46.506 TEST_HEADER include/spdk/queue.h 00:03:46.506 TEST_HEADER include/spdk/reduce.h 00:03:46.506 TEST_HEADER include/spdk/rpc.h 00:03:46.506 CC app/nvmf_tgt/nvmf_main.o 00:03:46.506 TEST_HEADER include/spdk/scsi.h 00:03:46.506 TEST_HEADER include/spdk/scheduler.h 00:03:46.506 TEST_HEADER include/spdk/stdinc.h 00:03:46.506 TEST_HEADER include/spdk/scsi_spec.h 00:03:46.506 TEST_HEADER include/spdk/sock.h 00:03:46.506 CC examples/interrupt_tgt/interrupt_tgt.o 00:03:46.506 TEST_HEADER include/spdk/string.h 00:03:46.506 TEST_HEADER include/spdk/thread.h 00:03:46.506 TEST_HEADER include/spdk/trace_parser.h 00:03:46.506 TEST_HEADER include/spdk/trace.h 00:03:46.506 TEST_HEADER include/spdk/tree.h 00:03:46.506 TEST_HEADER include/spdk/ublk.h 00:03:46.506 TEST_HEADER include/spdk/util.h 00:03:46.506 TEST_HEADER include/spdk/uuid.h 00:03:46.506 TEST_HEADER include/spdk/version.h 00:03:46.506 TEST_HEADER include/spdk/vfio_user_pci.h 00:03:46.506 TEST_HEADER include/spdk/vfio_user_spec.h 00:03:46.506 TEST_HEADER include/spdk/vhost.h 00:03:46.506 TEST_HEADER include/spdk/vmd.h 00:03:46.506 TEST_HEADER include/spdk/zipf.h 00:03:46.506 TEST_HEADER include/spdk/xor.h 00:03:46.506 CXX test/cpp_headers/accel_module.o 00:03:46.506 CXX test/cpp_headers/assert.o 00:03:46.506 CXX test/cpp_headers/accel.o 00:03:46.506 CXX test/cpp_headers/barrier.o 00:03:46.506 CC app/iscsi_tgt/iscsi_tgt.o 00:03:46.506 CXX test/cpp_headers/base64.o 00:03:46.506 CXX test/cpp_headers/bdev.o 00:03:46.506 CC app/spdk_dd/spdk_dd.o 00:03:46.506 CXX test/cpp_headers/bdev_zone.o 00:03:46.506 CXX test/cpp_headers/bit_array.o 00:03:46.506 CXX test/cpp_headers/bit_pool.o 00:03:46.506 CXX test/cpp_headers/bdev_module.o 00:03:46.506 CXX test/cpp_headers/blobfs_bdev.o 00:03:46.506 CXX test/cpp_headers/blob_bdev.o 00:03:46.506 CXX test/cpp_headers/blobfs.o 00:03:46.506 CXX test/cpp_headers/blob.o 00:03:46.506 CXX test/cpp_headers/conf.o 00:03:46.506 CXX test/cpp_headers/config.o 00:03:46.506 CXX test/cpp_headers/crc16.o 00:03:46.506 CXX test/cpp_headers/crc32.o 00:03:46.506 CXX test/cpp_headers/crc64.o 00:03:46.506 CXX test/cpp_headers/dif.o 00:03:46.506 CXX test/cpp_headers/cpuset.o 00:03:46.506 CXX test/cpp_headers/dma.o 00:03:46.506 CXX test/cpp_headers/endian.o 00:03:46.506 CXX test/cpp_headers/env_dpdk.o 00:03:46.506 CXX test/cpp_headers/env.o 00:03:46.506 CXX test/cpp_headers/event.o 00:03:46.506 CXX test/cpp_headers/fd_group.o 00:03:46.506 CXX test/cpp_headers/file.o 00:03:46.506 CXX test/cpp_headers/fd.o 00:03:46.506 CXX test/cpp_headers/gpt_spec.o 00:03:46.506 CXX test/cpp_headers/ftl.o 00:03:46.506 CXX test/cpp_headers/idxd.o 00:03:46.506 CXX test/cpp_headers/hexlify.o 00:03:46.506 CC 
app/spdk_tgt/spdk_tgt.o 00:03:46.506 CXX test/cpp_headers/histogram_data.o 00:03:46.506 CXX test/cpp_headers/init.o 00:03:46.506 CXX test/cpp_headers/ioat_spec.o 00:03:46.506 CXX test/cpp_headers/ioat.o 00:03:46.506 CXX test/cpp_headers/json.o 00:03:46.506 CXX test/cpp_headers/iscsi_spec.o 00:03:46.506 CXX test/cpp_headers/idxd_spec.o 00:03:46.506 CXX test/cpp_headers/jsonrpc.o 00:03:46.506 CXX test/cpp_headers/keyring.o 00:03:46.506 CXX test/cpp_headers/likely.o 00:03:46.506 CXX test/cpp_headers/lvol.o 00:03:46.506 CXX test/cpp_headers/keyring_module.o 00:03:46.506 CXX test/cpp_headers/log.o 00:03:46.506 CXX test/cpp_headers/mmio.o 00:03:46.506 CXX test/cpp_headers/nbd.o 00:03:46.506 CXX test/cpp_headers/memory.o 00:03:46.506 CXX test/cpp_headers/notify.o 00:03:46.506 CXX test/cpp_headers/nvme.o 00:03:46.506 CXX test/cpp_headers/nvme_intel.o 00:03:46.506 CXX test/cpp_headers/nvme_ocssd.o 00:03:46.506 CXX test/cpp_headers/nvme_spec.o 00:03:46.506 CXX test/cpp_headers/nvme_zns.o 00:03:46.506 CXX test/cpp_headers/nvme_ocssd_spec.o 00:03:46.506 CXX test/cpp_headers/nvmf_fc_spec.o 00:03:46.506 CXX test/cpp_headers/nvmf_cmd.o 00:03:46.506 CXX test/cpp_headers/nvmf.o 00:03:46.506 CXX test/cpp_headers/nvmf_spec.o 00:03:46.506 CXX test/cpp_headers/nvmf_transport.o 00:03:46.506 CXX test/cpp_headers/pci_ids.o 00:03:46.506 CXX test/cpp_headers/opal_spec.o 00:03:46.506 CXX test/cpp_headers/opal.o 00:03:46.506 CXX test/cpp_headers/pipe.o 00:03:46.506 CXX test/cpp_headers/queue.o 00:03:46.506 CC test/app/histogram_perf/histogram_perf.o 00:03:46.506 CXX test/cpp_headers/reduce.o 00:03:46.506 CC test/env/memory/memory_ut.o 00:03:46.506 CC test/thread/poller_perf/poller_perf.o 00:03:46.506 CC test/app/jsoncat/jsoncat.o 00:03:46.506 CXX test/cpp_headers/rpc.o 00:03:46.506 CC test/env/vtophys/vtophys.o 00:03:46.785 CC test/env/pci/pci_ut.o 00:03:46.785 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:03:46.785 CC test/app/stub/stub.o 00:03:46.785 CC test/app/bdev_svc/bdev_svc.o 00:03:46.785 CC examples/ioat/verify/verify.o 00:03:46.785 CC test/dma/test_dma/test_dma.o 00:03:46.785 CC examples/ioat/perf/perf.o 00:03:46.785 CC examples/util/zipf/zipf.o 00:03:46.785 CXX test/cpp_headers/scheduler.o 00:03:46.785 CC app/fio/nvme/fio_plugin.o 00:03:46.785 CC app/fio/bdev/fio_plugin.o 00:03:46.785 LINK spdk_lspci 00:03:47.049 LINK spdk_nvme_discover 00:03:47.049 LINK nvmf_tgt 00:03:47.049 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:03:47.049 CC test/env/mem_callbacks/mem_callbacks.o 00:03:47.049 LINK jsoncat 00:03:47.049 LINK rpc_client_test 00:03:47.049 LINK vtophys 00:03:47.049 LINK env_dpdk_post_init 00:03:47.049 CXX test/cpp_headers/scsi.o 00:03:47.049 CXX test/cpp_headers/scsi_spec.o 00:03:47.049 CXX test/cpp_headers/sock.o 00:03:47.049 CXX test/cpp_headers/stdinc.o 00:03:47.049 CXX test/cpp_headers/string.o 00:03:47.049 CXX test/cpp_headers/thread.o 00:03:47.049 CXX test/cpp_headers/trace.o 00:03:47.049 CXX test/cpp_headers/trace_parser.o 00:03:47.049 CXX test/cpp_headers/tree.o 00:03:47.049 CXX test/cpp_headers/ublk.o 00:03:47.049 CXX test/cpp_headers/util.o 00:03:47.049 CXX test/cpp_headers/uuid.o 00:03:47.049 CXX test/cpp_headers/version.o 00:03:47.049 CXX test/cpp_headers/vfio_user_pci.o 00:03:47.049 CXX test/cpp_headers/vfio_user_spec.o 00:03:47.049 CXX test/cpp_headers/vhost.o 00:03:47.049 CXX test/cpp_headers/vmd.o 00:03:47.049 CXX test/cpp_headers/xor.o 00:03:47.049 CXX test/cpp_headers/zipf.o 00:03:47.306 LINK histogram_perf 00:03:47.306 LINK stub 00:03:47.306 LINK interrupt_tgt 00:03:47.306 LINK 
iscsi_tgt 00:03:47.306 LINK poller_perf 00:03:47.306 LINK spdk_tgt 00:03:47.306 LINK spdk_trace_record 00:03:47.306 LINK zipf 00:03:47.306 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:03:47.306 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:03:47.306 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:03:47.306 LINK bdev_svc 00:03:47.306 LINK ioat_perf 00:03:47.306 LINK spdk_dd 00:03:47.306 LINK verify 00:03:47.306 LINK test_dma 00:03:47.306 LINK pci_ut 00:03:47.306 LINK spdk_trace 00:03:47.564 LINK spdk_bdev 00:03:47.564 LINK nvme_fuzz 00:03:47.564 LINK spdk_nvme 00:03:47.564 CC test/event/event_perf/event_perf.o 00:03:47.564 CC test/event/reactor/reactor.o 00:03:47.564 LINK vhost_fuzz 00:03:47.564 CC test/event/reactor_perf/reactor_perf.o 00:03:47.564 LINK spdk_nvme_perf 00:03:47.821 CC test/event/scheduler/scheduler.o 00:03:47.822 LINK spdk_nvme_identify 00:03:47.822 CC examples/vmd/lsvmd/lsvmd.o 00:03:47.822 CC test/event/app_repeat/app_repeat.o 00:03:47.822 CC examples/idxd/perf/perf.o 00:03:47.822 CC examples/vmd/led/led.o 00:03:47.822 CC examples/sock/hello_world/hello_sock.o 00:03:47.822 CC app/vhost/vhost.o 00:03:47.822 CC examples/thread/thread/thread_ex.o 00:03:47.822 LINK mem_callbacks 00:03:47.822 LINK spdk_top 00:03:47.822 CC test/nvme/sgl/sgl.o 00:03:47.822 CC test/nvme/reset/reset.o 00:03:47.822 CC test/nvme/fused_ordering/fused_ordering.o 00:03:47.822 CC test/nvme/e2edp/nvme_dp.o 00:03:47.822 CC test/nvme/overhead/overhead.o 00:03:47.822 LINK reactor_perf 00:03:47.822 CC test/nvme/startup/startup.o 00:03:47.822 CC test/nvme/fdp/fdp.o 00:03:47.822 CC test/nvme/reserve/reserve.o 00:03:47.822 CC test/nvme/aer/aer.o 00:03:47.822 CC test/nvme/doorbell_aers/doorbell_aers.o 00:03:47.822 CC test/nvme/simple_copy/simple_copy.o 00:03:47.822 CC test/nvme/compliance/nvme_compliance.o 00:03:47.822 LINK reactor 00:03:47.822 CC test/nvme/err_injection/err_injection.o 00:03:47.822 CC test/nvme/boot_partition/boot_partition.o 00:03:47.822 CC test/nvme/connect_stress/connect_stress.o 00:03:47.822 CC test/nvme/cuse/cuse.o 00:03:47.822 LINK event_perf 00:03:47.822 CC test/accel/dif/dif.o 00:03:47.822 LINK led 00:03:47.822 LINK lsvmd 00:03:47.822 CC test/blobfs/mkfs/mkfs.o 00:03:47.822 LINK app_repeat 00:03:48.079 CC test/lvol/esnap/esnap.o 00:03:48.079 LINK scheduler 00:03:48.079 LINK hello_sock 00:03:48.079 LINK vhost 00:03:48.079 LINK thread 00:03:48.079 LINK startup 00:03:48.079 LINK boot_partition 00:03:48.079 LINK reserve 00:03:48.079 LINK connect_stress 00:03:48.079 LINK doorbell_aers 00:03:48.079 LINK idxd_perf 00:03:48.079 LINK err_injection 00:03:48.079 LINK memory_ut 00:03:48.079 LINK fused_ordering 00:03:48.079 LINK simple_copy 00:03:48.079 LINK nvme_dp 00:03:48.079 LINK reset 00:03:48.079 LINK mkfs 00:03:48.079 LINK sgl 00:03:48.079 LINK aer 00:03:48.079 LINK overhead 00:03:48.079 LINK nvme_compliance 00:03:48.079 LINK fdp 00:03:48.337 LINK dif 00:03:48.337 CC examples/nvme/reconnect/reconnect.o 00:03:48.337 CC examples/nvme/abort/abort.o 00:03:48.337 CC examples/nvme/hello_world/hello_world.o 00:03:48.337 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:03:48.337 CC examples/nvme/nvme_manage/nvme_manage.o 00:03:48.337 CC examples/nvme/cmb_copy/cmb_copy.o 00:03:48.337 CC examples/nvme/hotplug/hotplug.o 00:03:48.337 CC examples/nvme/arbitration/arbitration.o 00:03:48.620 CC examples/accel/perf/accel_perf.o 00:03:48.620 CC examples/blob/hello_world/hello_blob.o 00:03:48.620 CC examples/blob/cli/blobcli.o 00:03:48.620 LINK pmr_persistence 00:03:48.620 LINK cmb_copy 00:03:48.620 LINK 
hello_world 00:03:48.620 LINK hotplug 00:03:48.620 LINK iscsi_fuzz 00:03:48.620 LINK reconnect 00:03:48.620 LINK abort 00:03:48.620 LINK arbitration 00:03:48.879 CC test/bdev/bdevio/bdevio.o 00:03:48.879 LINK hello_blob 00:03:48.879 LINK nvme_manage 00:03:48.879 LINK accel_perf 00:03:48.879 LINK cuse 00:03:48.879 LINK blobcli 00:03:49.136 LINK bdevio 00:03:49.393 CC examples/bdev/bdevperf/bdevperf.o 00:03:49.393 CC examples/bdev/hello_world/hello_bdev.o 00:03:49.651 LINK hello_bdev 00:03:49.909 LINK bdevperf 00:03:50.473 CC examples/nvmf/nvmf/nvmf.o 00:03:50.473 LINK nvmf 00:03:51.405 LINK esnap 00:03:51.663 00:03:51.663 real 0m43.231s 00:03:51.663 user 6m30.963s 00:03:51.663 sys 3m24.698s 00:03:51.663 15:45:20 make -- common/autotest_common.sh@1124 -- $ xtrace_disable 00:03:51.663 15:45:20 make -- common/autotest_common.sh@10 -- $ set +x 00:03:51.663 ************************************ 00:03:51.663 END TEST make 00:03:51.663 ************************************ 00:03:51.663 15:45:20 -- common/autotest_common.sh@1142 -- $ return 0 00:03:51.663 15:45:20 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:03:51.663 15:45:20 -- pm/common@29 -- $ signal_monitor_resources TERM 00:03:51.663 15:45:20 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:03:51.663 15:45:20 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:51.663 15:45:20 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:03:51.663 15:45:20 -- pm/common@44 -- $ pid=3476189 00:03:51.663 15:45:20 -- pm/common@50 -- $ kill -TERM 3476189 00:03:51.663 15:45:20 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:51.663 15:45:20 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:03:51.663 15:45:20 -- pm/common@44 -- $ pid=3476191 00:03:51.663 15:45:20 -- pm/common@50 -- $ kill -TERM 3476191 00:03:51.663 15:45:20 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:51.663 15:45:20 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:03:51.663 15:45:20 -- pm/common@44 -- $ pid=3476193 00:03:51.663 15:45:20 -- pm/common@50 -- $ kill -TERM 3476193 00:03:51.663 15:45:20 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:51.663 15:45:20 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:03:51.663 15:45:20 -- pm/common@44 -- $ pid=3476216 00:03:51.663 15:45:20 -- pm/common@50 -- $ sudo -E kill -TERM 3476216 00:03:51.921 15:45:20 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:03:51.921 15:45:20 -- nvmf/common.sh@7 -- # uname -s 00:03:51.921 15:45:20 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:51.921 15:45:20 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:51.921 15:45:20 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:51.921 15:45:20 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:51.921 15:45:20 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:51.921 15:45:20 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:51.921 15:45:20 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:51.921 15:45:20 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:51.921 15:45:20 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:51.921 15:45:20 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:51.921 15:45:20 -- 
nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:03:51.921 15:45:20 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:03:51.921 15:45:20 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:51.921 15:45:20 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:51.921 15:45:20 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:03:51.921 15:45:20 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:03:51.921 15:45:20 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:03:51.921 15:45:20 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:51.921 15:45:20 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:51.921 15:45:20 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:51.921 15:45:20 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:51.921 15:45:20 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:51.921 15:45:20 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:51.921 15:45:20 -- paths/export.sh@5 -- # export PATH 00:03:51.921 15:45:20 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:51.921 15:45:20 -- nvmf/common.sh@47 -- # : 0 00:03:51.921 15:45:20 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:03:51.921 15:45:20 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:03:51.921 15:45:20 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:03:51.921 15:45:20 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:51.921 15:45:20 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:51.921 15:45:20 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:03:51.921 15:45:20 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:03:51.921 15:45:20 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:03:51.921 15:45:20 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:03:51.921 15:45:20 -- spdk/autotest.sh@32 -- # uname -s 00:03:51.921 15:45:20 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:03:51.921 15:45:20 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:03:51.921 15:45:20 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:03:51.921 15:45:20 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:03:51.921 15:45:20 -- spdk/autotest.sh@40 -- # echo 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:03:51.921 15:45:20 -- spdk/autotest.sh@44 -- # modprobe nbd 00:03:51.921 15:45:20 -- spdk/autotest.sh@46 -- # type -P udevadm 00:03:51.921 15:45:20 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:03:51.921 15:45:20 -- spdk/autotest.sh@48 -- # udevadm_pid=3535479 00:03:51.921 15:45:20 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:03:51.921 15:45:20 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:03:51.921 15:45:20 -- pm/common@17 -- # local monitor 00:03:51.921 15:45:20 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:51.921 15:45:20 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:51.921 15:45:20 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:51.921 15:45:20 -- pm/common@21 -- # date +%s 00:03:51.921 15:45:20 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:51.921 15:45:20 -- pm/common@25 -- # sleep 1 00:03:51.921 15:45:20 -- pm/common@21 -- # date +%s 00:03:51.921 15:45:20 -- pm/common@21 -- # date +%s 00:03:51.921 15:45:20 -- pm/common@21 -- # date +%s 00:03:51.921 15:45:20 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721051120 00:03:51.921 15:45:20 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721051120 00:03:51.921 15:45:20 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721051120 00:03:51.921 15:45:20 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721051120 00:03:51.921 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721051120_collect-vmstat.pm.log 00:03:51.922 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721051120_collect-cpu-temp.pm.log 00:03:51.922 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721051120_collect-cpu-load.pm.log 00:03:51.922 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721051120_collect-bmc-pm.bmc.pm.log 00:03:52.855 15:45:21 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:03:52.855 15:45:21 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:03:52.855 15:45:21 -- common/autotest_common.sh@722 -- # xtrace_disable 00:03:52.855 15:45:21 -- common/autotest_common.sh@10 -- # set +x 00:03:52.855 15:45:21 -- spdk/autotest.sh@59 -- # create_test_list 00:03:52.855 15:45:21 -- common/autotest_common.sh@746 -- # xtrace_disable 00:03:52.855 15:45:21 -- common/autotest_common.sh@10 -- # set +x 00:03:52.855 15:45:21 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:03:52.855 15:45:21 -- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:52.855 15:45:21 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 
00:03:52.855 15:45:21 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:03:52.855 15:45:21 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:52.855 15:45:21 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:03:52.855 15:45:21 -- common/autotest_common.sh@1455 -- # uname 00:03:52.855 15:45:21 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:03:52.855 15:45:21 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:03:52.855 15:45:21 -- common/autotest_common.sh@1475 -- # uname 00:03:52.855 15:45:21 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:03:52.855 15:45:21 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:03:52.855 15:45:21 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=gcc 00:03:52.855 15:45:21 -- spdk/autotest.sh@72 -- # hash lcov 00:03:52.855 15:45:21 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:03:52.856 15:45:21 -- spdk/autotest.sh@80 -- # export 'LCOV_OPTS= 00:03:52.856 --rc lcov_branch_coverage=1 00:03:52.856 --rc lcov_function_coverage=1 00:03:52.856 --rc genhtml_branch_coverage=1 00:03:52.856 --rc genhtml_function_coverage=1 00:03:52.856 --rc genhtml_legend=1 00:03:52.856 --rc geninfo_all_blocks=1 00:03:52.856 ' 00:03:52.856 15:45:21 -- spdk/autotest.sh@80 -- # LCOV_OPTS=' 00:03:52.856 --rc lcov_branch_coverage=1 00:03:52.856 --rc lcov_function_coverage=1 00:03:52.856 --rc genhtml_branch_coverage=1 00:03:52.856 --rc genhtml_function_coverage=1 00:03:52.856 --rc genhtml_legend=1 00:03:52.856 --rc geninfo_all_blocks=1 00:03:52.856 ' 00:03:52.856 15:45:21 -- spdk/autotest.sh@81 -- # export 'LCOV=lcov 00:03:52.856 --rc lcov_branch_coverage=1 00:03:52.856 --rc lcov_function_coverage=1 00:03:52.856 --rc genhtml_branch_coverage=1 00:03:52.856 --rc genhtml_function_coverage=1 00:03:52.856 --rc genhtml_legend=1 00:03:52.856 --rc geninfo_all_blocks=1 00:03:52.856 --no-external' 00:03:52.856 15:45:21 -- spdk/autotest.sh@81 -- # LCOV='lcov 00:03:52.856 --rc lcov_branch_coverage=1 00:03:52.856 --rc lcov_function_coverage=1 00:03:52.856 --rc genhtml_branch_coverage=1 00:03:52.856 --rc genhtml_function_coverage=1 00:03:52.856 --rc genhtml_legend=1 00:03:52.856 --rc geninfo_all_blocks=1 00:03:52.856 --no-external' 00:03:52.856 15:45:21 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:03:53.114 lcov: LCOV version 1.14 00:03:53.114 15:45:21 -- spdk/autotest.sh@85 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:04:05.307 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:04:05.307 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:04:15.305 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno:no functions found 00:04:15.305 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno 00:04:15.305 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno:no functions found 00:04:15.305 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno 00:04:15.305 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev.gcno:no functions found 00:04:15.305 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev.gcno 00:04:15.305 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno:no functions found 00:04:15.305 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno 00:04:15.305 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:04:15.305 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_array.gcno 00:04:15.305 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:04:15.305 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno 00:04:15.305 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:04:15.305 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno 00:04:15.305 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno:no functions found 00:04:15.305 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno 00:04:15.305 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/conf.gcno:no functions found 00:04:15.305 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/conf.gcno 00:04:15.305 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:04:15.305 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno 00:04:15.305 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:04:15.305 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno 00:04:15.305 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc64.gcno:no functions found 00:04:15.305 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc64.gcno 00:04:15.305 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dma.gcno:no functions found 00:04:15.305 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dma.gcno 00:04:15.305 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:04:15.305 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno 00:04:15.305 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env.gcno:no functions found 00:04:15.305 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env.gcno 00:04:15.305 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob.gcno:no functions found 00:04:15.305 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob.gcno 00:04:15.305 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:04:15.305 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs.gcno 00:04:15.305 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:04:15.305 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno 00:04:15.305 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd_group.gcno:no functions found 00:04:15.305 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd_group.gcno 00:04:15.305 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dif.gcno:no functions found 00:04:15.305 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dif.gcno 00:04:15.305 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/config.gcno:no functions found 00:04:15.305 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/config.gcno 00:04:15.305 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/endian.gcno:no functions found 00:04:15.305 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/endian.gcno 00:04:15.305 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc16.gcno:no functions found 00:04:15.305 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc16.gcno 00:04:15.305 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc32.gcno:no functions found 00:04:15.305 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc32.gcno 00:04:15.305 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:04:15.305 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno 00:04:15.305 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:04:15.305 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno 00:04:15.305 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:04:15.305 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/cpuset.gcno 00:04:15.305 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ftl.gcno:no functions found 00:04:15.305 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ftl.gcno 00:04:15.305 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd.gcno:no functions found 00:04:15.305 geninfo: WARNING: GCOV did not produce any 
data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd.gcno 00:04:15.305 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno:no functions found 00:04:15.305 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno 00:04:15.305 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/hexlify.gcno:no functions found 00:04:15.305 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/hexlify.gcno 00:04:15.305 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/file.gcno:no functions found 00:04:15.305 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/file.gcno 00:04:15.305 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/event.gcno:no functions found 00:04:15.305 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/event.gcno 00:04:15.305 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found 00:04:15.305 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno 00:04:15.305 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno:no functions found 00:04:15.306 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno 00:04:15.306 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno:no functions found 00:04:15.306 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno 00:04:15.306 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/init.gcno:no functions found 00:04:15.306 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/init.gcno 00:04:15.306 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring.gcno:no functions found 00:04:15.306 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring.gcno 00:04:15.306 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno:no functions found 00:04:15.306 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno 00:04:15.306 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/json.gcno:no functions found 00:04:15.306 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/json.gcno 00:04:15.306 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd.gcno:no functions found 00:04:15.306 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd.gcno 00:04:15.306 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/mmio.gcno:no functions found 00:04:15.306 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/mmio.gcno 00:04:15.306 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nbd.gcno:no functions found 00:04:15.306 geninfo: 
WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nbd.gcno 00:04:15.306 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/memory.gcno:no functions found 00:04:15.306 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/memory.gcno 00:04:15.306 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat.gcno:no functions found 00:04:15.306 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat.gcno 00:04:15.306 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/lvol.gcno:no functions found 00:04:15.306 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/lvol.gcno 00:04:15.306 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/likely.gcno:no functions found 00:04:15.306 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/likely.gcno 00:04:15.306 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring_module.gcno:no functions found 00:04:15.306 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring_module.gcno 00:04:15.306 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/log.gcno:no functions found 00:04:15.306 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/log.gcno 00:04:15.306 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/notify.gcno:no functions found 00:04:15.306 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/notify.gcno 00:04:15.306 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme.gcno:no functions found 00:04:15.306 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme.gcno 00:04:15.306 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found 00:04:15.306 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno 00:04:15.306 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno:no functions found 00:04:15.306 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno 00:04:15.306 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno:no functions found 00:04:15.306 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno 00:04:15.306 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found 00:04:15.306 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno 00:04:15.306 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found 00:04:15.306 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno 00:04:15.306 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno:no functions found 00:04:15.306 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno 00:04:15.306 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found 00:04:15.306 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno 00:04:15.306 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf.gcno:no functions found 00:04:15.306 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf.gcno 00:04:15.306 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found 00:04:15.306 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno 00:04:15.306 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found 00:04:15.306 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno 00:04:15.306 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno:no functions found 00:04:15.306 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno 00:04:15.306 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno:no functions found 00:04:15.306 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno 00:04:15.306 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal.gcno:no functions found 00:04:15.306 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal.gcno 00:04:15.306 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pipe.gcno:no functions found 00:04:15.306 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pipe.gcno 00:04:15.306 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/reduce.gcno:no functions found 00:04:15.306 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/reduce.gcno 00:04:15.306 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/queue.gcno:no functions found 00:04:15.306 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/queue.gcno 00:04:15.306 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/rpc.gcno:no functions found 00:04:15.306 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/rpc.gcno 00:04:15.306 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scheduler.gcno:no functions found 00:04:15.306 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scheduler.gcno 00:04:15.306 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi.gcno:no functions found 00:04:15.306 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi.gcno 00:04:15.306 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno:no functions found 00:04:15.306 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno 00:04:15.306 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/stdinc.gcno:no functions found 00:04:15.306 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/stdinc.gcno 00:04:15.306 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/sock.gcno:no functions found 00:04:15.306 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/sock.gcno 00:04:15.306 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/string.gcno:no functions found 00:04:15.306 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/string.gcno 00:04:15.306 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/thread.gcno:no functions found 00:04:15.306 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/thread.gcno 00:04:15.306 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace.gcno:no functions found 00:04:15.306 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace.gcno 00:04:15.306 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:04:15.306 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno 00:04:15.306 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/tree.gcno:no functions found 00:04:15.306 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/tree.gcno 00:04:15.306 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ublk.gcno:no functions found 00:04:15.306 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ublk.gcno 00:04:15.306 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/util.gcno:no functions found 00:04:15.306 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/util.gcno 00:04:15.306 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/uuid.gcno:no functions found 00:04:15.306 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/uuid.gcno 00:04:15.306 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/version.gcno:no functions found 00:04:15.306 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/version.gcno 00:04:15.306 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:04:15.306 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno 00:04:15.306 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:04:15.306 geninfo: WARNING: 
GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno 00:04:15.306 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno:no functions found 00:04:15.306 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno 00:04:15.306 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vhost.gcno:no functions found 00:04:15.306 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vhost.gcno 00:04:15.306 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno:no functions found 00:04:15.306 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno 00:04:15.306 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno:no functions found 00:04:15.306 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno 00:04:17.210 15:45:46 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup 00:04:17.210 15:45:46 -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:17.210 15:45:46 -- common/autotest_common.sh@10 -- # set +x 00:04:17.210 15:45:46 -- spdk/autotest.sh@91 -- # rm -f 00:04:17.210 15:45:46 -- spdk/autotest.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:19.745 0000:5e:00.0 (8086 0a54): Already using the nvme driver 00:04:19.745 0000:00:04.7 (8086 2021): Already using the ioatdma driver 00:04:19.745 0000:00:04.6 (8086 2021): Already using the ioatdma driver 00:04:19.745 0000:00:04.5 (8086 2021): Already using the ioatdma driver 00:04:19.745 0000:00:04.4 (8086 2021): Already using the ioatdma driver 00:04:19.745 0000:00:04.3 (8086 2021): Already using the ioatdma driver 00:04:19.745 0000:00:04.2 (8086 2021): Already using the ioatdma driver 00:04:19.745 0000:00:04.1 (8086 2021): Already using the ioatdma driver 00:04:20.003 0000:00:04.0 (8086 2021): Already using the ioatdma driver 00:04:20.003 0000:80:04.7 (8086 2021): Already using the ioatdma driver 00:04:20.003 0000:80:04.6 (8086 2021): Already using the ioatdma driver 00:04:20.003 0000:80:04.5 (8086 2021): Already using the ioatdma driver 00:04:20.003 0000:80:04.4 (8086 2021): Already using the ioatdma driver 00:04:20.003 0000:80:04.3 (8086 2021): Already using the ioatdma driver 00:04:20.003 0000:80:04.2 (8086 2021): Already using the ioatdma driver 00:04:20.003 0000:80:04.1 (8086 2021): Already using the ioatdma driver 00:04:20.003 0000:80:04.0 (8086 2021): Already using the ioatdma driver 00:04:20.003 15:45:48 -- spdk/autotest.sh@96 -- # get_zoned_devs 00:04:20.003 15:45:48 -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:04:20.003 15:45:48 -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:04:20.003 15:45:48 -- common/autotest_common.sh@1670 -- # local nvme bdf 00:04:20.003 15:45:48 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:20.003 15:45:48 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:04:20.003 15:45:48 -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:04:20.003 15:45:48 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:20.003 15:45:48 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:20.003 15:45:48 -- spdk/autotest.sh@98 -- # (( 0 > 0 )) 00:04:20.003 
15:45:48 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:04:20.003 15:45:48 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:04:20.003 15:45:48 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1 00:04:20.003 15:45:48 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt 00:04:20.003 15:45:48 -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:04:20.261 No valid GPT data, bailing 00:04:20.261 15:45:48 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:20.261 15:45:48 -- scripts/common.sh@391 -- # pt= 00:04:20.261 15:45:48 -- scripts/common.sh@392 -- # return 1 00:04:20.261 15:45:48 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:04:20.261 1+0 records in 00:04:20.261 1+0 records out 00:04:20.261 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00416672 s, 252 MB/s 00:04:20.261 15:45:48 -- spdk/autotest.sh@118 -- # sync 00:04:20.261 15:45:49 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes 00:04:20.261 15:45:49 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:04:20.261 15:45:49 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:04:24.444 15:45:53 -- spdk/autotest.sh@124 -- # uname -s 00:04:24.444 15:45:53 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']' 00:04:24.444 15:45:53 -- spdk/autotest.sh@125 -- # run_test setup.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:04:24.444 15:45:53 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:24.444 15:45:53 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:24.444 15:45:53 -- common/autotest_common.sh@10 -- # set +x 00:04:24.444 ************************************ 00:04:24.444 START TEST setup.sh 00:04:24.444 ************************************ 00:04:24.444 15:45:53 setup.sh -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:04:24.444 * Looking for test storage... 00:04:24.444 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:04:24.444 15:45:53 setup.sh -- setup/test-setup.sh@10 -- # uname -s 00:04:24.444 15:45:53 setup.sh -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:04:24.444 15:45:53 setup.sh -- setup/test-setup.sh@12 -- # run_test acl /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:04:24.444 15:45:53 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:24.444 15:45:53 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:24.444 15:45:53 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:24.444 ************************************ 00:04:24.444 START TEST acl 00:04:24.444 ************************************ 00:04:24.444 15:45:53 setup.sh.acl -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:04:24.702 * Looking for test storage... 
00:04:24.702 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:04:24.702 15:45:53 setup.sh.acl -- setup/acl.sh@10 -- # get_zoned_devs 00:04:24.702 15:45:53 setup.sh.acl -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:04:24.702 15:45:53 setup.sh.acl -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:04:24.702 15:45:53 setup.sh.acl -- common/autotest_common.sh@1670 -- # local nvme bdf 00:04:24.702 15:45:53 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:24.702 15:45:53 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:04:24.702 15:45:53 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:04:24.702 15:45:53 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:24.702 15:45:53 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:24.702 15:45:53 setup.sh.acl -- setup/acl.sh@12 -- # devs=() 00:04:24.702 15:45:53 setup.sh.acl -- setup/acl.sh@12 -- # declare -a devs 00:04:24.702 15:45:53 setup.sh.acl -- setup/acl.sh@13 -- # drivers=() 00:04:24.702 15:45:53 setup.sh.acl -- setup/acl.sh@13 -- # declare -A drivers 00:04:24.702 15:45:53 setup.sh.acl -- setup/acl.sh@51 -- # setup reset 00:04:24.702 15:45:53 setup.sh.acl -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:24.702 15:45:53 setup.sh.acl -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:27.988 15:45:56 setup.sh.acl -- setup/acl.sh@52 -- # collect_setup_devs 00:04:27.988 15:45:56 setup.sh.acl -- setup/acl.sh@16 -- # local dev driver 00:04:27.988 15:45:56 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:27.988 15:45:56 setup.sh.acl -- setup/acl.sh@15 -- # setup output status 00:04:27.988 15:45:56 setup.sh.acl -- setup/common.sh@9 -- # [[ output == output ]] 00:04:27.988 15:45:56 setup.sh.acl -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:04:29.893 Hugepages 00:04:29.893 node hugesize free / total 00:04:29.893 15:45:58 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:04:29.893 15:45:58 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:04:29.893 15:45:58 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:29.893 15:45:58 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:04:29.893 15:45:58 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:04:29.893 15:45:58 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:29.893 15:45:58 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:04:29.893 15:45:58 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:04:29.893 15:45:58 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:29.893 00:04:29.893 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:29.893 15:45:58 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:04:29.893 15:45:58 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:04:29.893 15:45:58 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:29.893 15:45:58 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.0 == *:*:*.* ]] 00:04:29.893 15:45:58 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:29.893 15:45:58 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:29.893 15:45:58 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:29.893 15:45:58 setup.sh.acl -- setup/acl.sh@19 
-- # [[ 0000:00:04.1 == *:*:*.* ]] 00:04:29.893 15:45:58 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:29.893 15:45:58 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:29.893 15:45:58 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:29.893 15:45:58 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.2 == *:*:*.* ]] 00:04:29.893 15:45:58 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:29.893 15:45:58 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:29.893 15:45:58 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:29.893 15:45:58 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.3 == *:*:*.* ]] 00:04:29.893 15:45:58 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:29.893 15:45:58 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:29.893 15:45:58 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:29.893 15:45:58 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.4 == *:*:*.* ]] 00:04:29.893 15:45:58 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:29.893 15:45:58 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:29.893 15:45:58 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:29.893 15:45:58 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.5 == *:*:*.* ]] 00:04:29.893 15:45:58 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:29.893 15:45:58 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:29.893 15:45:58 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:29.893 15:45:58 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.6 == *:*:*.* ]] 00:04:29.893 15:45:58 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:29.893 15:45:58 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:29.893 15:45:58 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:29.893 15:45:58 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.7 == *:*:*.* ]] 00:04:29.893 15:45:58 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:29.893 15:45:58 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:29.893 15:45:58 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:30.151 15:45:58 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:5e:00.0 == *:*:*.* ]] 00:04:30.151 15:45:58 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:04:30.151 15:45:58 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\5\e\:\0\0\.\0* ]] 00:04:30.151 15:45:58 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:04:30.151 15:45:58 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:04:30.151 15:45:58 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:30.151 15:45:58 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.0 == *:*:*.* ]] 00:04:30.151 15:45:58 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:30.151 15:45:58 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:30.151 15:45:58 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:30.151 15:45:58 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.1 == *:*:*.* ]] 00:04:30.151 15:45:58 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:30.151 15:45:58 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:30.151 15:45:58 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:30.151 15:45:58 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.2 == *:*:*.* ]] 00:04:30.151 15:45:58 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme 
]] 00:04:30.151 15:45:58 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:30.151 15:45:58 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:30.151 15:45:58 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.3 == *:*:*.* ]] 00:04:30.151 15:45:58 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:30.151 15:45:58 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:30.151 15:45:58 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:30.151 15:45:58 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.4 == *:*:*.* ]] 00:04:30.151 15:45:58 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:30.151 15:45:58 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:30.151 15:45:58 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:30.151 15:45:58 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.5 == *:*:*.* ]] 00:04:30.151 15:45:58 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:30.151 15:45:58 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:30.151 15:45:58 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:30.151 15:45:58 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.6 == *:*:*.* ]] 00:04:30.151 15:45:58 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:30.151 15:45:58 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:30.151 15:45:58 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:30.151 15:45:58 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.7 == *:*:*.* ]] 00:04:30.151 15:45:58 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:30.151 15:45:58 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:30.151 15:45:58 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:30.151 15:45:58 setup.sh.acl -- setup/acl.sh@24 -- # (( 1 > 0 )) 00:04:30.151 15:45:58 setup.sh.acl -- setup/acl.sh@54 -- # run_test denied denied 00:04:30.151 15:45:58 setup.sh.acl -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:30.151 15:45:58 setup.sh.acl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:30.151 15:45:58 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:04:30.151 ************************************ 00:04:30.151 START TEST denied 00:04:30.151 ************************************ 00:04:30.151 15:45:58 setup.sh.acl.denied -- common/autotest_common.sh@1123 -- # denied 00:04:30.151 15:45:58 setup.sh.acl.denied -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:5e:00.0' 00:04:30.151 15:45:58 setup.sh.acl.denied -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:5e:00.0' 00:04:30.151 15:45:58 setup.sh.acl.denied -- setup/acl.sh@38 -- # setup output config 00:04:30.151 15:45:58 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ output == output ]] 00:04:30.152 15:45:58 setup.sh.acl.denied -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:33.430 0000:5e:00.0 (8086 0a54): Skipping denied controller at 0000:5e:00.0 00:04:33.430 15:46:01 setup.sh.acl.denied -- setup/acl.sh@40 -- # verify 0000:5e:00.0 00:04:33.430 15:46:01 setup.sh.acl.denied -- setup/acl.sh@28 -- # local dev driver 00:04:33.430 15:46:01 setup.sh.acl.denied -- setup/acl.sh@30 -- # for dev in "$@" 00:04:33.430 15:46:01 setup.sh.acl.denied -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:5e:00.0 ]] 00:04:33.430 15:46:01 setup.sh.acl.denied -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:5e:00.0/driver 00:04:33.430 15:46:01 setup.sh.acl.denied -- 
setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:04:33.430 15:46:01 setup.sh.acl.denied -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:04:33.430 15:46:01 setup.sh.acl.denied -- setup/acl.sh@41 -- # setup reset 00:04:33.430 15:46:01 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:33.430 15:46:01 setup.sh.acl.denied -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:36.713 00:04:36.713 real 0m6.013s 00:04:36.713 user 0m1.779s 00:04:36.713 sys 0m3.263s 00:04:36.713 15:46:04 setup.sh.acl.denied -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:36.713 15:46:04 setup.sh.acl.denied -- common/autotest_common.sh@10 -- # set +x 00:04:36.713 ************************************ 00:04:36.713 END TEST denied 00:04:36.713 ************************************ 00:04:36.713 15:46:04 setup.sh.acl -- common/autotest_common.sh@1142 -- # return 0 00:04:36.713 15:46:04 setup.sh.acl -- setup/acl.sh@55 -- # run_test allowed allowed 00:04:36.713 15:46:04 setup.sh.acl -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:36.713 15:46:04 setup.sh.acl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:36.713 15:46:04 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:04:36.713 ************************************ 00:04:36.713 START TEST allowed 00:04:36.713 ************************************ 00:04:36.713 15:46:05 setup.sh.acl.allowed -- common/autotest_common.sh@1123 -- # allowed 00:04:36.713 15:46:05 setup.sh.acl.allowed -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:5e:00.0 00:04:36.713 15:46:05 setup.sh.acl.allowed -- setup/acl.sh@45 -- # setup output config 00:04:36.713 15:46:05 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ output == output ]] 00:04:36.713 15:46:05 setup.sh.acl.allowed -- setup/acl.sh@46 -- # grep -E '0000:5e:00.0 .*: nvme -> .*' 00:04:36.713 15:46:05 setup.sh.acl.allowed -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:40.066 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:04:40.066 15:46:08 setup.sh.acl.allowed -- setup/acl.sh@47 -- # verify 00:04:40.066 15:46:08 setup.sh.acl.allowed -- setup/acl.sh@28 -- # local dev driver 00:04:40.066 15:46:08 setup.sh.acl.allowed -- setup/acl.sh@48 -- # setup reset 00:04:40.066 15:46:08 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:40.066 15:46:08 setup.sh.acl.allowed -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:43.349 00:04:43.349 real 0m6.591s 00:04:43.349 user 0m2.002s 00:04:43.349 sys 0m3.728s 00:04:43.349 15:46:11 setup.sh.acl.allowed -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:43.349 15:46:11 setup.sh.acl.allowed -- common/autotest_common.sh@10 -- # set +x 00:04:43.349 ************************************ 00:04:43.349 END TEST allowed 00:04:43.349 ************************************ 00:04:43.349 15:46:11 setup.sh.acl -- common/autotest_common.sh@1142 -- # return 0 00:04:43.349 00:04:43.349 real 0m18.273s 00:04:43.349 user 0m5.769s 00:04:43.349 sys 0m10.741s 00:04:43.349 15:46:11 setup.sh.acl -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:43.349 15:46:11 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:04:43.349 ************************************ 00:04:43.349 END TEST acl 00:04:43.349 ************************************ 00:04:43.349 15:46:11 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:04:43.349 15:46:11 setup.sh -- 
setup/test-setup.sh@13 -- # run_test hugepages /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:04:43.349 15:46:11 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:43.349 15:46:11 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:43.349 15:46:11 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:43.349 ************************************ 00:04:43.349 START TEST hugepages 00:04:43.349 ************************************ 00:04:43.349 15:46:11 setup.sh.hugepages -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:04:43.349 * Looking for test storage... 00:04:43.349 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:04:43.349 15:46:11 setup.sh.hugepages -- setup/hugepages.sh@10 -- # nodes_sys=() 00:04:43.349 15:46:11 setup.sh.hugepages -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:04:43.349 15:46:11 setup.sh.hugepages -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:04:43.349 15:46:11 setup.sh.hugepages -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:04:43.349 15:46:11 setup.sh.hugepages -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:04:43.349 15:46:11 setup.sh.hugepages -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:04:43.349 15:46:11 setup.sh.hugepages -- setup/common.sh@17 -- # local get=Hugepagesize 00:04:43.349 15:46:11 setup.sh.hugepages -- setup/common.sh@18 -- # local node= 00:04:43.349 15:46:11 setup.sh.hugepages -- setup/common.sh@19 -- # local var val 00:04:43.349 15:46:11 setup.sh.hugepages -- setup/common.sh@20 -- # local mem_f mem 00:04:43.349 15:46:11 setup.sh.hugepages -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:43.349 15:46:11 setup.sh.hugepages -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:43.349 15:46:11 setup.sh.hugepages -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:43.349 15:46:11 setup.sh.hugepages -- setup/common.sh@28 -- # mapfile -t mem 00:04:43.349 15:46:11 setup.sh.hugepages -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:43.349 15:46:11 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:43.349 15:46:11 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:43.349 15:46:11 setup.sh.hugepages -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 173383156 kB' 'MemAvailable: 176254584 kB' 'Buffers: 3896 kB' 'Cached: 10149780 kB' 'SwapCached: 0 kB' 'Active: 7158460 kB' 'Inactive: 3507524 kB' 'Active(anon): 6766452 kB' 'Inactive(anon): 0 kB' 'Active(file): 392008 kB' 'Inactive(file): 3507524 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 515644 kB' 'Mapped: 186952 kB' 'Shmem: 6254144 kB' 'KReclaimable: 232692 kB' 'Slab: 809684 kB' 'SReclaimable: 232692 kB' 'SUnreclaim: 576992 kB' 'KernelStack: 20416 kB' 'PageTables: 8932 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 101982028 kB' 'Committed_AS: 8288012 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315292 kB' 'VmallocChunk: 0 kB' 'Percpu: 77952 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 
'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 2954196 kB' 'DirectMap2M: 15599616 kB' 'DirectMap1G: 183500800 kB'
[xtrace condensed, 00:04:43.349-350: setup/common.sh@31-32 -- # read -r var val _ over each /proc/meminfo line with IFS=': '; every key from MemTotal through HugePages_Surp fails [[ $var == Hugepagesize ]] and hits continue]
00:04:43.350 15:46:11 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:04:43.350 15:46:11 setup.sh.hugepages -- setup/common.sh@33 -- # echo 2048
00:04:43.350 15:46:11 setup.sh.hugepages -- setup/common.sh@33 -- # return 0
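The condensed scan above is the entire contract of the get_meminfo helper in setup/common.sh: split each /proc/meminfo line on ': ' and print the value of the first key that matches, here Hugepagesize -> 2048. A minimal self-contained sketch of that idiom, reconstructed from the trace rather than copied from the SPDK source:

    get_meminfo() {
        # Usage: get_meminfo <Key>  -> prints the value column for <Key>,
        # e.g. `get_meminfo Hugepagesize` prints 2048 on this machine.
        local get=$1 var val _
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] && echo "$val" && return 0
        done </proc/meminfo
        return 1
    }

The real helper also accepts an optional NUMA node and then reads /sys/devices/system/node/node<N>/meminfo instead, which is what the `local node=` and `[[ -e .../node/meminfo ]]` steps in the later calls below are selecting.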
00:04:43.350 15:46:11 setup.sh.hugepages -- setup/hugepages.sh@16 -- # default_hugepages=2048
00:04:43.350 15:46:11 setup.sh.hugepages -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
00:04:43.350 15:46:11 setup.sh.hugepages -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages
00:04:43.350 15:46:11 setup.sh.hugepages -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC
00:04:43.350 15:46:11 setup.sh.hugepages -- setup/hugepages.sh@22 -- # unset -v HUGEMEM
00:04:43.350 15:46:11 setup.sh.hugepages -- setup/hugepages.sh@23 -- # unset -v HUGENODE
00:04:43.350 15:46:11 setup.sh.hugepages -- setup/hugepages.sh@24 -- # unset -v NRHUGE
00:04:43.350 15:46:11 setup.sh.hugepages -- setup/hugepages.sh@207 -- # get_nodes
00:04:43.350 15:46:11 setup.sh.hugepages -- setup/hugepages.sh@27 -- # local node
00:04:43.350 15:46:11 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:43.350 15:46:11 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048
00:04:43.350 15:46:11 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:43.350 15:46:11 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0
00:04:43.350 15:46:11 setup.sh.hugepages -- setup/hugepages.sh@32 -- # no_nodes=2
00:04:43.350 15:46:11 setup.sh.hugepages -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:04:43.350 15:46:11 setup.sh.hugepages -- setup/hugepages.sh@208 -- # clear_hp
00:04:43.350 15:46:11 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp
[xtrace condensed, 00:04:43.350-351: setup/hugepages.sh@39-41 loops over both NUMA nodes and echoes 0 into each /sys/devices/system/node/node$node/hugepages/hugepages-*/nr_hugepages]
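clear_hp is writing 0 into the standard kernel hugepage knobs enumerated above; writing any count to these files (re)sizes the corresponding pool. For orientation, with the node number and count as illustrative values only:

    # Per-node pool of 2 MiB pages, the files clear_hp just zeroed:
    echo 0 > /sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages
    # Node-agnostic equivalents, recorded above as default_huge_nr and global_huge_nr:
    echo 1024 > /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
    echo 1024 > /proc/sys/vm/nr_hugepages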
00:04:43.351 15:46:11 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes
00:04:43.351 15:46:11 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes
00:04:43.351 15:46:11 setup.sh.hugepages -- setup/hugepages.sh@210 -- # run_test default_setup default_setup
00:04:43.351 15:46:11 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:04:43.351 15:46:11 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable
00:04:43.351 15:46:11 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:04:43.351 ************************************
00:04:43.351 START TEST default_setup
00:04:43.351 ************************************
00:04:43.351 15:46:11 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1123 -- # default_setup
00:04:43.351 15:46:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0
00:04:43.351 15:46:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@49 -- # local size=2097152
00:04:43.351 15:46:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@50 -- # (( 2 > 1 ))
00:04:43.351 15:46:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@51 -- # shift
00:04:43.351 15:46:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # node_ids=('0')
00:04:43.351 15:46:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # local node_ids
00:04:43.351 15:46:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:04:43.351 15:46:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:04:43.351 15:46:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0
00:04:43.351 15:46:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # user_nodes=('0')
00:04:43.351 15:46:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # local user_nodes
00:04:43.351 15:46:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:04:43.351 15:46:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:04:43.351 15:46:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # nodes_test=()
00:04:43.351 15:46:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # local -g nodes_test
00:04:43.351 15:46:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@69 -- # (( 1 > 0 ))
00:04:43.351 15:46:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:04:43.351 15:46:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024
00:04:43.351 15:46:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@73 -- # return 0
00:04:43.351 15:46:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@137 -- # setup output
00:04:43.351 15:46:11 setup.sh.hugepages.default_setup -- setup/common.sh@9 -- # [[ output == output ]]
00:04:43.351 15:46:11 setup.sh.hugepages.default_setup -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:04:45.249 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci
00:04:45.249 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci
00:04:45.507 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci
00:04:45.507 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci
00:04:45.507 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci
00:04:45.507 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci
00:04:45.507 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci
00:04:45.507 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci
00:04:45.507 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci
00:04:45.507 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci
00:04:45.507 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci
00:04:45.507 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci
00:04:45.507 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci
00:04:45.507 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci
00:04:45.507 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci
00:04:45.507 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci
00:04:46.449 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci
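The lines above are scripts/setup.sh detaching each PCI function from its kernel driver (ioatdma for the I/OAT DMA engines, nvme for the 0a54 SSD) and handing it to vfio-pci so user-space SPDK can drive it. The generic sysfs mechanism looks roughly like this; a sketch of the standard kernel interface (vfio-pci module assumed loaded), not necessarily the exact sequence setup.sh runs:

    bdf=0000:5e:00.0                                          # BDF from the log above
    echo "$bdf" > "/sys/bus/pci/devices/$bdf/driver/unbind"   # detach current driver
    echo vfio-pci > "/sys/bus/pci/devices/$bdf/driver_override"
    echo "$bdf" > /sys/bus/pci/drivers_probe                  # rebind, now to vfio-pci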
00:04:46.449 15:46:15 setup.sh.hugepages.default_setup -- setup/hugepages.sh@138 -- # verify_nr_hugepages
00:04:46.449 15:46:15 setup.sh.hugepages.default_setup -- setup/hugepages.sh@89 -- # local node
00:04:46.449 15:46:15 setup.sh.hugepages.default_setup -- setup/hugepages.sh@90 -- # local sorted_t
00:04:46.449 15:46:15 setup.sh.hugepages.default_setup -- setup/hugepages.sh@91 -- # local sorted_s
00:04:46.449 15:46:15 setup.sh.hugepages.default_setup -- setup/hugepages.sh@92 -- # local surp
00:04:46.449 15:46:15 setup.sh.hugepages.default_setup -- setup/hugepages.sh@93 -- # local resv
00:04:46.449 15:46:15 setup.sh.hugepages.default_setup -- setup/hugepages.sh@94 -- # local anon
00:04:46.449 15:46:15 setup.sh.hugepages.default_setup -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:04:46.449 15:46:15 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:04:46.449 15:46:15 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=AnonHugePages
00:04:46.449 15:46:15 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=
00:04:46.449 15:46:15 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val
00:04:46.449 15:46:15 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem
00:04:46.449 15:46:15 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:46.449 15:46:15 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:46.449 15:46:15 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:46.449 15:46:15 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
00:04:46.449 15:46:15 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:46.449 15:46:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:04:46.449 15:46:15 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 175537872 kB' 'MemAvailable: 178409284 kB' 'Buffers: 3896 kB' 'Cached: 10149884 kB' 'SwapCached: 0 kB' 'Active: 7177004 kB' 'Inactive: 3507524 kB' 'Active(anon): 6784996 kB' 'Inactive(anon): 0 kB' 'Active(file): 392008 kB' 'Inactive(file): 3507524 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 533564 kB' 'Mapped: 187048 kB' 'Shmem: 6254248 kB' 'KReclaimable: 232660 kB' 'Slab: 808052 kB' 'SReclaimable: 232660 kB' 'SUnreclaim: 575392 kB' 'KernelStack: 20480 kB' 'PageTables: 8900 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030604 kB' 'Committed_AS: 8305740 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315468 kB' 'VmallocChunk: 0 kB' 'Percpu: 77952 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2954196 kB' 'DirectMap2M: 15599616 kB' 'DirectMap1G: 183500800 kB'
[xtrace condensed, 00:04:46.449-450: setup/common.sh@31-32 -- # read -r var val _ over the snapshot; every key from MemTotal through HardwareCorrupted fails [[ $var == AnonHugePages ]] and hits continue]
00:04:46.450 15:46:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:46.450 15:46:15 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:04:46.450 15:46:15 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:04:46.450 15:46:15 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # anon=0
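The @96 test that gated this call is a transparent-hugepage guard: on this host /sys/kernel/mm/transparent_hugepage/enabled reads "always [madvise] never", the brackets marking the active mode, and AnonHugePages is only fetched when that mode is not [never]. The same logic as a sketch, reusing the get_meminfo sketch from earlier:

    thp=$(</sys/kernel/mm/transparent_hugepage/enabled)  # "always [madvise] never" here
    if [[ $thp != *"[never]"* ]]; then
        anon=$(get_meminfo AnonHugePages)                # 0 kB in the snapshot above
    else
        anon=0
    fi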
00:04:46.450 15:46:15 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
setup/common.sh@17 -- # local get=HugePages_Surp 00:04:46.450 15:46:15 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:04:46.450 15:46:15 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:04:46.450 15:46:15 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:46.450 15:46:15 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:46.450 15:46:15 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:46.450 15:46:15 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:46.450 15:46:15 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:46.450 15:46:15 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:46.450 15:46:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:46.450 15:46:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:46.450 15:46:15 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 175538708 kB' 'MemAvailable: 178410120 kB' 'Buffers: 3896 kB' 'Cached: 10149888 kB' 'SwapCached: 0 kB' 'Active: 7176260 kB' 'Inactive: 3507524 kB' 'Active(anon): 6784252 kB' 'Inactive(anon): 0 kB' 'Active(file): 392008 kB' 'Inactive(file): 3507524 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 533336 kB' 'Mapped: 186968 kB' 'Shmem: 6254252 kB' 'KReclaimable: 232660 kB' 'Slab: 808020 kB' 'SReclaimable: 232660 kB' 'SUnreclaim: 575360 kB' 'KernelStack: 20480 kB' 'PageTables: 8876 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030604 kB' 'Committed_AS: 8305760 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315468 kB' 'VmallocChunk: 0 kB' 'Percpu: 77952 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2954196 kB' 'DirectMap2M: 15599616 kB' 'DirectMap1G: 183500800 kB' 00:04:46.450 15:46:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.450 15:46:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:46.450 15:46:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:46.450 15:46:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:46.450 15:46:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.450 15:46:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:46.450 15:46:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:46.450 15:46:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:46.450 15:46:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.450 15:46:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:46.450 15:46:15 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # IFS=': ' 00:04:46.450 15:46:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:46.450 15:46:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.450 15:46:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:46.450 15:46:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:46.450 15:46:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:46.451 15:46:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.451 15:46:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:46.451 15:46:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:46.451 15:46:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:46.451 15:46:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.451 15:46:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:46.451 15:46:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:46.451 15:46:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:46.451 15:46:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.451 15:46:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:46.451 15:46:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:46.451 15:46:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:46.451 15:46:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.451 15:46:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:46.451 15:46:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:46.451 15:46:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:46.451 15:46:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.451 15:46:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:46.451 15:46:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:46.451 15:46:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:46.451 15:46:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.451 15:46:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:46.451 15:46:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:46.451 15:46:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:46.451 15:46:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.451 15:46:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:46.451 15:46:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:46.451 15:46:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:46.451 15:46:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.451 15:46:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:46.451 15:46:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:46.451 15:46:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:46.451 15:46:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.451 15:46:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:46.451 15:46:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:46.451 15:46:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:46.451 15:46:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.451 15:46:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:46.451 15:46:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:46.451 15:46:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:46.451 15:46:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.451 15:46:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:46.451 15:46:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:46.451 15:46:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:46.451 15:46:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.451 15:46:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:46.451 15:46:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:46.451 15:46:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:46.451 15:46:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.451 15:46:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:46.451 15:46:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:46.451 15:46:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:46.451 15:46:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.451 15:46:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:46.451 15:46:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:46.451 15:46:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:46.451 15:46:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.451 15:46:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:46.451 15:46:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:46.451 15:46:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:46.451 15:46:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.451 15:46:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:46.451 15:46:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:46.451 15:46:15 setup.sh.hugepages.default_setup 
-- setup/common.sh@31 -- # read -r var val _ 00:04:46.451 15:46:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.451 15:46:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:46.451 15:46:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:46.451 15:46:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:46.451 15:46:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.451 15:46:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:46.451 15:46:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:46.451 15:46:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:46.451 15:46:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.451 15:46:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:46.451 15:46:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:46.451 15:46:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:46.451 15:46:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.451 15:46:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:46.451 15:46:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:46.451 15:46:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:46.451 15:46:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.451 15:46:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:46.451 15:46:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:46.451 15:46:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:46.451 15:46:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.451 15:46:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:46.451 15:46:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:46.451 15:46:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:46.451 15:46:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.451 15:46:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:46.451 15:46:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:46.451 15:46:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:46.451 15:46:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.451 15:46:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:46.451 15:46:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:46.451 15:46:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:46.451 15:46:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.451 15:46:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # 
continue
00:04:46.451 15:46:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:04:46.451 15:46:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
[... xtrace loop elided: setup/common.sh@31-32 reads each remaining /proc/meminfo key (SecPageTables through HugePages_Rsvd), none of which matches HugePages_Surp, so every iteration hits continue ...]
00:04:46.451 15:46:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:46.452 15:46:15 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:04:46.452 15:46:15 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:04:46.452 15:46:15 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # surp=0
00:04:46.452 15:46:15 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
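The loop traced above is setup/common.sh's get_meminfo helper scanning a meminfo file one key at a time; the backslash-riddled patterns such as \H\u\g\e\P\a\g\e\s\_\S\u\r\p are only how bash xtrace renders a quoted literal on the right-hand side of == inside [[ ]], i.e. a plain string compare against HugePages_Surp. A minimal sketch of the helper, reconstructed from this trace alone (the shipped setup/common.sh may differ in detail); the get_meminfo HugePages_Rsvd call above expands the same way below:

    # Reconstruction of setup/common.sh's get_meminfo from the xtrace; extglob is
    # needed for the +([0-9]) pattern used to strip per-node "Node N " prefixes.
    shopt -s extglob
    get_meminfo() {
        local get=$1 node=$2                 # common.sh@17-18: key to look up, optional NUMA node
        local var val
        local mem_f mem
        mem_f=/proc/meminfo                  # common.sh@22: system-wide default
        if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo   # common.sh@23-24: per-node file
        fi
        mapfile -t mem < "$mem_f"            # common.sh@28: one array element per meminfo line
        mem=("${mem[@]#Node +([0-9]) }")     # common.sh@29: drop the "Node N " prefix, if any
        while IFS=': ' read -r var val _; do # common.sh@31: split "Key: value kB" into var/val
            [[ $var == "$get" ]] && echo "$val" && return 0    # common.sh@32-33
        done < <(printf '%s\n' "${mem[@]}")  # common.sh@16
        return 1
    }

In this run get_meminfo HugePages_Surp and get_meminfo HugePages_Rsvd both print 0, while get_meminfo HugePages_Total prints 1024; the per-node form get_meminfo HugePages_Surp 0, traced further below, reads node0's meminfo instead.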
00:04:46.452 15:46:15 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:04:46.452 15:46:15 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=
00:04:46.452 15:46:15 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val
00:04:46.452 15:46:15 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem
00:04:46.452 15:46:15 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:46.452 15:46:15 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:46.452 15:46:15 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:46.452 15:46:15 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
00:04:46.452 15:46:15 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:46.452 15:46:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:04:46.452 15:46:15 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 175539044 kB' 'MemAvailable: 178410456 kB' 'Buffers: 3896 kB' 'Cached: 10149904 kB' 'SwapCached: 0 kB' 'Active: 7176292 kB' 'Inactive: 3507524 kB' 'Active(anon): 6784284 kB' 'Inactive(anon): 0 kB' 'Active(file): 392008 kB' 'Inactive(file): 3507524 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 533336 kB' 'Mapped: 186968 kB' 'Shmem: 6254268 kB' 'KReclaimable: 232660 kB' 'Slab: 808020 kB' 'SReclaimable: 232660 kB' 'SUnreclaim: 575360 kB' 'KernelStack: 20480 kB' 'PageTables: 8876 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030604 kB' 'Committed_AS: 8305780 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315468 kB' 'VmallocChunk: 0 kB' 'Percpu: 77952 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2954196 kB' 'DirectMap2M: 15599616 kB' 'DirectMap1G: 183500800 kB'
00:04:46.452 15:46:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
[... xtrace loop elided: setup/common.sh@31-32 walks the same key list (MemTotal through HugePages_Free), none of which matches HugePages_Rsvd ...]
00:04:46.454 15:46:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:46.454 15:46:15 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:04:46.454 15:46:15 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:04:46.454 15:46:15 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # resv=0
00:04:46.454 15:46:15 setup.sh.hugepages.default_setup -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
nr_hugepages=1024
00:04:46.454 15:46:15 setup.sh.hugepages.default_setup -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
resv_hugepages=0
00:04:46.454 15:46:15 setup.sh.hugepages.default_setup -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
surplus_hugepages=0
00:04:46.454 15:46:15 setup.sh.hugepages.default_setup -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
anon_hugepages=0
00:04:46.454 15:46:15 setup.sh.hugepages.default_setup -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:46.454 15:46:15 setup.sh.hugepages.default_setup -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
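At hugepages.sh@107 and @109 the script asserts that the kernel's view of hugepages matches what the test configured: the configured total must equal the requested count plus surplus plus reserved pages. A hypothetical standalone rendering of that invariant, reusing the get_meminfo sketch above (the inline values are this run's):

    # Assumes the get_meminfo sketch above has been sourced; names mirror hugepages.sh.
    nr_hugepages=1024                          # requested by the test
    surp=$(get_meminfo HugePages_Surp)         # 0 in this run
    resv=$(get_meminfo HugePages_Rsvd)         # 0 in this run
    total=$(get_meminfo HugePages_Total)       # 1024 in this run
    (( total == nr_hugepages + surp + resv )) || echo 'hugepage accounting mismatch' >&2
    (( total == nr_hugepages )) || echo 'unexpected reserved/surplus pages' >&2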
00:04:46.454 15:46:15 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:04:46.454 15:46:15 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Total
00:04:46.454 15:46:15 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=
00:04:46.454 15:46:15 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val
00:04:46.454 15:46:15 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem
00:04:46.454 15:46:15 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:46.454 15:46:15 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:46.454 15:46:15 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:46.454 15:46:15 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
00:04:46.454 15:46:15 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:46.454 15:46:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:04:46.454 15:46:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:04:46.454 15:46:15 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 175539044 kB' 'MemAvailable: 178410456 kB' 'Buffers: 3896 kB' 'Cached: 10149928 kB' 'SwapCached: 0 kB' 'Active: 7176216 kB' 'Inactive: 3507524 kB' 'Active(anon): 6784208 kB' 'Inactive(anon): 0 kB' 'Active(file): 392008 kB' 'Inactive(file): 3507524 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 533212 kB' 'Mapped: 186968 kB' 'Shmem: 6254292 kB' 'KReclaimable: 232660 kB' 'Slab: 808020 kB' 'SReclaimable: 232660 kB' 'SUnreclaim: 575360 kB' 'KernelStack: 20464 kB' 'PageTables: 8824 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030604 kB' 'Committed_AS: 8305804 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315468 kB' 'VmallocChunk: 0 kB' 'Percpu: 77952 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2954196 kB' 'DirectMap2M: 15599616 kB' 'DirectMap1G: 183500800 kB'
[... xtrace loop elided: setup/common.sh@31-32 walks MemTotal through Unaccepted, none of which matches HugePages_Total ...]
00:04:46.714 15:46:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:46.714 15:46:15 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 1024
00:04:46.714 15:46:15 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:04:46.714 15:46:15 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:46.714 15:46:15 setup.sh.hugepages.default_setup -- setup/hugepages.sh@112 -- # get_nodes
00:04:46.715 15:46:15 setup.sh.hugepages.default_setup -- setup/hugepages.sh@27 -- # local node
00:04:46.715 15:46:15 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:46.715 15:46:15 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:04:46.715 15:46:15 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:46.715 15:46:15 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0
00:04:46.715 15:46:15 setup.sh.hugepages.default_setup -- setup/hugepages.sh@32 -- # no_nodes=2
00:04:46.715 15:46:15 setup.sh.hugepages.default_setup -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:04:46.715 15:46:15 setup.sh.hugepages.default_setup -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:46.715 15:46:15 setup.sh.hugepages.default_setup -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:46.715 15:46:15 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
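get_nodes discovers the NUMA layout straight from sysfs: every /sys/devices/system/node/nodeN directory becomes one array slot, with ${node##*node} stripping the path down to the numeric id. A hypothetical sketch of that walk follows; how the per-node page counts are chosen is not visible in this excerpt, which only records nodes_sys[0]=1024 and nodes_sys[1]=0 before the per-node get_meminfo HugePages_Surp 0 lookup below:

    # Hypothetical rendering of the get_nodes walk (hugepages.sh@27-33); extglob assumed.
    shopt -s extglob
    declare -a nodes_sys
    for node in /sys/devices/system/node/node+([0-9]); do
        nodes_sys[${node##*node}]=0   # ${node##*node} strips the path prefix: .../node0 -> 0
    done
    nodes_sys[0]=1024                 # this run: all 1024 pages accounted to node 0
    no_nodes=${#nodes_sys[@]}         # 2 on this machine
    (( no_nodes > 0 )) || echo 'no NUMA nodes found' >&2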
00:04:46.715 15:46:15 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:46.715 15:46:15 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=0
00:04:46.715 15:46:15 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val
00:04:46.715 15:46:15 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem
00:04:46.715 15:46:15 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:46.715 15:46:15 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:04:46.715 15:46:15 setup.sh.hugepages.default_setup -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:04:46.715 15:46:15 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
00:04:46.715 15:46:15 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:46.715 15:46:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:04:46.715 15:46:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:04:46.715 15:46:15 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 97662684 kB' 'MemFree: 85753512 kB' 'MemUsed: 11909172 kB' 'SwapCached: 0 kB' 'Active: 5348660 kB' 'Inactive: 3336416 kB' 'Active(anon): 5191120 kB' 'Inactive(anon): 0 kB' 'Active(file): 157540 kB' 'Inactive(file): 3336416 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8536868 kB' 'Mapped: 77752 kB' 'AnonPages: 151360 kB' 'Shmem: 5042912 kB' 'KernelStack: 11480 kB' 'PageTables: 4280 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 131736 kB' 'Slab: 404548 kB' 'SReclaimable: 131736 kB' 'SUnreclaim: 272812 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
[... xtrace loop elided: setup/common.sh@31-32 walks node0's keys (MemTotal onward), none matching HugePages_Surp; the excerpt cuts off mid-iteration at KReclaimable ...]
00:04:46.715 15:46:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 --
# [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.716 15:46:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:46.716 15:46:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:46.716 15:46:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:46.716 15:46:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.716 15:46:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:46.716 15:46:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:46.716 15:46:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:46.716 15:46:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.716 15:46:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:46.716 15:46:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:46.716 15:46:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:46.716 15:46:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.716 15:46:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:46.716 15:46:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:46.716 15:46:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:46.716 15:46:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.716 15:46:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:46.716 15:46:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:46.716 15:46:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:46.716 15:46:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.716 15:46:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:46.716 15:46:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:46.716 15:46:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:46.716 15:46:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.716 15:46:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:46.716 15:46:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:46.716 15:46:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:46.716 15:46:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.716 15:46:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:46.716 15:46:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:46.716 15:46:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:46.716 15:46:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.716 15:46:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:46.716 15:46:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 
00:04:46.716 15:46:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:46.716 15:46:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.716 15:46:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:46.716 15:46:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:46.716 15:46:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:46.716 15:46:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.716 15:46:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:46.716 15:46:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:46.716 15:46:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:46.716 15:46:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.716 15:46:15 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:04:46.716 15:46:15 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:04:46.716 15:46:15 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:46.716 15:46:15 setup.sh.hugepages.default_setup -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:46.716 15:46:15 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:46.716 15:46:15 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:46.716 15:46:15 setup.sh.hugepages.default_setup -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:46.716 node0=1024 expecting 1024 00:04:46.716 15:46:15 setup.sh.hugepages.default_setup -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:46.716 00:04:46.716 real 0m3.542s 00:04:46.716 user 0m1.024s 00:04:46.716 sys 0m1.694s 00:04:46.716 15:46:15 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:46.716 15:46:15 setup.sh.hugepages.default_setup -- common/autotest_common.sh@10 -- # set +x 00:04:46.716 ************************************ 00:04:46.716 END TEST default_setup 00:04:46.716 ************************************ 00:04:46.716 15:46:15 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:04:46.716 15:46:15 setup.sh.hugepages -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:04:46.716 15:46:15 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:46.716 15:46:15 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:46.716 15:46:15 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:46.716 ************************************ 00:04:46.716 START TEST per_node_1G_alloc 00:04:46.716 ************************************ 00:04:46.716 15:46:15 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1123 -- # per_node_1G_alloc 00:04:46.716 15:46:15 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@143 -- # local IFS=, 00:04:46.716 15:46:15 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 1 00:04:46.716 15:46:15 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:04:46.716 15:46:15 setup.sh.hugepages.per_node_1G_alloc -- 
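The scan condensed above is setup/common.sh's get_meminfo matching one key out of a /proc/meminfo listing. A minimal sketch of the same technique, assuming a streaming read rather than the script's mapfile-into-array approach (the function name is illustrative, not the SPDK helper itself):

    #!/usr/bin/env bash
    # Print the value column for one /proc/meminfo key, e.g. HugePages_Surp.
    get_meminfo_value() {
        local get=$1 var val _
        while IFS=': ' read -r var val _; do
            # Every non-matching key produces one skipped iteration -- the
            # "continue" churn that dominates the xtrace above.
            [[ $var == "$get" ]] || continue
            echo "$val"
            return 0
        done < /proc/meminfo
        return 1
    }

    get_meminfo_value HugePages_Surp    # prints 0 on this box

With IFS set to ': ', read splits "HugePages_Surp: 0" into var=HugePages_Surp and val=0, which is why the trace only ever inspects the first two fields.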
setup/hugepages.sh@50 -- # (( 3 > 1 )) 00:04:46.716 15:46:15 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@51 -- # shift 00:04:46.716 15:46:15 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # node_ids=('0' '1') 00:04:46.716 15:46:15 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:04:46.716 15:46:15 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:46.716 15:46:15 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:04:46.716 15:46:15 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 1 00:04:46.716 15:46:15 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0' '1') 00:04:46.716 15:46:15 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:46.716 15:46:15 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:04:46.716 15:46:15 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:46.716 15:46:15 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:46.716 15:46:15 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:46.716 15:46:15 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@69 -- # (( 2 > 0 )) 00:04:46.716 15:46:15 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:46.716 15:46:15 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:04:46.716 15:46:15 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:46.716 15:46:15 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:04:46.716 15:46:15 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@73 -- # return 0 00:04:46.716 15:46:15 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # NRHUGE=512 00:04:46.716 15:46:15 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # HUGENODE=0,1 00:04:46.716 15:46:15 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # setup output 00:04:46.716 15:46:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:46.716 15:46:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:49.248 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:04:49.248 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:04:49.248 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:04:49.248 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:04:49.248 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:04:49.248 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:04:49.248 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:04:49.248 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:04:49.248 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:04:49.248 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:04:49.248 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:04:49.248 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:04:49.248 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:04:49.248 
0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:04:49.248 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:04:49.248 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:04:49.248 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:04:49.248 15:46:18 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # nr_hugepages=1024 00:04:49.248 15:46:18 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # verify_nr_hugepages 00:04:49.248 15:46:18 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@89 -- # local node 00:04:49.248 15:46:18 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:49.248 15:46:18 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:49.248 15:46:18 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:49.248 15:46:18 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:49.248 15:46:18 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:49.248 15:46:18 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:49.248 15:46:18 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:49.248 15:46:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:49.248 15:46:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:04:49.248 15:46:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:49.248 15:46:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:49.248 15:46:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:49.248 15:46:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:49.248 15:46:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:49.248 15:46:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:49.248 15:46:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:49.248 15:46:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.248 15:46:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.249 15:46:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 175558144 kB' 'MemAvailable: 178429556 kB' 'Buffers: 3896 kB' 'Cached: 10150024 kB' 'SwapCached: 0 kB' 'Active: 7177860 kB' 'Inactive: 3507524 kB' 'Active(anon): 6785852 kB' 'Inactive(anon): 0 kB' 'Active(file): 392008 kB' 'Inactive(file): 3507524 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 534296 kB' 'Mapped: 187096 kB' 'Shmem: 6254388 kB' 'KReclaimable: 232660 kB' 'Slab: 807668 kB' 'SReclaimable: 232660 kB' 'SUnreclaim: 575008 kB' 'KernelStack: 20528 kB' 'PageTables: 9008 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030604 kB' 'Committed_AS: 8306272 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315468 kB' 'VmallocChunk: 0 kB' 'Percpu: 77952 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 
'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2954196 kB' 'DirectMap2M: 15599616 kB' 'DirectMap1G: 183500800 kB'
00:04:49.249 15:46:18 setup.sh.hugepages.per_node_1G_alloc -- [xtrace condensed: the same key scan runs against AnonHugePages, continuing past every non-match; the last comparisons and the hit follow] 00:04:49.513 15:46:18 setup.sh.hugepages.per_node_1G_alloc --
setup/common.sh@32 -- # continue 00:04:49.513 15:46:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.513 15:46:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.513 15:46:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.513 15:46:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:49.513 15:46:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.513 15:46:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.513 15:46:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.513 15:46:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:49.513 15:46:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.513 15:46:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.513 15:46:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.513 15:46:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:49.513 15:46:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.513 15:46:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.513 15:46:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.513 15:46:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:49.513 15:46:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.513 15:46:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.513 15:46:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.513 15:46:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:49.513 15:46:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.513 15:46:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.513 15:46:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.513 15:46:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:49.513 15:46:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.513 15:46:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.513 15:46:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.513 15:46:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:49.513 15:46:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.513 15:46:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.513 15:46:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.513 15:46:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:04:49.513 15:46:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 
0 00:04:49.513 15:46:18 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:49.513 15:46:18 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:49.513 15:46:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:49.513 15:46:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:04:49.513 15:46:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:49.513 15:46:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:49.513 15:46:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:49.513 15:46:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:49.513 15:46:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:49.513 15:46:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:49.513 15:46:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:49.513 15:46:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.513 15:46:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.513 15:46:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 175559508 kB' 'MemAvailable: 178430920 kB' 'Buffers: 3896 kB' 'Cached: 10150028 kB' 'SwapCached: 0 kB' 'Active: 7177212 kB' 'Inactive: 3507524 kB' 'Active(anon): 6785204 kB' 'Inactive(anon): 0 kB' 'Active(file): 392008 kB' 'Inactive(file): 3507524 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 534136 kB' 'Mapped: 186980 kB' 'Shmem: 6254392 kB' 'KReclaimable: 232660 kB' 'Slab: 807620 kB' 'SReclaimable: 232660 kB' 'SUnreclaim: 574960 kB' 'KernelStack: 20496 kB' 'PageTables: 8896 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030604 kB' 'Committed_AS: 8306292 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315452 kB' 'VmallocChunk: 0 kB' 'Percpu: 77952 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2954196 kB' 'DirectMap2M: 15599616 kB' 'DirectMap1G: 183500800 kB' 00:04:49.513 15:46:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.513 15:46:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:49.513 15:46:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.513 15:46:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.513 15:46:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.513 15:46:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:49.513 15:46:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.513 15:46:18 
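Each of these meminfo snapshots comes through the same helper. The trace shows it probing /sys/devices/system/node/node$node/meminfo (node is empty here, so it falls back to /proc/meminfo) and stripping the per-node "Node N " line prefix so one parser serves both files. A rough sketch of that branch, assuming extglob for the prefix pattern visible in the trace (function name illustrative):

    #!/usr/bin/env bash
    shopt -s extglob
    # Emit a normalized "Key: value" snapshot for the whole system or one NUMA node.
    meminfo_snapshot() {
        local node=$1 mem_f=/proc/meminfo mem
        # Per-node files prefix every line with "Node <n> ".
        if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")    # same strip as the xtrace above
        printf '%s\n' "${mem[@]}"
    }

    meminfo_snapshot      # whole system
    meminfo_snapshot 0    # node 0 only, prefix removed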
setup.sh.hugepages.per_node_1G_alloc -- [xtrace condensed: the key scan runs against HugePages_Surp, continuing past every non-match] 00:04:49.514 15:46:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp ==
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.515 15:46:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:04:49.515 15:46:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:49.515 15:46:18 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:49.515 15:46:18 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:49.515 15:46:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:49.515 15:46:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:04:49.515 15:46:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:49.515 15:46:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:49.515 15:46:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:49.515 15:46:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:49.515 15:46:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:49.515 15:46:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:49.515 15:46:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:49.515 15:46:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 175560148 kB' 'MemAvailable: 178431560 kB' 'Buffers: 3896 kB' 'Cached: 10150044 kB' 'SwapCached: 0 kB' 'Active: 7177380 kB' 'Inactive: 3507524 kB' 'Active(anon): 6785372 kB' 'Inactive(anon): 0 kB' 'Active(file): 392008 kB' 'Inactive(file): 3507524 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 534292 kB' 'Mapped: 186980 kB' 'Shmem: 6254408 kB' 'KReclaimable: 232660 kB' 'Slab: 807620 kB' 'SReclaimable: 232660 kB' 'SUnreclaim: 574960 kB' 'KernelStack: 20480 kB' 'PageTables: 8864 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030604 kB' 'Committed_AS: 8308932 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315484 kB' 'VmallocChunk: 0 kB' 'Percpu: 77952 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2954196 kB' 'DirectMap2M: 15599616 kB' 'DirectMap1G: 183500800 kB' 00:04:49.515 15:46:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.515 15:46:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.515 15:46:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.515 15:46:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:49.515 15:46:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.515 15:46:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.515 15:46:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.515 15:46:18 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:49.515 15:46:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.515 15:46:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.515 15:46:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.515 15:46:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:49.515 15:46:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.515 15:46:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.515 15:46:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.515 15:46:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:49.515 15:46:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.515 15:46:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.515 15:46:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.515 15:46:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:49.515 15:46:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.515 15:46:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.515 15:46:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.515 15:46:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:49.515 15:46:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.515 15:46:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.515 15:46:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.515 15:46:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:49.515 15:46:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.515 15:46:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.515 15:46:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.515 15:46:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:49.515 15:46:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.515 15:46:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.515 15:46:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.515 15:46:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:49.515 15:46:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.515 15:46:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.515 15:46:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.515 15:46:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:49.515 15:46:18 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.515 15:46:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.515 15:46:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.515 15:46:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:49.515 15:46:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.515 15:46:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.515 15:46:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.515 15:46:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:49.515 15:46:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.515 15:46:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.515 15:46:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.515 15:46:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:49.515 15:46:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.515 15:46:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.515 15:46:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.515 15:46:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:49.515 15:46:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.515 15:46:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.515 15:46:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.515 15:46:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:49.515 15:46:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.515 15:46:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.515 15:46:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.515 15:46:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:49.515 15:46:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.515 15:46:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.515 15:46:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.515 15:46:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:49.515 15:46:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.515 15:46:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.515 15:46:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.515 15:46:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:49.515 15:46:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.515 15:46:18 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.515 15:46:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.515 15:46:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:49.515 15:46:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.515 15:46:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.515 15:46:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.515 15:46:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:49.515 15:46:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.515 15:46:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.515 15:46:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.515 15:46:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:49.515 15:46:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.515 15:46:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.515 15:46:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.516 15:46:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:49.516 15:46:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.516 15:46:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.516 15:46:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.516 15:46:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:49.516 15:46:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.516 15:46:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.516 15:46:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.516 15:46:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:49.516 15:46:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.516 15:46:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.516 15:46:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.516 15:46:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:49.516 15:46:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.516 15:46:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.516 15:46:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.516 15:46:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:49.516 15:46:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.516 15:46:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.516 15:46:18 setup.sh.hugepages.per_node_1G_alloc 
-- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.516 15:46:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:49.516 15:46:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.516 15:46:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.516 15:46:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.516 15:46:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:49.516 15:46:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.516 15:46:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.516 15:46:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.516 15:46:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:49.516 15:46:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.516 15:46:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.516 15:46:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.516 15:46:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:49.516 15:46:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.516 15:46:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.516 15:46:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.516 15:46:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:49.516 15:46:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.516 15:46:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.516 15:46:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.516 15:46:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:49.516 15:46:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.516 15:46:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.516 15:46:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.516 15:46:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:49.516 15:46:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.516 15:46:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.516 15:46:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.516 15:46:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:49.516 15:46:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.516 15:46:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.516 15:46:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.516 15:46:18 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:49.516 15:46:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.516 15:46:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.516 15:46:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.516 15:46:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:49.516 15:46:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.516 15:46:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.516 15:46:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.516 15:46:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:49.516 15:46:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.516 15:46:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.516 15:46:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.516 15:46:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:49.516 15:46:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.516 15:46:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.516 15:46:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.516 15:46:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:49.516 15:46:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.516 15:46:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.516 15:46:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.516 15:46:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:49.516 15:46:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.516 15:46:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.516 15:46:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.516 15:46:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:49.516 15:46:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.516 15:46:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.516 15:46:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.516 15:46:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:49.516 15:46:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.516 15:46:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.516 15:46:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.516 15:46:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:49.516 15:46:18 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.516 15:46:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.516 15:46:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.516 15:46:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:49.516 15:46:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.516 15:46:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.516 15:46:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.516 15:46:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:49.516 15:46:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.516 15:46:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.516 15:46:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.516 15:46:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:49.516 15:46:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.516 15:46:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.516 15:46:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.516 15:46:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:49.516 15:46:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.516 15:46:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.516 15:46:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.516 15:46:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:49.516 15:46:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.516 15:46:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.516 15:46:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.516 15:46:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:49.516 15:46:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.516 15:46:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.516 15:46:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.516 15:46:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:49.516 15:46:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.516 15:46:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.516 15:46:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.516 15:46:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:04:49.516 15:46:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:49.516 15:46:18 
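The run above is the test computing the surplus and reserved hugepage counts: get_meminfo scans /proc/meminfo field by field and prints the value of the one field whose name matches its argument, so HugePages_Surp and HugePages_Rsvd both come back as 0 here. A minimal sketch of that helper, reconstructed from the xtrace alone (the function body below is inferred from the trace, not copied from the SPDK source):

    # get_meminfo FIELD [NODE] -- print FIELD's value from /proc/meminfo,
    # or from the node-local meminfo when NODE is given.
    # Reconstruction from the xtrace (setup/common.sh@17-33); details may
    # differ from the real setup/common.sh.
    get_meminfo() {
        local get=$1 node=$2 var val _
        local mem_f=/proc/meminfo
        # Per-node queries read the node's own meminfo (common.sh@23-24).
        [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
            mem_f=/sys/devices/system/node/node$node/meminfo
        # Node files prefix each line with "Node N "; strip it, mirroring
        # mem=("${mem[@]#Node +([0-9]) }") at common.sh@29.
        while IFS=': ' read -r var val _; do
            # First matching field wins (common.sh@32-33); every other
            # field is the long [[ ... ]] / continue run seen in the trace.
            [[ $var == "$get" ]] && echo "$val" && return 0
        done < <(sed -E 's/^Node [0-9]+ //' "$mem_f")
        return 1
    }

Called as get_meminfo HugePages_Rsvd against the snapshot above ('HugePages_Rsvd: 0'), this prints 0, which hugepages.sh stores as resv.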
00:04:49.516 15:46:18 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # resv=0
00:04:49.516 15:46:18 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:04:49.516 nr_hugepages=1024
00:04:49.516 15:46:18 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:04:49.516 resv_hugepages=0
00:04:49.516 15:46:18 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:04:49.516 surplus_hugepages=0
00:04:49.516 15:46:18 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:04:49.517 anon_hugepages=0
00:04:49.517 15:46:18 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:49.517 15:46:18 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
00:04:49.517 15:46:18 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:04:49.517 15:46:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:04:49.517 15:46:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=
00:04:49.517 15:46:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:04:49.517 15:46:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:49.517 15:46:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:49.517 15:46:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:49.517 15:46:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:49.517 15:46:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:49.517 15:46:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:49.517 15:46:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:49.517 15:46:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:49.517 15:46:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 175563660 kB' 'MemAvailable: 178435072 kB' 'Buffers: 3896 kB' 'Cached: 10150064 kB' 'SwapCached: 0 kB' 'Active: 7176008 kB' 'Inactive: 3507524 kB' 'Active(anon): 6784000 kB' 'Inactive(anon): 0 kB' 'Active(file): 392008 kB' 'Inactive(file): 3507524 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 532416 kB' 'Mapped: 185820 kB' 'Shmem: 6254428 kB' 'KReclaimable: 232660 kB' 'Slab: 807548 kB' 'SReclaimable: 232660 kB' 'SUnreclaim: 574888 kB' 'KernelStack: 20800 kB' 'PageTables: 9612 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030604 kB' 'Committed_AS: 8297528 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315644 kB' 'VmallocChunk: 0 kB' 'Percpu: 77952 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2954196 kB' 'DirectMap2M: 15599616 kB' 'DirectMap1G: 183500800 kB'
00:04:49.517 15:46:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [xtrace elided: read/compare loop; MemTotal through Unaccepted all fail the \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l match and continue]
00:04:49.518 15:46:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:49.518 15:46:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 1024
00:04:49.518 15:46:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:04:49.518 15:46:18 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:49.518 15:46:18 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@112 -- # get_nodes
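At this point hugepages.sh has surp=0, resv=0, and HugePages_Total=1024, and the two arithmetic checks above assert that the kernel's view matches the requested allocation. A hedged sketch of that accounting check, reusing the get_meminfo sketch from earlier (variable names and hugepages.sh line numbers follow the xtrace):

    # Mirrors the checks at hugepages.sh@107-110 in the trace above.
    nr_hugepages=1024                      # requested by the test
    surp=$(get_meminfo HugePages_Surp)     # 0 in this run
    resv=$(get_meminfo HugePages_Rsvd)     # 0 in this run
    total=$(get_meminfo HugePages_Total)   # 1024 in this run
    # Every page the kernel reports must be either requested, surplus,
    # or reserved; otherwise the setup step failed.
    (( total == nr_hugepages + surp + resv )) ||
        echo "hugepage accounting mismatch: $total != $nr_hugepages + $surp + $resv" >&2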
00:04:49.518 15:46:18 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@27 -- # local node
00:04:49.518 15:46:18 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:49.518 15:46:18 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:04:49.518 15:46:18 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:49.518 15:46:18 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:04:49.518 15:46:18 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
00:04:49.518 15:46:18 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:04:49.519 15:46:18 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:49.519 15:46:18 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:49.519 15:46:18 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:04:49.519 15:46:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:49.519 15:46:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=0
00:04:49.519 15:46:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:04:49.519 15:46:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:49.519 15:46:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:49.519 15:46:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:04:49.519 15:46:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:04:49.519 15:46:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:49.519 15:46:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:49.519 15:46:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:49.519 15:46:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:49.519 15:46:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 97662684 kB' 'MemFree: 86820412 kB' 'MemUsed: 10842272 kB' 'SwapCached: 0 kB' 'Active: 5350144 kB' 'Inactive: 3336416 kB' 'Active(anon): 5192604 kB' 'Inactive(anon): 0 kB' 'Active(file): 157540 kB' 'Inactive(file): 3336416 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8536956 kB' 'Mapped: 77344 kB' 'AnonPages: 152768 kB' 'Shmem: 5043000 kB' 'KernelStack: 11528 kB' 'PageTables: 4488 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 131736 kB' 'Slab: 404588 kB' 'SReclaimable: 131736 kB' 'SUnreclaim: 272852 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
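get_nodes just found two NUMA nodes and recorded an expected split of 512 pages apiece (nodes_sys[0]=512, nodes_sys[1]=512, no_nodes=2); the loop that follows re-reads HugePages_Surp from each node's own meminfo file, starting with node0 above. A sketch of that per-node walk, again assuming the get_meminfo reconstruction from earlier (the nodes_sys name, the node+([0-9]) glob, and the 512-page split are taken from the trace; the real hugepages.sh may differ):

    # Per-node check mirroring hugepages.sh@29-33 and @115-117.
    shopt -s extglob                   # for the node+([0-9]) glob used in the trace
    declare -A nodes_sys
    for node in /sys/devices/system/node/node+([0-9]); do
        nodes_sys[${node##*node}]=512  # this test expects 512 pages per node
    done
    no_nodes=${#nodes_sys[@]}          # 2 on this machine
    for node in "${!nodes_sys[@]}"; do
        surp=$(get_meminfo HugePages_Surp "$node")  # node-local surplus
        echo "node$node: expect ${nodes_sys[$node]} pages, surplus $surp"
    done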
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.519 15:46:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.519 15:46:18 [xtrace condensed: the common.sh@31/@32 read-compare-continue cycle repeats for every remaining node0 meminfo key -- MemFree, MemUsed, SwapCached, Active/Inactive plus their (anon) and (file) variants, Unevictable, Mlocked, Dirty, Writeback, FilePages, Mapped, AnonPages, Shmem, KernelStack, PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp, KReclaimable, Slab, SReclaimable, SUnreclaim, AnonHugePages, ShmemHugePages, ShmemPmdMapped, FileHugePages, FilePmdMapped, Unaccepted -- and none of them matches HugePages_Surp] 00:04:49.520 15:46:18
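A note on the backslash-heavy tokens in the scan above: they are not corruption. When the right-hand side of == inside [[ ]] is a quoted expansion, bash xtrace backslash-escapes each character to show that it matches literally rather than as a glob, which is why HugePages_Surp is echoed as \H\u\g\e\P\a\g\e\s\_\S\u\r\p. A minimal standalone demo of that behavior (hypothetical snippet, not part of the SPDK scripts):

    set -x
    var=HugePages_Surp get=HugePages_Surp
    # xtrace renders the quoted "$get" as \H\u\g\e\P\a\g\e\s\_\S\u\r\p:
    # + [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
    [[ $var == "$get" ]] && echo match
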
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.520 15:46:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.520 15:46:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.520 15:46:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:49.520 15:46:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.520 15:46:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.520 15:46:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.520 15:46:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:49.520 15:46:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.520 15:46:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.520 15:46:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.520 15:46:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:04:49.520 15:46:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:49.520 15:46:18 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:49.520 15:46:18 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:49.520 15:46:18 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:49.520 15:46:18 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:04:49.520 15:46:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:49.520 15:46:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=1 00:04:49.520 15:46:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:49.520 15:46:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:49.520 15:46:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:49.520 15:46:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:04:49.520 15:46:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:04:49.520 15:46:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:49.520 15:46:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:49.520 15:46:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.520 15:46:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.520 15:46:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 93718468 kB' 'MemFree: 88744448 kB' 'MemUsed: 4974020 kB' 'SwapCached: 0 kB' 'Active: 1825276 kB' 'Inactive: 171108 kB' 'Active(anon): 1590808 kB' 'Inactive(anon): 0 kB' 'Active(file): 234468 kB' 'Inactive(file): 171108 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1617032 kB' 'Mapped: 108476 kB' 'AnonPages: 379448 kB' 'Shmem: 1211456 kB' 
'KernelStack: 9112 kB' 'PageTables: 4920 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 100924 kB' 'Slab: 403056 kB' 'SReclaimable: 100924 kB' 'SUnreclaim: 302132 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:49.520 15:46:18 [xtrace condensed: the same @31/@32 read-compare-continue scan walks the node1 meminfo keys from MemTotal through FileHugePages, skipping each key that is not HugePages_Surp] 00:04:49.521 15:46:18
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.521 15:46:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:49.521 15:46:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.521 15:46:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.521 15:46:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.521 15:46:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:49.521 15:46:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.521 15:46:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.521 15:46:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.521 15:46:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:49.521 15:46:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.521 15:46:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.521 15:46:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.521 15:46:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:49.521 15:46:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.521 15:46:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.521 15:46:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.521 15:46:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:04:49.521 15:46:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:49.521 15:46:18 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:49.521 15:46:18 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:49.521 15:46:18 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:49.521 15:46:18 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:49.521 15:46:18 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:04:49.521 node0=512 expecting 512 00:04:49.521 15:46:18 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:49.521 15:46:18 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:49.521 15:46:18 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:49.521 15:46:18 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:04:49.521 node1=512 expecting 512 00:04:49.521 15:46:18 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:04:49.521 00:04:49.521 real 0m2.871s 00:04:49.521 user 0m1.147s 00:04:49.521 sys 0m1.746s 00:04:49.522 15:46:18 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:49.522 15:46:18 
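The loop that just finished is setup/common.sh's get_meminfo helper scanning a per-node meminfo file for a single key. Below is a sketch of that helper reconstructed from the @17-@33 trace tags above; treat it as an approximation for readability, not the exact SPDK source:

    shopt -s extglob   # for the +([0-9]) pattern used below

    # get_meminfo KEY [NODE]: print KEY's value from /proc/meminfo, or from
    # /sys/devices/system/node/nodeN/meminfo when NODE is given.
    get_meminfo() {
        local get=$1 node=$2
        local var val _ line
        local mem_f=/proc/meminfo mem
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        # per-node lines look like "Node 1 HugePages_Surp: 0"; drop the prefix
        mem=("${mem[@]#Node +([0-9]) }")
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] || continue   # the long scan seen in the trace
            echo "$val"
            return 0
        done
        return 1
    }

    get_meminfo HugePages_Surp 1   # prints 0 for the node1 dump shown above
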
setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:49.522 ************************************ 00:04:49.522 END TEST per_node_1G_alloc 00:04:49.522 ************************************ 00:04:49.522 15:46:18 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:04:49.522 15:46:18 setup.sh.hugepages -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:04:49.522 15:46:18 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:49.522 15:46:18 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:49.522 15:46:18 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:49.522 ************************************ 00:04:49.522 START TEST even_2G_alloc 00:04:49.522 ************************************ 00:04:49.522 15:46:18 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1123 -- # even_2G_alloc 00:04:49.522 15:46:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152 00:04:49.522 15:46:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:04:49.522 15:46:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:49.522 15:46:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:49.522 15:46:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:49.522 15:46:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:49.522 15:46:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:49.522 15:46:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:49.522 15:46:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:49.522 15:46:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:49.522 15:46:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:49.522 15:46:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:49.522 15:46:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:49.522 15:46:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:04:49.522 15:46:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:49.522 15:46:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:04:49.522 15:46:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 512 00:04:49.522 15:46:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 1 00:04:49.522 15:46:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:49.522 15:46:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:04:49.522 15:46:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 0 00:04:49.522 15:46:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 0 00:04:49.522 15:46:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:49.522 15:46:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # NRHUGE=1024 00:04:49.522 15:46:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes 00:04:49.522 15:46:18 
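Worth making the arithmetic at @49-@84 explicit: the test requests 2097152 kB (2 GiB) of hugepages, the default hugepage size on this machine is 2048 kB, so nr_hugepages becomes 1024, and with two NUMA nodes each node gets 512 pages, assigned from the highest index down. A simplified sketch of the computation (the values are from the trace; this is not the script's exact control flow):

    size_kb=2097152        # requested total in kB (@49)
    hugepagesize_kb=2048   # "Hugepagesize: 2048 kB" per the dumps below
    nodes=2                # _no_nodes=2 (@65)

    nr_hugepages=$(( size_kb / hugepagesize_kb ))   # 2097152 / 2048 = 1024
    per_node=$(( nr_hugepages / nodes ))            # 512
    declare -a nodes_test
    for (( i = nodes - 1; i >= 0; i-- )); do
        nodes_test[i]=$per_node    # filled last index first, as in @81-@82
    done
    echo "${nodes_test[@]}"        # -> 512 512
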
setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # setup output 00:04:49.522 15:46:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:49.522 15:46:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:52.052 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:04:52.052 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:04:52.052 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:04:52.312 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:04:52.312 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:04:52.312 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:04:52.312 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:04:52.312 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:04:52.312 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:04:52.312 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:04:52.312 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:04:52.312 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:04:52.313 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:04:52.313 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:04:52.313 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:04:52.313 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:04:52.313 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:04:52.313 15:46:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@154 -- # verify_nr_hugepages 00:04:52.313 15:46:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@89 -- # local node 00:04:52.313 15:46:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:52.313 15:46:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:52.313 15:46:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:52.313 15:46:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:52.313 15:46:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:52.313 15:46:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:52.313 15:46:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:52.313 15:46:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:52.313 15:46:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:04:52.313 15:46:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:52.313 15:46:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:52.313 15:46:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:52.313 15:46:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:52.313 15:46:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:52.313 15:46:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:52.313 15:46:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:52.313 15:46:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.313 15:46:21 
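The @96 test just above gates on transparent hugepages: the string "always [madvise] never" is the content of /sys/kernel/mm/transparent_hugepage/enabled, with the active mode in brackets, and the AnonHugePages counter only matters when the mode is not [never]. Approximately, reusing the get_meminfo sketch from earlier (the backslashes in the trace are again xtrace's literal-pattern escaping):

    thp_mode=$(</sys/kernel/mm/transparent_hugepage/enabled)
    anon=0
    if [[ $thp_mode != *"[never]"* ]]; then
        # THP may be in use; record THP-backed anonymous memory
        anon=$(get_meminfo AnonHugePages)   # 0 kB in the dump below
    fi
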
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.313 15:46:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 175552460 kB' 'MemAvailable: 178423872 kB' 'Buffers: 3896 kB' 'Cached: 10150176 kB' 'SwapCached: 0 kB' 'Active: 7176452 kB' 'Inactive: 3507524 kB' 'Active(anon): 6784444 kB' 'Inactive(anon): 0 kB' 'Active(file): 392008 kB' 'Inactive(file): 3507524 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 533080 kB' 'Mapped: 185912 kB' 'Shmem: 6254540 kB' 'KReclaimable: 232660 kB' 'Slab: 806568 kB' 'SReclaimable: 232660 kB' 'SUnreclaim: 573908 kB' 'KernelStack: 20704 kB' 'PageTables: 8756 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030604 kB' 'Committed_AS: 8298004 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315660 kB' 'VmallocChunk: 0 kB' 'Percpu: 77952 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2954196 kB' 'DirectMap2M: 15599616 kB' 'DirectMap1G: 183500800 kB' 00:04:52.313 15:46:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.313 15:46:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.313 15:46:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.313 15:46:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.313 15:46:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.313 15:46:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.313 15:46:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.313 15:46:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.313 15:46:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.313 15:46:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.313 15:46:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.313 15:46:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.313 15:46:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.313 15:46:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.313 15:46:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.313 15:46:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.313 15:46:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.313 15:46:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.313 15:46:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.313 15:46:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.313 15:46:21 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.313 15:46:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.313 15:46:21 [xtrace condensed: the read-compare-continue cycle repeats for every /proc/meminfo key from Active through HardwareCorrupted, skipping each one; only the final AnonHugePages entry below matches] 00:04:52.314 15:46:21 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:52.314 15:46:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.314 15:46:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.314 15:46:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:52.314 15:46:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:52.314 15:46:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:52.314 15:46:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:52.314 15:46:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:52.314 15:46:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:04:52.314 15:46:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:52.314 15:46:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:52.314 15:46:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:52.314 15:46:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:52.314 15:46:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:52.314 15:46:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:52.314 15:46:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:52.314 15:46:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.314 15:46:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.314 15:46:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 175552048 kB' 'MemAvailable: 178423460 kB' 'Buffers: 3896 kB' 'Cached: 10150180 kB' 'SwapCached: 0 kB' 'Active: 7176208 kB' 'Inactive: 3507524 kB' 'Active(anon): 6784200 kB' 'Inactive(anon): 0 kB' 'Active(file): 392008 kB' 'Inactive(file): 3507524 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 532904 kB' 'Mapped: 185876 kB' 'Shmem: 6254544 kB' 'KReclaimable: 232660 kB' 'Slab: 806872 kB' 'SReclaimable: 232660 kB' 'SUnreclaim: 574212 kB' 'KernelStack: 20736 kB' 'PageTables: 9124 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030604 kB' 'Committed_AS: 8298020 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315660 kB' 'VmallocChunk: 0 kB' 'Percpu: 77952 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2954196 kB' 'DirectMap2M: 15599616 kB' 'DirectMap1G: 183500800 kB' 00:04:52.314 15:46:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.314 15:46:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.314 15:46:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.314 15:46:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 
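Two details around the @99 get_meminfo HugePages_Surp call above. First, HugePages_Surp counts surplus pages, ones the kernel allocated beyond nr_hugepages via overcommit, so a healthy static pool reports 0 surplus with HugePages_Total and HugePages_Free at the configured 1024. Second, the system-wide dump here differs in shape from the per-node dumps earlier: /proc/meminfo lines are plain "Key: value", while the node files prefix each line with "Node N". Hypothetical one-liners for pulling one key out of each format:

    # system-wide: "HugePages_Surp:        0"
    awk '$1 == "HugePages_Surp:" { print $2 }' /proc/meminfo

    # per-node: "Node 1 HugePages_Surp:    0"
    awk '$3 == "HugePages_Surp:" { print $4 }' /sys/devices/system/node/node1/meminfo
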
[ xtrace condensed: setup/common.sh@31-32 read each remaining /proc/meminfo key from MemFree through FileHugePages; none matches HugePages_Surp, so every iteration hits 'continue' -- near-identical records omitted ]
setup/common.sh@31 -- # read -r var val _ 00:04:52.316 15:46:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.316 15:46:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.316 15:46:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.316 15:46:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.316 15:46:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.316 15:46:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.316 15:46:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.316 15:46:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.316 15:46:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.316 15:46:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.316 15:46:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.316 15:46:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.316 15:46:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.316 15:46:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.316 15:46:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.316 15:46:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.316 15:46:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.316 15:46:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.316 15:46:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.316 15:46:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.316 15:46:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.316 15:46:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.316 15:46:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.316 15:46:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.316 15:46:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.316 15:46:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.316 15:46:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.316 15:46:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.316 15:46:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.316 15:46:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:52.316 15:46:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:52.316 15:46:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:52.316 15:46:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:52.316 15:46:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local 
get=HugePages_Rsvd 00:04:52.316 15:46:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:04:52.316 15:46:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:52.316 15:46:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:52.316 15:46:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:52.316 15:46:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:52.316 15:46:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:52.316 15:46:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:52.316 15:46:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:52.316 15:46:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 175553472 kB' 'MemAvailable: 178424884 kB' 'Buffers: 3896 kB' 'Cached: 10150196 kB' 'SwapCached: 0 kB' 'Active: 7175492 kB' 'Inactive: 3507524 kB' 'Active(anon): 6783484 kB' 'Inactive(anon): 0 kB' 'Active(file): 392008 kB' 'Inactive(file): 3507524 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 531628 kB' 'Mapped: 185936 kB' 'Shmem: 6254560 kB' 'KReclaimable: 232660 kB' 'Slab: 806956 kB' 'SReclaimable: 232660 kB' 'SUnreclaim: 574296 kB' 'KernelStack: 20368 kB' 'PageTables: 8596 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030604 kB' 'Committed_AS: 8296180 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315468 kB' 'VmallocChunk: 0 kB' 'Percpu: 77952 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2954196 kB' 'DirectMap2M: 15599616 kB' 'DirectMap1G: 183500800 kB' 00:04:52.316 15:46:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.316 15:46:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.316 15:46:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.316 15:46:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.316 15:46:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.316 15:46:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.316 15:46:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.316 15:46:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.316 15:46:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.316 15:46:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.316 15:46:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.316 15:46:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.316 15:46:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.316 
[ xtrace condensed: setup/common.sh@31-32 read each remaining /proc/meminfo key from Buffers through CmaTotal; none matches HugePages_Rsvd, so every iteration hits 'continue' -- near-identical records omitted ]
setup/common.sh@31 -- # IFS=': ' 00:04:52.318 15:46:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.318 15:46:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.318 15:46:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.318 15:46:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.318 15:46:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.318 15:46:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.318 15:46:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.318 15:46:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.318 15:46:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.318 15:46:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.318 15:46:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.318 15:46:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.318 15:46:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.318 15:46:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.318 15:46:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.318 15:46:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.318 15:46:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.318 15:46:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.318 15:46:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:52.318 15:46:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:52.318 15:46:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:52.318 15:46:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:52.318 nr_hugepages=1024 00:04:52.318 15:46:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:52.318 resv_hugepages=0 00:04:52.318 15:46:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:52.318 surplus_hugepages=0 00:04:52.318 15:46:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:52.318 anon_hugepages=0 00:04:52.318 15:46:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:52.318 15:46:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:52.318 15:46:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:52.318 15:46:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:52.318 15:46:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:04:52.318 15:46:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:52.318 15:46:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:52.318 15:46:21 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:52.318 15:46:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:52.318 15:46:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:52.318 15:46:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:52.318 15:46:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:52.578 15:46:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.578 15:46:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.578 15:46:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 175554452 kB' 'MemAvailable: 178425864 kB' 'Buffers: 3896 kB' 'Cached: 10150220 kB' 'SwapCached: 0 kB' 'Active: 7174904 kB' 'Inactive: 3507524 kB' 'Active(anon): 6782896 kB' 'Inactive(anon): 0 kB' 'Active(file): 392008 kB' 'Inactive(file): 3507524 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 531540 kB' 'Mapped: 185876 kB' 'Shmem: 6254584 kB' 'KReclaimable: 232660 kB' 'Slab: 806956 kB' 'SReclaimable: 232660 kB' 'SUnreclaim: 574296 kB' 'KernelStack: 20352 kB' 'PageTables: 8356 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030604 kB' 'Committed_AS: 8295084 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315372 kB' 'VmallocChunk: 0 kB' 'Percpu: 77952 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2954196 kB' 'DirectMap2M: 15599616 kB' 'DirectMap1G: 183500800 kB' 00:04:52.578 15:46:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.578 15:46:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.578 15:46:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.578 15:46:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.578 15:46:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.578 15:46:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.578 15:46:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.578 15:46:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.578 15:46:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.578 15:46:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.578 15:46:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.578 15:46:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.578 15:46:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.578 15:46:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.578 
[ xtrace condensed: setup/common.sh@31-32 read each remaining /proc/meminfo key from Cached through CmaFree; none matches HugePages_Total, so every iteration hits 'continue' -- near-identical records omitted; the trace resumes below at the matching HugePages_Total record ]
00:04:52.580 15:46:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.580 15:46:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.580 15:46:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.580 15:46:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.580 15:46:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:52.580 15:46:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.580 15:46:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.580 15:46:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.580 15:46:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 1024 00:04:52.580 15:46:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:52.580 15:46:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:52.580 15:46:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:52.580 15:46:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@27 -- # local node 00:04:52.580 15:46:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:52.580 15:46:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:52.580 15:46:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:52.580 15:46:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:52.580 15:46:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:52.580 15:46:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:52.580 15:46:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:52.580 15:46:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:52.580 15:46:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:52.580 15:46:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:52.580 15:46:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=0 00:04:52.580 15:46:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:52.580 15:46:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:52.580 15:46:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:52.580 15:46:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:52.580 15:46:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:52.580 15:46:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:52.580 15:46:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:52.580 15:46:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.580 15:46:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 
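Note on the trace above: the helper entered at setup/hugepages.sh@117 is get_meminfo from setup/common.sh, and the backslash-escaped strings (\H\u\g\e\P\a\g\e\s\_\S\u\r\p) are just how bash's xtrace renders a quoted, literal right-hand side of [[ == ]]. A minimal standalone sketch of the same lookup pattern, assuming bash 4+ (get_meminfo_sketch is an illustrative name, not the upstream code verbatim):

    #!/usr/bin/env bash
    shopt -s extglob   # needed for the +([0-9]) pattern below
    # Look up one field ($1) in /proc/meminfo, or in a per-NUMA-node
    # meminfo file when a node number is given as $2.
    get_meminfo_sketch() {
        local get=$1 node=${2-}
        local mem_f=/proc/meminfo
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        local -a mem
        mapfile -t mem < "$mem_f"
        # Per-node meminfo lines carry a "Node N " prefix; strip it.
        mem=("${mem[@]#Node +([0-9]) }")
        local line var val _
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            # Quoted RHS forces a literal (non-glob) comparison.
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done
        return 1
    }
    get_meminfo_sketch HugePages_Total     # -> 1024 on this runner
    get_meminfo_sketch HugePages_Surp 0    # -> 0 for NUMA node 0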
00:04:52.580 15:46:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:52.580 15:46:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=0
00:04:52.580 15:46:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:04:52.580 15:46:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:52.580 15:46:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:52.580 15:46:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:04:52.580 15:46:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:04:52.580 15:46:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:52.580 15:46:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:52.580 15:46:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:52.580 15:46:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:52.581 15:46:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 97662684 kB' 'MemFree: 86798092 kB' 'MemUsed: 10864592 kB' 'SwapCached: 0 kB' 'Active: 5349008 kB' 'Inactive: 3336416 kB' 'Active(anon): 5191468 kB' 'Inactive(anon): 0 kB' 'Active(file): 157540 kB' 'Inactive(file): 3336416 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8537100 kB' 'Mapped: 77388 kB' 'AnonPages: 151536 kB' 'Shmem: 5043144 kB' 'KernelStack: 11496 kB' 'PageTables: 4332 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 131736 kB' 'Slab: 404296 kB' 'SReclaimable: 131736 kB' 'SUnreclaim: 272560 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
00:04:52.581 15:46:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:52.581 15:46:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue
[xtrace for the node0 scan elided -- MemFree through HugePages_Free all take the continue branch]
00:04:52.582 15:46:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:52.582 15:46:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:04:52.582 15:46:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:04:52.582 15:46:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:52.582 15:46:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:52.582 15:46:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:52.582 15:46:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1
00:04:52.582 15:46:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:52.582 15:46:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=1
00:04:52.582 15:46:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:04:52.582 15:46:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:52.582 15:46:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:52.583 15:46:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]]
00:04:52.583 15:46:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo
00:04:52.583 15:46:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:52.583 15:46:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:52.583 15:46:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:52.583 15:46:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:52.583 15:46:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 93718468 kB' 'MemFree: 88757996 kB' 'MemUsed: 4960472 kB' 'SwapCached: 0 kB' 'Active: 1825908 kB' 'Inactive: 171108 kB' 'Active(anon): 1591440 kB' 'Inactive(anon): 0 kB' 'Active(file): 234468 kB' 'Inactive(file): 171108 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1617064 kB' 'Mapped: 108488 kB' 'AnonPages: 380060 kB' 'Shmem: 1211488 kB' 'KernelStack: 8920 kB' 'PageTables: 4272 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 100924 kB' 'Slab: 402596 kB' 'SReclaimable: 100924 kB' 'SUnreclaim: 301672 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
00:04:52.583 15:46:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:52.583 15:46:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue
[xtrace for the node1 scan elided -- MemFree through HugePages_Free all take the continue branch]
00:04:52.585 15:46:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:52.585 15:46:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:04:52.585 15:46:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:04:52.585 15:46:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:52.585 15:46:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:52.585 15:46:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:52.585 15:46:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:52.585 15:46:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
00:04:52.585 node0=512 expecting 512
00:04:52.585 15:46:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:52.585 15:46:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:52.585 15:46:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:52.585 15:46:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512'
00:04:52.585 node1=512 expecting 512
00:04:52.585 15:46:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]]
00:04:52.585
00:04:52.585 real	0m2.905s
00:04:52.585 user	0m1.215s
00:04:52.585 sys	0m1.754s
00:04:52.585 15:46:21 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable
00:04:52.585 15:46:21 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@10 -- # set +x
00:04:52.585 ************************************
00:04:52.585 END TEST even_2G_alloc
00:04:52.585 ************************************
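What even_2G_alloc just verified above: 1024 global 2 MiB hugepages split evenly, 512 per NUMA node, with reserved and surplus pages folded into each node's expected count before comparison. A hedged sketch of that bookkeeping, reusing get_meminfo_sketch from earlier (variable names are illustrative; resv is 0 in this run):

    # Expected split for HUGEMEM=2048: 1024 pages over 2 nodes.
    resv=0
    declare -a nodes_test=(512 512)   # what the test asked each node for
    for node in 0 1; do
        surp=$(get_meminfo_sketch HugePages_Surp "$node")
        (( nodes_test[node] += resv + surp ))
        actual=$(get_meminfo_sketch HugePages_Total "$node")
        echo "node$node=$actual expecting ${nodes_test[node]}"
    done
    # Both lines should read "nodeN=512 expecting 512", as in the log above.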
00:04:52.585 15:46:21 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0
00:04:52.585 15:46:21 setup.sh.hugepages -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc
00:04:52.585 15:46:21 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:04:52.585 15:46:21 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable
00:04:52.585 15:46:21 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:04:52.585 ************************************
00:04:52.585 START TEST odd_alloc
00:04:52.585 ************************************
00:04:52.585 15:46:21 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1123 -- # odd_alloc
00:04:52.585 15:46:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176
00:04:52.585 15:46:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@49 -- # local size=2098176
00:04:52.585 15:46:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:04:52.585 15:46:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:04:52.585 15:46:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1025
00:04:52.585 15:46:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:04:52.585 15:46:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # user_nodes=()
00:04:52.585 15:46:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:04:52.585 15:46:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025
00:04:52.585 15:46:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:04:52.585 15:46:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:04:52.585 15:46:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:04:52.585 15:46:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:04:52.585 15:46:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:04:52.585 15:46:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:04:52.585 15:46:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512
00:04:52.585 15:46:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 513
00:04:52.585 15:46:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 1
00:04:52.585 15:46:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:04:52.585 15:46:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=513
00:04:52.585 15:46:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 0
00:04:52.585 15:46:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 0
00:04:52.585 15:46:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:04:52.585 15:46:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGEMEM=2049
00:04:52.585 15:46:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes
00:04:52.585 15:46:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # setup output
00:04:52.585 15:46:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:04:52.585 15:46:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
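The @81-@84 loop above is the interesting part of odd_alloc: 1025 pages cannot split evenly across two nodes, so integer division hands node1 512 pages and the remainder lands on node0. A standalone sketch of that distribution, mirroring the traced arithmetic (setup.sh output follows below):

    # Distribute an odd hugepage count over NUMA nodes, highest node first;
    # each remaining node gets remaining_pages / remaining_nodes.
    _nr_hugepages=1025 _no_nodes=2
    declare -a nodes_test
    while (( _no_nodes > 0 )); do
        nodes_test[_no_nodes - 1]=$(( _nr_hugepages / _no_nodes ))
        : $(( _nr_hugepages -= nodes_test[_no_nodes - 1] ))   # pages left: 513, then 0
        : $(( _no_nodes-- ))                                  # nodes left: 1, then 0
    done
    echo "node0=${nodes_test[0]} node1=${nodes_test[1]}"      # -> node0=513 node1=512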
00:04:55.123 0000:00:04.7 (8086 2021): Already using the vfio-pci driver
00:04:55.123 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver
00:04:55.123 0000:00:04.6 (8086 2021): Already using the vfio-pci driver
00:04:55.123 0000:00:04.5 (8086 2021): Already using the vfio-pci driver
00:04:55.123 0000:00:04.4 (8086 2021): Already using the vfio-pci driver
00:04:55.123 0000:00:04.3 (8086 2021): Already using the vfio-pci driver
00:04:55.123 0000:00:04.2 (8086 2021): Already using the vfio-pci driver
00:04:55.123 0000:00:04.1 (8086 2021): Already using the vfio-pci driver
00:04:55.123 0000:00:04.0 (8086 2021): Already using the vfio-pci driver
00:04:55.123 0000:80:04.7 (8086 2021): Already using the vfio-pci driver
00:04:55.123 0000:80:04.6 (8086 2021): Already using the vfio-pci driver
00:04:55.123 0000:80:04.5 (8086 2021): Already using the vfio-pci driver
00:04:55.123 0000:80:04.4 (8086 2021): Already using the vfio-pci driver
00:04:55.123 0000:80:04.3 (8086 2021): Already using the vfio-pci driver
00:04:55.123 0000:80:04.2 (8086 2021): Already using the vfio-pci driver
00:04:55.123 0000:80:04.1 (8086 2021): Already using the vfio-pci driver
00:04:55.123 0000:80:04.0 (8086 2021): Already using the vfio-pci driver
00:04:55.123 15:46:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@161 -- # verify_nr_hugepages
00:04:55.123 15:46:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@89 -- # local node
00:04:55.123 15:46:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:04:55.123 15:46:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:04:55.123 15:46:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@92 -- # local surp
00:04:55.123 15:46:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@93 -- # local resv
00:04:55.123 15:46:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@94 -- # local anon
00:04:55.123 15:46:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:04:55.123 15:46:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:04:55.123 15:46:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:04:55.123 15:46:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=
00:04:55.123 15:46:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:04:55.123 15:46:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:55.123 15:46:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:55.123 15:46:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:55.123 15:46:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:55.123 15:46:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:55.123 15:46:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:55.123 15:46:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:55.123 15:46:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
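The check at setup/hugepages.sh@96 above gates the anonymous-hugepage correction: AnonHugePages is only consulted when transparent hugepages are not globally disabled, and the bracketed token in the sysfs file marks the active mode (here "[madvise]"). A small sketch of that gate, reusing get_meminfo_sketch and the standard kernel sysfs path (an illustration, not the upstream function):

    # Fold AnonHugePages into the accounting only when THP is not "never".
    anon=0
    thp_state=$(< /sys/kernel/mm/transparent_hugepage/enabled)
    if [[ $thp_state != *"[never]"* ]]; then
        anon=$(get_meminfo_sketch AnonHugePages)   # kB of THP-backed anon memory
    fi
    echo "anon=$anon"   # -> anon=0 in this run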
00:04:55.123 15:46:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 175557428 kB' 'MemAvailable: 178428840 kB' 'Buffers: 3896 kB' 'Cached: 10150332 kB' 'SwapCached: 0 kB' 'Active: 7177008 kB' 'Inactive: 3507524 kB' 'Active(anon): 6785000 kB' 'Inactive(anon): 0 kB' 'Active(file): 392008 kB' 'Inactive(file): 3507524 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 533040 kB' 'Mapped: 185964 kB' 'Shmem: 6254696 kB' 'KReclaimable: 232660 kB' 'Slab: 807388 kB' 'SReclaimable: 232660 kB' 'SUnreclaim: 574728 kB' 'KernelStack: 20672 kB' 'PageTables: 9040 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103029580 kB' 'Committed_AS: 8297056 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315676 kB' 'VmallocChunk: 0 kB' 'Percpu: 77952 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 2954196 kB' 'DirectMap2M: 15599616 kB' 'DirectMap1G: 183500800 kB'
00:04:55.123 15:46:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:55.123 15:46:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue
[xtrace for the AnonHugePages scan elided -- MemFree through HardwareCorrupted all take the continue branch]
00:04:55.124 15:46:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:55.124 15:46:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
[... xtrace condensed: the read loop tests each remaining /proc/meminfo field (Inactive through HardwareCorrupted) against AnonHugePages and hits "continue" on every non-match ...]
00:04:55.125 15:46:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:55.125 15:46:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:04:55.125 15:46:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:04:55.125 15:46:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # anon=0
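A note on the escaped patterns in the trace above: the comparison in common.sh appears to be written with a quoted right-hand side, [[ $var == "$get" ]], and under set -x bash prints a quoted match operand with every character backslash-escaped to mark it as a literal (non-glob) match. That is all the \A\n\o\n\H\u\g\e\P\a\g\e\s runs are. A minimal sketch reproducing the effect (the variable name here is ours, not the script's):

    #!/usr/bin/env bash
    # Under set -x, a quoted RHS of == inside [[ ]] is traced with each
    # character backslash-escaped, marking the comparison as literal.
    set -x
    get=AnonHugePages
    [[ AnonHugePages == "$get" ]]
    # traced as: [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]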
00:04:55.125 15:46:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:04:55.125 15:46:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:55.125 15:46:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=
00:04:55.125 15:46:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:04:55.125 15:46:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:55.125 15:46:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:55.125 15:46:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:55.125 15:46:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:55.125 15:46:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:55.125 15:46:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:55.125 15:46:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:55.125 15:46:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:55.125 15:46:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 175561464 kB' 'MemAvailable: 178432876 kB' 'Buffers: 3896 kB' 'Cached: 10150344 kB' 'SwapCached: 0 kB' 'Active: 7176652 kB' 'Inactive: 3507524 kB' 'Active(anon): 6784644 kB' 'Inactive(anon): 0 kB' 'Active(file): 392008 kB' 'Inactive(file): 3507524 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 532752 kB' 'Mapped: 185924 kB' 'Shmem: 6254708 kB' 'KReclaimable: 232660 kB' 'Slab: 807388 kB' 'SReclaimable: 232660 kB' 'SUnreclaim: 574728 kB' 'KernelStack: 20496 kB' 'PageTables: 8248 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103029580 kB' 'Committed_AS: 8298568 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315532 kB' 'VmallocChunk: 0 kB' 'Percpu: 77952 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 2954196 kB' 'DirectMap2M: 15599616 kB' 'DirectMap1G: 183500800 kB'
[... xtrace condensed: each field from MemTotal through HugePages_Rsvd is tested against HugePages_Surp and skipped with "continue" ...]
00:04:55.127 15:46:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:55.127 15:46:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:04:55.127 15:46:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:04:55.127 15:46:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # surp=0
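Since the trace repeats this same parse for every field the test asks about, the helper's shape is easier to see once as plain source. A minimal sketch reconstructed from the trace (the body of SPDK's test/setup/common.sh may differ in detail; get_meminfo_sketch is our name for it):

    #!/usr/bin/env bash
    # Sketch of the meminfo getter whose xtrace appears above.
    shopt -s extglob

    get_meminfo_sketch() {
        local get=$1 node=${2:-}
        local mem_f=/proc/meminfo
        local -a mem
        local var val _ line

        # With a node argument, prefer the per-node view. An empty $node is
        # why the trace tests the odd path /sys/devices/system/node/node/meminfo.
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi

        mapfile -t mem <"$mem_f"
        # Per-node meminfo prefixes every line with "Node <n> "; strip it.
        mem=("${mem[@]#Node +([0-9]) }")

        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<<"$line"
            [[ $var == "$get" ]] || continue
            echo "$val"
            return 0
        done
        return 1
    }

    get_meminfo_sketch HugePages_Surp   # on the host traced above: prints 0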
00:04:55.127 15:46:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:04:55.127 15:46:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:04:55.127 15:46:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=
00:04:55.127 15:46:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:04:55.127 15:46:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:55.127 15:46:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:55.127 15:46:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:55.127 15:46:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:55.127 15:46:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:55.127 15:46:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:55.127 15:46:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:55.127 15:46:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:55.127 15:46:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 175561324 kB' 'MemAvailable: 178432736 kB' 'Buffers: 3896 kB' 'Cached: 10150352 kB' 'SwapCached: 0 kB' 'Active: 7176764 kB' 'Inactive: 3507524 kB' 'Active(anon): 6784756 kB' 'Inactive(anon): 0 kB' 'Active(file): 392008 kB' 'Inactive(file): 3507524 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 533416 kB' 'Mapped: 185848 kB' 'Shmem: 6254716 kB' 'KReclaimable: 232660 kB' 'Slab: 807524 kB' 'SReclaimable: 232660 kB' 'SUnreclaim: 574864 kB' 'KernelStack: 20608 kB' 'PageTables: 8684 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103029580 kB' 'Committed_AS: 8298588 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315676 kB' 'VmallocChunk: 0 kB' 'Percpu: 77952 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 2954196 kB' 'DirectMap2M: 15599616 kB' 'DirectMap1G: 183500800 kB'
[... xtrace condensed: each field from MemTotal through HugePages_Free is tested against HugePages_Rsvd and skipped with "continue" ...]
00:04:55.129 15:46:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:55.129 15:46:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:04:55.129 15:46:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:04:55.129 15:46:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # resv=0
00:04:55.129 15:46:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025
00:04:55.129 nr_hugepages=1025
00:04:55.129 15:46:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:04:55.129 resv_hugepages=0
00:04:55.129 15:46:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:04:55.129 surplus_hugepages=0
00:04:55.129 15:46:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:04:55.129 anon_hugepages=0
00:04:55.129 15:46:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv ))
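The numbers in the snapshots are internally consistent: HugePages_Total is 1025, HugePages_Rsvd and HugePages_Surp are 0, and Hugetlb is 1025 × 2048 kB = 2099200 kB, exactly the 'Hugetlb: 2099200 kB' the printf shows. The @107 check above can be re-derived straight from /proc/meminfo; a loose standalone analogue of that accounting (variable names are ours, not the harness's):

    #!/usr/bin/env bash
    # Re-check the hugepages.sh@107 accounting from /proc/meminfo.
    total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)
    resv=$(awk '/^HugePages_Rsvd:/ {print $2}' /proc/meminfo)
    surp=$(awk '/^HugePages_Surp:/ {print $2}' /proc/meminfo)

    # odd_alloc asked for 1025 pages; every one must be accounted for,
    # whether plainly allocated, reserved, or surplus.
    if (( 1025 == total + surp + resv )); then
        echo "odd allocation of 1025 hugepages fully accounted for"
    fi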
00:04:55.129 15:46:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages ))
00:04:55.129 15:46:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:04:55.130 15:46:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:04:55.130 15:46:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=
00:04:55.130 15:46:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:04:55.130 15:46:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:55.130 15:46:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:55.130 15:46:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:55.130 15:46:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:55.130 15:46:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:55.130 15:46:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:55.130 15:46:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:55.130 15:46:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:55.130 15:46:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 175561208 kB' 'MemAvailable: 178432620 kB' 'Buffers: 3896 kB' 'Cached: 10150372 kB' 'SwapCached: 0 kB' 'Active: 7176544 kB' 'Inactive: 3507524 kB' 'Active(anon): 6784536 kB' 'Inactive(anon): 0 kB' 'Active(file): 392008 kB' 'Inactive(file): 3507524 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 533076 kB' 'Mapped: 185848 kB' 'Shmem: 6254736 kB' 'KReclaimable: 232660 kB' 'Slab: 807556 kB' 'SReclaimable: 232660 kB' 'SUnreclaim: 574896 kB' 'KernelStack: 20528 kB' 'PageTables: 8820 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103029580 kB' 'Committed_AS: 8298608 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315660 kB' 'VmallocChunk: 0 kB' 'Percpu: 77952 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 2954196 kB' 'DirectMap2M: 15599616 kB' 'DirectMap1G: 183500800 kB'
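The strip at common.sh@29 is an extglob pattern inside a parameter expansion; shown in isolation on a hypothetical per-node line (the sample values are made up):

    #!/usr/bin/env bash
    # The "Node <n> " prefix strip from common.sh@29, in isolation.
    shopt -s extglob
    mem=("Node 0 MemTotal: 191381152 kB" "Node 0 HugePages_Total: 1025")
    mem=("${mem[@]#Node +([0-9]) }")
    printf '%s\n' "${mem[@]}"
    # MemTotal: 191381152 kB
    # HugePages_Total: 1025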
[... xtrace condensed: the loop tests MemTotal, MemFree, MemAvailable, and the following fields against HugePages_Total, continuing past each non-match ...]
00:04:55.131 15:46:23 setup.sh.hugepages.odd_alloc --
setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.131 15:46:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.131 15:46:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.131 15:46:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.131 15:46:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.131 15:46:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.131 15:46:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.131 15:46:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.131 15:46:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.131 15:46:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.131 15:46:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.131 15:46:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.131 15:46:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.131 15:46:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.131 15:46:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.131 15:46:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.131 15:46:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.131 15:46:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.131 15:46:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.131 15:46:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.131 15:46:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.131 15:46:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.131 15:46:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.131 15:46:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.131 15:46:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.131 15:46:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.131 15:46:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.131 15:46:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.131 15:46:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.131 15:46:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.131 15:46:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.131 15:46:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.131 15:46:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.131 15:46:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.131 15:46:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.131 15:46:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:04:55.131 15:46:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.131 15:46:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.131 15:46:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.131 15:46:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.131 15:46:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.131 15:46:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.131 15:46:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.131 15:46:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.131 15:46:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.131 15:46:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.131 15:46:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.131 15:46:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.131 15:46:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.131 15:46:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.131 15:46:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.131 15:46:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.131 15:46:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.131 15:46:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.131 15:46:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.131 15:46:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.131 15:46:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.131 15:46:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.131 15:46:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.131 15:46:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.131 15:46:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.131 15:46:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.131 15:46:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.131 15:46:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.131 15:46:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.131 15:46:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.131 15:46:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.131 15:46:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.131 15:46:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.132 15:46:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.132 15:46:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.132 15:46:23 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.132 15:46:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.132 15:46:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.132 15:46:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.132 15:46:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.132 15:46:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.132 15:46:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 1025 00:04:55.132 15:46:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:55.132 15:46:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:04:55.132 15:46:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:55.132 15:46:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@27 -- # local node 00:04:55.132 15:46:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:55.132 15:46:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:55.132 15:46:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:55.132 15:46:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=513 00:04:55.132 15:46:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:55.132 15:46:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:55.132 15:46:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:55.132 15:46:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:55.132 15:46:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:55.132 15:46:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:55.132 15:46:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=0 00:04:55.132 15:46:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:55.132 15:46:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:55.132 15:46:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:55.132 15:46:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:55.132 15:46:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:55.132 15:46:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:55.132 15:46:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:55.132 15:46:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.132 15:46:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.132 15:46:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 97662684 kB' 'MemFree: 86797528 kB' 'MemUsed: 10865156 kB' 'SwapCached: 0 kB' 'Active: 5349452 kB' 'Inactive: 3336416 kB' 'Active(anon): 5191912 kB' 'Inactive(anon): 0 kB' 'Active(file): 157540 kB' 'Inactive(file): 3336416 kB' 
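The scan above is setup/common.sh's get_meminfo reading a meminfo file field by field. A minimal standalone sketch of the same parsing logic, assuming bash 4+ (an illustrative re-implementation, not the SPDK source; the function name is reused only for clarity):

    # With no node argument this reads /proc/meminfo; with one it reads
    # /sys/devices/system/node/node<N>/meminfo and strips the "Node N "
    # prefix, mirroring the mem=("${mem[@]#Node +([0-9]) }") line traced above.
    shopt -s extglob
    get_meminfo() {
        local get=$1 node=${2:-} var val _
        local mem_f=/proc/meminfo
        local -a mem
        [[ -e /sys/devices/system/node/node$node/meminfo ]] &&
            mem_f=/sys/devices/system/node/node$node/meminfo
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")   # drop "Node N " on per-node files
        local line
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"   # "Key: value kB" -> var/val
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done
        return 1
    }
    get_meminfo HugePages_Total      # prints 1025 on this box
    get_meminfo HugePages_Surp 0     # node-scoped query, as in the trace below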
00:04:55.132 15:46:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:55.132 15:46:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:55.132 15:46:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:04:55.132 15:46:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:55.132 15:46:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=0
00:04:55.132 15:46:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:04:55.132 15:46:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:55.132 15:46:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:55.132 15:46:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:04:55.132 15:46:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:04:55.132 15:46:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:55.132 15:46:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:55.132 15:46:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:55.132 15:46:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:55.132 15:46:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 97662684 kB' 'MemFree: 86797528 kB' 'MemUsed: 10865156 kB' 'SwapCached: 0 kB' 'Active: 5349452 kB' 'Inactive: 3336416 kB' 'Active(anon): 5191912 kB' 'Inactive(anon): 0 kB' 'Active(file): 157540 kB' 'Inactive(file): 3336416 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8537200 kB' 'Mapped: 77348 kB' 'AnonPages: 151848 kB' 'Shmem: 5043244 kB' 'KernelStack: 11496 kB' 'PageTables: 4340 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 131736 kB' 'Slab: 404816 kB' 'SReclaimable: 131736 kB' 'SUnreclaim: 273080 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
[xtrace elided: setup/common.sh@31-32 scans the node0 fields above for HugePages_Surp, 'continue'-ing past every other name]
00:04:55.133 15:46:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:55.133 15:46:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:04:55.133 15:46:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:04:55.133 15:46:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
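The same per-node counters can be read straight from sysfs instead of the node meminfo files. A small sketch; the hugepages-2048kB directory name assumes 2 MiB pages, which matches the 'Hugepagesize: 2048 kB' line in the dump above:

    # Print per-NUMA-node 2 MiB hugepage counters from sysfs.
    for node_dir in /sys/devices/system/node/node[0-9]*; do
        hp=$node_dir/hugepages/hugepages-2048kB
        printf '%s: total=%s free=%s surplus=%s\n' "${node_dir##*/}" \
            "$(< "$hp"/nr_hugepages)" \
            "$(< "$hp"/free_hugepages)" \
            "$(< "$hp"/surplus_hugepages)"
    done

On this run it would report total=512 for node0 and total=513 for node1, the same values get_nodes captured into nodes_sys above.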
00:04:55.133 15:46:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:55.133 15:46:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:55.133 15:46:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1
00:04:55.133 15:46:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:55.133 15:46:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=1
00:04:55.133 15:46:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:04:55.133 15:46:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:55.133 15:46:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:55.133 15:46:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]]
00:04:55.133 15:46:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo
00:04:55.133 15:46:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:55.134 15:46:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:55.134 15:46:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:55.134 15:46:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:55.134 15:46:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 93718468 kB' 'MemFree: 88762696 kB' 'MemUsed: 4955772 kB' 'SwapCached: 0 kB' 'Active: 1826720 kB' 'Inactive: 171108 kB' 'Active(anon): 1592252 kB' 'Inactive(anon): 0 kB' 'Active(file): 234468 kB' 'Inactive(file): 171108 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1617092 kB' 'Mapped: 108500 kB' 'AnonPages: 380816 kB' 'Shmem: 1211516 kB' 'KernelStack: 9112 kB' 'PageTables: 4772 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 100924 kB' 'Slab: 402740 kB' 'SReclaimable: 100924 kB' 'SUnreclaim: 301816 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 513' 'HugePages_Free: 513' 'HugePages_Surp: 0'
[xtrace elided: setup/common.sh@31-32 scans the node1 fields above for HugePages_Surp, 'continue'-ing past every other name]
00:04:55.135 15:46:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:55.135 15:46:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:04:55.135 15:46:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:04:55.135 15:46:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:55.135 15:46:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:55.135 15:46:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:55.135 15:46:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:55.135 15:46:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 513'
00:04:55.135 node0=512 expecting 513
00:04:55.135 15:46:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:55.135 15:46:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:55.135 15:46:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:55.135 15:46:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node1=513 expecting 512'
00:04:55.135 node1=513 expecting 512
00:04:55.135 15:46:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@130 -- # [[ 512 513 == \5\1\2\ \5\1\3 ]]
00:04:55.135
00:04:55.135 real	0m2.568s
00:04:55.135 user	0m1.007s
00:04:55.135 sys	0m1.574s
00:04:55.135 15:46:23 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable
00:04:55.135 15:46:23 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@10 -- # set +x
00:04:55.135 ************************************
00:04:55.135 END TEST odd_alloc
00:04:55.135 ************************************
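The point of odd_alloc is that an odd page count cannot split evenly across two nodes: the 1025 pages requested land as 512 on one node and 513 on the other, which is exactly what the sorted-set comparison above accepts. An illustrative sketch of that distribution rule (not the SPDK helper itself; its own xtrace appears in custom_alloc below):

    # Split <total> hugepages over <nodes> NUMA nodes: every node gets the
    # integer share, and the remainder goes to the highest-numbered nodes,
    # so 1025 over 2 nodes comes out as 512 + 513.
    split_hugepages() {
        local total=$1 nodes=$2 i
        local share=$(( total / nodes )) rem=$(( total % nodes ))
        for (( i = 0; i < nodes; i++ )); do
            echo "node$i=$(( share + (i >= nodes - rem ? 1 : 0) ))"
        done
    }
    split_hugepages 1025 2    # node0=512, node1=513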
00:04:55.135 15:46:23 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0
00:04:55.135 15:46:23 setup.sh.hugepages -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc
00:04:55.135 15:46:23 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:04:55.135 15:46:23 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable
00:04:55.135 15:46:23 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:04:55.135 ************************************
00:04:55.135 START TEST custom_alloc
00:04:55.135 ************************************
00:04:55.135 15:46:24 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1123 -- # custom_alloc
00:04:55.135 15:46:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@167 -- # local IFS=,
00:04:55.135 15:46:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@169 -- # local node
00:04:55.136 15:46:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # nodes_hp=()
00:04:55.136 15:46:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # local nodes_hp
00:04:55.136 15:46:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0
00:04:55.136 15:46:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576
00:04:55.136 15:46:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=1048576
00:04:55.136 15:46:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:04:55.136 15:46:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:04:55.136 15:46:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512
00:04:55.136 15:46:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:04:55.136 15:46:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=()
00:04:55.136 15:46:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:04:55.136 15:46:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512
00:04:55.136 15:46:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:04:55.136 15:46:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:04:55.136 15:46:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:04:55.136 15:46:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:04:55.136 15:46:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:04:55.136 15:46:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:04:55.136 15:46:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256
00:04:55.136 15:46:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 256
00:04:55.136 15:46:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 1
00:04:55.136 15:46:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:04:55.136 15:46:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256
00:04:55.136 15:46:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 0
00:04:55.136 15:46:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 0
00:04:55.136 15:46:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:04:55.136 15:46:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@175 -- # nodes_hp[0]=512
00:04:55.136 15:46:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@176 -- # (( 2 > 1 ))
00:04:55.136 15:46:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@177 -- # get_test_nr_hugepages 2097152
00:04:55.136 15:46:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=2097152
00:04:55.136 15:46:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:04:55.136 15:46:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:04:55.136 15:46:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:04:55.136 15:46:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:04:55.136 15:46:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=()
00:04:55.136 15:46:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:04:55.136 15:46:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:04:55.136 15:46:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:04:55.136 15:46:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:04:55.136 15:46:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:04:55.136 15:46:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:04:55.136 15:46:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 1 > 0 ))
00:04:55.136 15:46:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}"
00:04:55.136 15:46:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512
00:04:55.136 15:46:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0
00:04:55.136 15:46:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@178 -- # nodes_hp[1]=1024
00:04:55.136 15:46:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}"
00:04:55.136 15:46:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}")
00:04:55.136 15:46:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] ))
00:04:55.136 15:46:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}"
00:04:55.136 15:46:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}")
00:04:55.136 15:46:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] ))
00:04:55.136 15:46:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node
00:04:55.136 15:46:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=()
00:04:55.136 15:46:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:04:55.136 15:46:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:04:55.136 15:46:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:04:55.136 15:46:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:04:55.136 15:46:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:04:55.136 15:46:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:04:55.136 15:46:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 2 > 0 ))
00:04:55.136 15:46:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}"
00:04:55.136 15:46:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512
00:04:55.136 15:46:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}"
00:04:55.136 15:46:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=1024
00:04:55.136 15:46:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0
00:04:55.136 15:46:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024'
00:04:55.136 15:46:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # setup output
00:04:55.136 15:46:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:04:55.136 15:46:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
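The HUGENODE string assembled above is how custom_alloc hands its per-node layout to scripts/setup.sh. Reproducing the same allocation by hand would look roughly like this (the HUGENODE value and script path are taken verbatim from this trace; a sketch, not a full setup recipe, and it needs root):

    # 512 x 2 MiB pages on node0 plus 1024 on node1 -> the 1536 total
    # reported as nr_hugepages just below in this log.
    sudo HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024' \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh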
00:04:57.707 0000:00:04.7 (8086 2021): Already using the vfio-pci driver
00:04:57.707 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver
00:04:57.707 0000:00:04.6 (8086 2021): Already using the vfio-pci driver
00:04:57.707 0000:00:04.5 (8086 2021): Already using the vfio-pci driver
00:04:57.707 0000:00:04.4 (8086 2021): Already using the vfio-pci driver
00:04:57.707 0000:00:04.3 (8086 2021): Already using the vfio-pci driver
00:04:57.707 0000:00:04.2 (8086 2021): Already using the vfio-pci driver
00:04:57.707 0000:00:04.1 (8086 2021): Already using the vfio-pci driver
00:04:57.707 0000:00:04.0 (8086 2021): Already using the vfio-pci driver
00:04:57.707 0000:80:04.7 (8086 2021): Already using the vfio-pci driver
00:04:57.707 0000:80:04.6 (8086 2021): Already using the vfio-pci driver
00:04:57.707 0000:80:04.5 (8086 2021): Already using the vfio-pci driver
00:04:57.707 0000:80:04.4 (8086 2021): Already using the vfio-pci driver
00:04:57.707 0000:80:04.3 (8086 2021): Already using the vfio-pci driver
00:04:57.707 0000:80:04.2 (8086 2021): Already using the vfio-pci driver
00:04:57.707 0000:80:04.1 (8086 2021): Already using the vfio-pci driver
00:04:57.707 0000:80:04.0 (8086 2021): Already using the vfio-pci driver
00:04:57.972 15:46:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # nr_hugepages=1536
00:04:57.972 15:46:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # verify_nr_hugepages
00:04:57.972 15:46:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@89 -- # local node
00:04:57.972 15:46:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:04:57.972 15:46:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:04:57.972 15:46:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@92 -- # local surp
00:04:57.972 15:46:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@93 -- # local resv
00:04:57.972 15:46:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@94 -- # local anon
00:04:57.972 15:46:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:04:57.972 15:46:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
[xtrace elided: get_meminfo locals -- get=AnonHugePages, node argument empty so mem_f stays /proc/meminfo, mapfile -t mem, then the per-line IFS=': ' read loop begins]
00:04:57.972 15:46:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 174513408 kB' 'MemAvailable: 177384820 kB' 'Buffers: 3896 kB' 'Cached: 10150476 kB' 'SwapCached: 0 kB' 'Active: 7177116 kB' 'Inactive: 3507524 kB' 'Active(anon): 6785108 kB' 'Inactive(anon): 0 kB' 'Active(file): 392008 kB' 'Inactive(file): 3507524 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 533032 kB' 'Mapped: 186040 kB' 'Shmem: 6254840 kB' 'KReclaimable: 232660 kB' 'Slab: 807556 kB' 'SReclaimable: 232660 kB' 'SUnreclaim: 574896 kB' 'KernelStack: 20464 kB' 'PageTables: 8844 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 102506316 kB' 'Committed_AS: 8296464 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315500 kB' 'VmallocChunk: 0 kB' 'Percpu: 77952 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 2954196 kB' 'DirectMap2M: 15599616 kB' 'DirectMap1G: 183500800 kB'
[xtrace elided: the read loop compares every /proc/meminfo key against AnonHugePages and continues past each non-match until the AnonHugePages entry matches]
00:04:57.973 15:46:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:04:57.973 15:46:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:04:57.973 15:46:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # anon=0
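Each elided scan is a linear pass over the mapfile'd meminfo entries until the requested key matches, at which point its value is echoed (0 for AnonHugePages here). A compact stand-alone equivalent of that lookup, using awk in place of the traced read/continue loop (not the verbatim setup/common.sh helper; the exact form is an illustrative assumption):

    # Sketch: look up one key in /proc/meminfo, or in a node's meminfo file
    # when a node number is given (those lines carry a "Node <n> " prefix).
    get_meminfo() {
        local key=$1 node=${2:-}
        local mem_f=/proc/meminfo
        [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
            mem_f=/sys/devices/system/node/node$node/meminfo
        awk -v k="$key" '
            { sub(/^Node [0-9]+ /, "") }    # drop per-node prefix if present
            $1 == k ":" { print $2; exit }  # first match wins
        ' "$mem_f"
    }

    get_meminfo HugePages_Total   # -> 1536 on this host, matching nr_hugepages
    get_meminfo AnonHugePages     # -> 0, the value the trace stores in anon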
00:04:57.973 15:46:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
[xtrace elided: same get_meminfo locals as above -- get=HugePages_Surp, node empty, mem_f=/proc/meminfo, mapfile -t mem]
00:04:57.974 15:46:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 174513712 kB' 'MemAvailable: 177385124 kB' 'Buffers: 3896 kB' 'Cached: 10150480 kB' 'SwapCached: 0 kB' 'Active: 7176136 kB' 'Inactive: 3507524 kB' 'Active(anon): 6784128 kB' 'Inactive(anon): 0 kB' 'Active(file): 392008 kB' 'Inactive(file): 3507524 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 532544 kB' 'Mapped: 185860 kB' 'Shmem: 6254844 kB' 'KReclaimable: 232660 kB' 'Slab: 807516 kB' 'SReclaimable: 232660 kB' 'SUnreclaim: 574856 kB' 'KernelStack: 20432 kB' 'PageTables: 8736 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 102506316 kB' 'Committed_AS: 8296484 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315468 kB' 'VmallocChunk: 0 kB' 'Percpu: 77952 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 2954196 kB' 'DirectMap2M: 15599616 kB' 'DirectMap1G: 183500800 kB'
[xtrace elided: key-by-key scan against HugePages_Surp, continue on every non-match until the HugePages_Surp entry matches]
00:04:57.975 15:46:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:04:57.975 15:46:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:04:57.975 15:46:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # surp=0
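With anon and surp both 0 and every snapshot reporting HugePages_Total and HugePages_Free of 1536, the verification reduces to comparing the requested count against what the kernel actually allocated. A hedged sketch of that final check, reusing the get_meminfo sketch above (the real accounting in verify_nr_hugepages may differ; this only uses quantities visible in the trace):

    # Sketch: compare the requested page count against the kernel's view.
    expected=1536                         # nr_hugepages from hugepages.sh@188
    total=$(get_meminfo HugePages_Total)  # 1536 in all three snapshots
    surp=$(get_meminfo HugePages_Surp)    # 0
    resv=$(get_meminfo HugePages_Rsvd)    # 0 (the lookup the log ends inside)
    if (( total - surp == expected )); then
        echo "hugepages verified: $total allocated ($resv reserved)"
    else
        echo "hugepage mismatch: wanted $expected, have $((total - surp))" >&2
    fi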
00:04:57.975 15:46:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
[xtrace elided: same get_meminfo locals as above -- get=HugePages_Rsvd, node empty, mem_f=/proc/meminfo, mapfile -t mem]
00:04:57.975 15:46:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 174515388 kB' 'MemAvailable: 177386800 kB' 'Buffers: 3896 kB' 'Cached: 10150480 kB' 'SwapCached: 0 kB' 'Active: 7176844 kB' 'Inactive: 3507524 kB' 'Active(anon): 6784836 kB' 'Inactive(anon): 0 kB' 'Active(file): 392008 kB' 'Inactive(file): 3507524 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 533316 kB' 'Mapped: 185860 kB' 'Shmem: 6254844 kB' 'KReclaimable: 232660 kB' 'Slab: 807516 kB' 'SReclaimable: 232660 kB' 'SUnreclaim: 574856 kB' 'KernelStack: 20432 kB' 'PageTables: 8760 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 102506316 kB' 'Committed_AS: 8318564 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315452 kB' 'VmallocChunk: 0 kB' 'Percpu: 77952 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 2954196 kB' 'DirectMap2M: 15599616 kB' 'DirectMap1G: 183500800 kB'
[xtrace elided: key-by-key scan against HugePages_Rsvd with no match yet; the captured log ends mid-scan at the HugePages_Total check]
setup/common.sh@31 -- # IFS=': ' 00:04:57.977 15:46:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.977 15:46:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.977 15:46:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:57.977 15:46:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.977 15:46:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.977 15:46:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.977 15:46:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:57.977 15:46:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:57.977 15:46:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:57.977 15:46:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1536 00:04:57.977 nr_hugepages=1536 00:04:57.977 15:46:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:57.977 resv_hugepages=0 00:04:57.977 15:46:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:57.977 surplus_hugepages=0 00:04:57.977 15:46:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:57.977 anon_hugepages=0 00:04:57.977 15:46:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@107 -- # (( 1536 == nr_hugepages + surp + resv )) 00:04:57.977 15:46:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # (( 1536 == nr_hugepages )) 00:04:57.977 15:46:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:57.977 15:46:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:57.977 15:46:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:57.977 15:46:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:57.977 15:46:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:57.977 15:46:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:57.977 15:46:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:57.977 15:46:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:57.977 15:46:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:57.977 15:46:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:57.977 15:46:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.977 15:46:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 174513632 kB' 'MemAvailable: 177385044 kB' 'Buffers: 3896 kB' 'Cached: 10150536 kB' 'SwapCached: 0 kB' 'Active: 7175728 kB' 'Inactive: 3507524 kB' 'Active(anon): 6783720 kB' 'Inactive(anon): 0 kB' 'Active(file): 392008 kB' 'Inactive(file): 3507524 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 532116 kB' 'Mapped: 185860 kB' 'Shmem: 6254900 kB' 'KReclaimable: 232660 kB' 'Slab: 807516 kB' 'SReclaimable: 232660 kB' 'SUnreclaim: 574856 
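The scans traced above are setup/common.sh's get_meminfo helper at work: with IFS=': ', each read -r var val _ splits a /proc/meminfo line into key, value, and unit, and every key other than the requested one falls through to the continue at line 32. A minimal standalone sketch of the same lookup pattern, assuming only /proc/meminfo; the function name get_key is illustrative, not part of setup/common.sh:

    #!/usr/bin/env bash
    # Look up one key in /proc/meminfo the way the traced loop does:
    # split each line on ': ' and skip lines until the key matches.
    get_key() {
        local get=$1 var val _
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue   # non-matching key: next line
            echo "$val"                        # matching key: print its value
            return 0
        done </proc/meminfo
        return 1                               # key absent on this kernel
    }

    get_key HugePages_Rsvd    # prints 0 on this host, matching the trace's 'echo 0'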
00:04:57.977 15:46:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:04:57.977 15:46:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:04:57.977 15:46:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=
00:04:57.977 15:46:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:04:57.977 15:46:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:57.977 15:46:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:57.977 15:46:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:57.977 15:46:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:57.977 15:46:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:57.977 15:46:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:57.977 15:46:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:57.977 15:46:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 174513632 kB' 'MemAvailable: 177385044 kB' 'Buffers: 3896 kB' 'Cached: 10150536 kB' 'SwapCached: 0 kB' 'Active: 7175728 kB' 'Inactive: 3507524 kB' 'Active(anon): 6783720 kB' 'Inactive(anon): 0 kB' 'Active(file): 392008 kB' 'Inactive(file): 3507524 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 532116 kB' 'Mapped: 185860 kB' 'Shmem: 6254900 kB' 'KReclaimable: 232660 kB' 'Slab: 807516 kB' 'SReclaimable: 232660 kB' 'SUnreclaim: 574856 kB' 'KernelStack: 20368 kB' 'PageTables: 8492 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 102506316 kB' 'Committed_AS: 8296160 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315436 kB' 'VmallocChunk: 0 kB' 'Percpu: 77952 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 2954196 kB' 'DirectMap2M: 15599616 kB' 'DirectMap1G: 183500800 kB'
00:04:57.977 15:46:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
[xtrace elided: every key from MemTotal through Unaccepted fails the HugePages_Total match at setup/common.sh@32 and hits continue]
00:04:57.979 15:46:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:57.979 15:46:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 1536
00:04:57.979 15:46:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:04:57.979 15:46:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # (( 1536 == nr_hugepages + surp + resv ))
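hugepages.sh lines 107 and 110 assert the same identity twice: the 1536 pages just configured must equal nr_hugepages on its own and must also equal the HugePages_Total that get_meminfo reports once surplus and reserved pages are folded back in. A sketch of that consistency check against live counters; reading the target from /proc/sys/vm/nr_hugepages is an assumption here, since the script carries it in a variable instead:

    #!/usr/bin/env bash
    # Re-derive the check traced at setup/hugepages.sh@107/@110:
    # requested pages == kernel total, with surplus and reserved included.
    nr_hugepages=$(</proc/sys/vm/nr_hugepages)
    total=$(awk '$1 == "HugePages_Total:" {print $2}' /proc/meminfo)
    surp=$(awk '$1 == "HugePages_Surp:" {print $2}' /proc/meminfo)
    resv=$(awk '$1 == "HugePages_Rsvd:" {print $2}' /proc/meminfo)
    (( total == nr_hugepages + surp + resv )) && echo "hugepage accounting is consistent"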
00:04:57.979 15:46:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:04:57.979 15:46:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@27 -- # local node
00:04:57.979 15:46:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:57.979 15:46:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:04:57.979 15:46:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:57.979 15:46:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:04:57.979 15:46:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
00:04:57.979 15:46:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
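get_nodes globs /sys/devices/system/node/node+([0-9]) with extglob and records each node's configured page count (512 on node0 and 1024 on node1 here); the per-node lookups that follow read nodeN/meminfo, whose lines carry a 'Node N ' prefix that the traced mem=("${mem[@]#Node +([0-9]) }") expansion strips. A sketch of the same walk, with sed standing in for the array expansion and the output format purely illustrative:

    #!/usr/bin/env bash
    # Enumerate NUMA nodes the way get_nodes does and read one counter
    # from each node's meminfo, stripping the leading "Node N " prefix.
    shopt -s extglob nullglob
    for node in /sys/devices/system/node/node+([0-9]); do
        id=${node##*node}                       # "node1" -> "1"
        surp=$(sed -n 's/^Node [0-9]* HugePages_Surp: *//p' "$node/meminfo")
        echo "node$id: HugePages_Surp=$surp"
    done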
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.980 15:46:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:57.980 15:46:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:57.980 15:46:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:57.980 15:46:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:57.980 15:46:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:57.980 15:46:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:04:57.980 15:46:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:57.980 15:46:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=1 00:04:57.980 15:46:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:57.980 15:46:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:57.980 15:46:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:57.980 15:46:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:04:57.980 15:46:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:04:57.980 15:46:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:57.980 15:46:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:57.980 15:46:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.980 15:46:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.980 15:46:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 93718468 kB' 'MemFree: 87706328 kB' 'MemUsed: 6012140 kB' 'SwapCached: 0 kB' 'Active: 1826892 kB' 'Inactive: 171108 kB' 'Active(anon): 1592424 kB' 'Inactive(anon): 0 kB' 'Active(file): 234468 kB' 'Inactive(file): 171108 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1617116 kB' 'Mapped: 108512 kB' 'AnonPages: 381016 kB' 'Shmem: 1211540 kB' 'KernelStack: 8872 kB' 'PageTables: 4176 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 100924 kB' 'Slab: 402824 kB' 'SReclaimable: 100924 kB' 'SUnreclaim: 301900 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:57.980 15:46:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.980 15:46:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:57.980 15:46:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.980 15:46:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.980 15:46:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.980 15:46:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:57.980 15:46:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.981 15:46:26 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.981 15:46:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.981 15:46:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:57.981 15:46:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.981 15:46:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.981 15:46:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.981 15:46:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:57.981 15:46:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.981 15:46:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.981 15:46:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.981 15:46:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:57.981 15:46:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.981 15:46:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.981 15:46:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.981 15:46:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:57.981 15:46:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.981 15:46:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.981 15:46:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.981 15:46:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:57.981 15:46:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.981 15:46:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.981 15:46:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.981 15:46:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:57.981 15:46:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.981 15:46:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.981 15:46:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.981 15:46:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:57.981 15:46:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.981 15:46:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.981 15:46:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.981 15:46:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:57.981 15:46:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:57.981 15:46:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:57.981 15:46:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.981 15:46:26 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue
[xtrace condensed: setup/common.sh@31-@32 read each remaining meminfo field (Mlocked, Dirty, Writeback, FilePages, Mapped, AnonPages, Shmem, KernelStack, PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp, KReclaimable, Slab, SReclaimable, SUnreclaim, AnonHugePages, ShmemHugePages, ShmemPmdMapped, FileHugePages, FilePmdMapped, Unaccepted, HugePages_Total, HugePages_Free) and hit 'continue' on every name that is not HugePages_Surp]
00:04:58.241 15:46:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
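The condensed scan above is the generic meminfo parser in setup/common.sh; a minimal bash sketch of its logic follows (an illustrative reconstruction from this xtrace, not the verbatim SPDK source -- the real helper also reads /sys/devices/system/node/node<n>/meminfo and strips the "Node <n>" prefix when a node is given):

get_meminfo() {                           # e.g. get_meminfo HugePages_Surp
    local get=$1 var val _
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue  # the long run of 'continue' above
        echo "$val"                       # numeric value only, e.g. 0
        return 0
    done </proc/meminfo
    return 1                              # field not present
}

Here the scan ends at HugePages_Surp and returns 0, which feeds the nodes_test arithmetic in the next records.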
setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:58.241 15:46:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:58.241 15:46:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:58.241 15:46:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:58.241 15:46:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:58.241 15:46:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:58.241 15:46:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:04:58.241 node0=512 expecting 512 00:04:58.241 15:46:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:58.241 15:46:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:58.241 15:46:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:58.241 15:46:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node1=1024 expecting 1024' 00:04:58.241 node1=1024 expecting 1024 00:04:58.241 15:46:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@130 -- # [[ 512,1024 == \5\1\2\,\1\0\2\4 ]] 00:04:58.241 00:04:58.241 real 0m2.883s 00:04:58.241 user 0m1.191s 00:04:58.241 sys 0m1.758s 00:04:58.241 15:46:26 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:58.241 15:46:26 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:58.241 ************************************ 00:04:58.241 END TEST custom_alloc 00:04:58.241 ************************************ 00:04:58.241 15:46:26 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:04:58.242 15:46:26 setup.sh.hugepages -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:04:58.242 15:46:26 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:58.242 15:46:26 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:58.242 15:46:26 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:58.242 ************************************ 00:04:58.242 START TEST no_shrink_alloc 00:04:58.242 ************************************ 00:04:58.242 15:46:26 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1123 -- # no_shrink_alloc 00:04:58.242 15:46:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:04:58.242 15:46:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:04:58.242 15:46:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:04:58.242 15:46:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # shift 00:04:58.242 15:46:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # node_ids=('0') 00:04:58.242 15:46:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:04:58.242 15:46:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:58.242 15:46:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:58.242 15:46:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:04:58.242 15:46:26 
setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:04:58.242 15:46:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:58.242 15:46:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:58.242 15:46:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:58.242 15:46:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:58.242 15:46:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:58.242 15:46:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:04:58.242 15:46:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:58.242 15:46:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:04:58.242 15:46:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@73 -- # return 0 00:04:58.242 15:46:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@198 -- # setup output 00:04:58.242 15:46:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:58.242 15:46:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:05:00.147 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:05:00.147 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:05:00.147 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:05:00.147 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:05:00.147 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:05:00.147 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:05:00.147 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:05:00.147 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:05:00.147 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:05:00.147 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:05:00.147 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:05:00.147 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:05:00.147 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:05:00.147 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:05:00.410 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:05:00.410 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:05:00.410 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:05:00.410 15:46:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@199 -- # verify_nr_hugepages 00:05:00.410 15:46:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:05:00.410 15:46:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:05:00.410 15:46:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:05:00.410 15:46:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:05:00.410 15:46:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:05:00.410 15:46:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:05:00.410 15:46:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:00.410 15:46:29 setup.sh.hugepages.no_shrink_alloc -- 
setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:00.410 15:46:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:00.410 15:46:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:05:00.410 15:46:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:00.410 15:46:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:00.410 15:46:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:00.410 15:46:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:00.410 15:46:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:00.410 15:46:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:00.410 15:46:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:00.410 15:46:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.410 15:46:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.411 15:46:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 175534160 kB' 'MemAvailable: 178405572 kB' 'Buffers: 3896 kB' 'Cached: 10150628 kB' 'SwapCached: 0 kB' 'Active: 7178356 kB' 'Inactive: 3507524 kB' 'Active(anon): 6786348 kB' 'Inactive(anon): 0 kB' 'Active(file): 392008 kB' 'Inactive(file): 3507524 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 534492 kB' 'Mapped: 185884 kB' 'Shmem: 6254992 kB' 'KReclaimable: 232660 kB' 'Slab: 807476 kB' 'SReclaimable: 232660 kB' 'SUnreclaim: 574816 kB' 'KernelStack: 20656 kB' 'PageTables: 9068 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030604 kB' 'Committed_AS: 8299820 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315740 kB' 'VmallocChunk: 0 kB' 'Percpu: 77952 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2954196 kB' 'DirectMap2M: 15599616 kB' 'DirectMap1G: 183500800 kB' 00:05:00.411 15:46:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.411 15:46:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:00.411 15:46:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.411 15:46:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.411 15:46:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.411 15:46:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:00.411 15:46:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.411 15:46:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.411 15:46:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.411 
15:46:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue
[xtrace condensed: setup/common.sh@31-@32 compared each /proc/meminfo field from Buffers through VmallocTotal against AnonHugePages and hit 'continue' on every non-matching name]
00:05:00.412 15:46:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed ==
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.412 15:46:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:00.412 15:46:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.412 15:46:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.412 15:46:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.412 15:46:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:00.412 15:46:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.412 15:46:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.412 15:46:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.412 15:46:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:00.412 15:46:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.412 15:46:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.412 15:46:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.412 15:46:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:00.412 15:46:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.412 15:46:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.412 15:46:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.412 15:46:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:05:00.412 15:46:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:00.412 15:46:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:05:00.412 15:46:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:00.412 15:46:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:00.412 15:46:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:05:00.412 15:46:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:00.412 15:46:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:00.412 15:46:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:00.412 15:46:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:00.412 15:46:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:00.412 15:46:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:00.412 15:46:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:00.412 15:46:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.412 15:46:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.412 15:46:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 175535896 kB' 'MemAvailable: 178407308 kB' 'Buffers: 3896 kB' 'Cached: 10150632 kB' 'SwapCached: 0 kB' 'Active: 7177504 kB' 'Inactive: 3507524 kB' 'Active(anon): 6785496 
kB' 'Inactive(anon): 0 kB' 'Active(file): 392008 kB' 'Inactive(file): 3507524 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 533684 kB' 'Mapped: 185956 kB' 'Shmem: 6254996 kB' 'KReclaimable: 232660 kB' 'Slab: 807488 kB' 'SReclaimable: 232660 kB' 'SUnreclaim: 574828 kB' 'KernelStack: 20384 kB' 'PageTables: 8556 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030604 kB' 'Committed_AS: 8298344 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315532 kB' 'VmallocChunk: 0 kB' 'Percpu: 77952 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2954196 kB' 'DirectMap2M: 15599616 kB' 'DirectMap1G: 183500800 kB' 00:05:00.412 15:46:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.412 15:46:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:00.412 15:46:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.412 15:46:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.412 15:46:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.412 15:46:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:00.412 15:46:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.412 15:46:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.412 15:46:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.412 15:46:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:00.412 15:46:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.412 15:46:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.412 15:46:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.412 15:46:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:00.412 15:46:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.412 15:46:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.412 15:46:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.412 15:46:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:00.412 15:46:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.412 15:46:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.412 15:46:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.412 15:46:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:00.412 15:46:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.412 15:46:29 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _
[xtrace condensed: setup/common.sh@31-@32 compared each /proc/meminfo field from Active through Unaccepted against HugePages_Surp and hit 'continue' on every non-matching name]
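The get_meminfo calls in this verify_nr_hugepages pass (@97, @99, @100) pull the counters the checks below are built on; a paraphrased sketch of that sequence (illustrative -- the line numbers match the trace, the body is not the verbatim setup/hugepages.sh):

verify_nr_hugepages() {
    local anon surp resv
    anon=$(get_meminfo AnonHugePages)    # @97: 0 kB -> no THP interference
    surp=$(get_meminfo HugePages_Surp)   # @99: surplus pages, 0 in this run
    resv=$(get_meminfo HugePages_Rsvd)   # @100: reserved-but-unfaulted pages
    # the expected per-node counts are then checked against these values
}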
00:05:00.414 15:46:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.414 15:46:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:00.414 15:46:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.414 15:46:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.414 15:46:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.414 15:46:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:00.414 15:46:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.414 15:46:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.414 15:46:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.414 15:46:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:00.414 15:46:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.414 15:46:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.414 15:46:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.414 15:46:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:05:00.414 15:46:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:00.414 15:46:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:05:00.414 15:46:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:00.414 15:46:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:00.414 15:46:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:05:00.414 15:46:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:00.414 15:46:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:00.414 15:46:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:00.414 15:46:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:00.414 15:46:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:00.414 15:46:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:00.414 15:46:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:00.414 15:46:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.414 15:46:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.414 15:46:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 175538368 kB' 'MemAvailable: 178409780 kB' 'Buffers: 3896 kB' 'Cached: 10150648 kB' 'SwapCached: 0 kB' 'Active: 7177492 kB' 'Inactive: 3507524 kB' 'Active(anon): 6785484 kB' 'Inactive(anon): 0 kB' 'Active(file): 392008 kB' 'Inactive(file): 3507524 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 533700 kB' 'Mapped: 185880 kB' 'Shmem: 6255012 kB' 
'KReclaimable: 232660 kB' 'Slab: 807740 kB' 'SReclaimable: 232660 kB' 'SUnreclaim: 575080 kB' 'KernelStack: 20544 kB' 'PageTables: 8824 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030604 kB' 'Committed_AS: 8298368 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315596 kB' 'VmallocChunk: 0 kB' 'Percpu: 77952 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2954196 kB' 'DirectMap2M: 15599616 kB' 'DirectMap1G: 183500800 kB' 00:05:00.414 15:46:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.414 15:46:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:00.414 15:46:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.414 15:46:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.414 15:46:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.414 15:46:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:00.414 15:46:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.414 15:46:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.414 15:46:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.414 15:46:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:00.414 15:46:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.414 15:46:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.414 15:46:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.414 15:46:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:00.414 15:46:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.414 15:46:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.414 15:46:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.414 15:46:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:00.414 15:46:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.414 15:46:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.414 15:46:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.414 15:46:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:00.414 15:46:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.414 15:46:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.414 15:46:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.414 15:46:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:00.414 15:46:29 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.414 15:46:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.414 15:46:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.414 15:46:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:00.414 15:46:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.414 15:46:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.414 15:46:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.414 15:46:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:00.414 15:46:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.414 15:46:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.414 15:46:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.414 15:46:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:00.414 15:46:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.414 15:46:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.414 15:46:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.414 15:46:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:00.414 15:46:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.414 15:46:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.414 15:46:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.414 15:46:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:00.414 15:46:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.414 15:46:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.414 15:46:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.414 15:46:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:00.414 15:46:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.414 15:46:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.414 15:46:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.414 15:46:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:00.414 15:46:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.414 15:46:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.414 15:46:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.414 15:46:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:00.414 15:46:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.414 15:46:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:05:00.414 15:46:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.414 15:46:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:00.414 15:46:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.414 15:46:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.414 15:46:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.414 15:46:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:00.414 15:46:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.414 15:46:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.414 15:46:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.414 15:46:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:00.414 15:46:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.414 15:46:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.414 15:46:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.414 15:46:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:00.414 15:46:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.414 15:46:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.414 15:46:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.414 15:46:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:00.414 15:46:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.415 15:46:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.415 15:46:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.415 15:46:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:00.415 15:46:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.415 15:46:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.415 15:46:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.415 15:46:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:00.415 15:46:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.415 15:46:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.415 15:46:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.415 15:46:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:00.415 15:46:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.415 15:46:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.415 15:46:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.415 15:46:29 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:05:00.415 15:46:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.415 15:46:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.415 15:46:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.415 15:46:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:00.415 15:46:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.415 15:46:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.415 15:46:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.415 15:46:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:00.415 15:46:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.415 15:46:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.415 15:46:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.415 15:46:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:00.415 15:46:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.415 15:46:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.415 15:46:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.415 15:46:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:00.415 15:46:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.415 15:46:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.415 15:46:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.415 15:46:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:00.415 15:46:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.415 15:46:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.415 15:46:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.415 15:46:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:00.415 15:46:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.415 15:46:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.415 15:46:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.415 15:46:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:00.415 15:46:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.415 15:46:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.415 15:46:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.415 15:46:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:00.415 15:46:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.415 15:46:29 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:05:00.415 15:46:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.415 15:46:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:00.415 15:46:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.415 15:46:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.415 15:46:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.415 15:46:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:00.415 15:46:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.415 15:46:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.415 15:46:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.415 15:46:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:00.415 15:46:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.415 15:46:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.415 15:46:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.415 15:46:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:00.415 15:46:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.415 15:46:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.415 15:46:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.415 15:46:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:00.415 15:46:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.415 15:46:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.415 15:46:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.415 15:46:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:00.415 15:46:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.415 15:46:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.415 15:46:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.415 15:46:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:00.415 15:46:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.415 15:46:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.415 15:46:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.415 15:46:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:00.415 15:46:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.415 15:46:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.415 15:46:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.415 15:46:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:00.415 15:46:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.415 15:46:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.415 15:46:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.415 15:46:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:00.415 15:46:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.415 15:46:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.415 15:46:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.415 15:46:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:00.415 15:46:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.415 15:46:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.415 15:46:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.415 15:46:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:00.415 15:46:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.415 15:46:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.415 15:46:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.415 15:46:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:00.415 15:46:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.415 15:46:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.415 15:46:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.415 15:46:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:00.415 15:46:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.415 15:46:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.416 15:46:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.416 15:46:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:00.416 15:46:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.416 15:46:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.416 15:46:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.416 15:46:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:00.416 15:46:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.416 15:46:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.416 15:46:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.416 15:46:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:00.416 15:46:29 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.416 15:46:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.416 15:46:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.416 15:46:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:00.416 15:46:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.416 15:46:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.416 15:46:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.416 15:46:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:05:00.416 15:46:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:00.416 15:46:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:05:00.416 15:46:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:05:00.416 nr_hugepages=1024 00:05:00.416 15:46:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:00.416 resv_hugepages=0 00:05:00.416 15:46:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:00.416 surplus_hugepages=0 00:05:00.416 15:46:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:00.416 anon_hugepages=0 00:05:00.416 15:46:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:00.416 15:46:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:05:00.416 15:46:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:00.416 15:46:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:00.416 15:46:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:05:00.416 15:46:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:00.416 15:46:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:00.416 15:46:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:00.416 15:46:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:00.416 15:46:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:00.416 15:46:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:00.416 15:46:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:00.416 15:46:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.416 15:46:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.416 15:46:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 175537688 kB' 'MemAvailable: 178409100 kB' 'Buffers: 3896 kB' 'Cached: 10150668 kB' 'SwapCached: 0 kB' 'Active: 7177692 kB' 'Inactive: 3507524 kB' 'Active(anon): 6785684 kB' 'Inactive(anon): 0 kB' 'Active(file): 392008 kB' 'Inactive(file): 3507524 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 
kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 533868 kB' 'Mapped: 185880 kB' 'Shmem: 6255032 kB' 'KReclaimable: 232660 kB' 'Slab: 807740 kB' 'SReclaimable: 232660 kB' 'SUnreclaim: 575080 kB' 'KernelStack: 20592 kB' 'PageTables: 8992 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030604 kB' 'Committed_AS: 8299884 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315660 kB' 'VmallocChunk: 0 kB' 'Percpu: 77952 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2954196 kB' 'DirectMap2M: 15599616 kB' 'DirectMap1G: 183500800 kB' 00:05:00.416 15:46:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.416 15:46:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:00.416 15:46:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.416 15:46:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.416 15:46:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.416 15:46:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:00.416 15:46:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.416 15:46:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.416 15:46:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.416 15:46:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:00.416 15:46:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.416 15:46:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.416 15:46:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.416 15:46:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:00.416 15:46:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.416 15:46:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.416 15:46:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.416 15:46:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:00.416 15:46:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.416 15:46:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.416 15:46:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.416 15:46:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:00.416 15:46:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.416 15:46:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.416 15:46:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == 
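For readers following the trace: the loop above is SPDK's get_meminfo helper in setup/common.sh taking a snapshot of /proc/meminfo with mapfile, stripping the "Node N " prefix that per-node snapshots carry, then splitting each "key: value kB" line on IFS=': ' until the requested key matches and echoing the bare value. A minimal standalone sketch of that pattern (hedged: get_meminfo_sketch is an illustrative name, not the SPDK source verbatim):

    #!/usr/bin/env bash
    # Sketch of the lookup pattern shown by the xtrace above.
    shopt -s extglob

    get_meminfo_sketch() {
        local get=$1 node=${2:-}
        local var val _ line mem_f=/proc/meminfo
        local -a mem
        # A per-node query reads the sysfs copy instead of /proc/meminfo.
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        # Per-node lines carry a "Node N " prefix; drop it so keys line up.
        mem=("${mem[@]#Node +([0-9]) }")
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] || continue   # mirrors the continue runs above
            echo "$val"                        # bare number, e.g. 1024
            return 0
        done
        return 1
    }

    get_meminfo_sketch HugePages_Rsvd   # prints 0 on the box in this log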
[xtrace elided: key-by-key scan of the snapshot until HugePages_Total matches]
00:05:00.417 15:46:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:00.417 15:46:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024
00:05:00.417 15:46:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:05:00.417 15:46:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:05:00.417 15:46:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:05:00.417 15:46:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node
00:05:00.417 15:46:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:05:00.417 15:46:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:05:00.417 15:46:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:05:00.417 15:46:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0
00:05:00.417 15:46:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
00:05:00.418 15:46:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:05:00.418 15:46:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:05:00.418 15:46:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:05:00.418 15:46:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:05:00.418 15:46:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:05:00.418 15:46:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0
00:05:00.702 15:46:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:05:00.702 15:46:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:00.702 15:46:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:00.702 15:46:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:05:00.702 15:46:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:05:00.702 15:46:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:00.702 15:46:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:00.702 15:46:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:00.702 15:46:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:00.702 15:46:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 97662684 kB' 'MemFree: 85751708 kB' 'MemUsed: 11910976 kB' 'SwapCached: 0 kB' 'Active: 5349508 kB' 'Inactive: 3336416 kB' 'Active(anon): 5191968 kB' 'Inactive(anon): 0 kB' 'Active(file): 157540 kB' 'Inactive(file): 3336416 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8537436 kB' 'Mapped: 77348 kB' 'AnonPages: 151648 kB' 'Shmem: 5043480 kB' 'KernelStack: 11464 kB' 'PageTables: 4248 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 131736 kB' 'Slab: 404928 kB' 'SReclaimable: 131736 kB' 'SUnreclaim: 273192 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
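The get_nodes fragment above enumerates the NUMA nodes by globbing /sys/devices/system/node/node+([0-9]) and records per-node hugepage counts (1024 on node0, 0 on node1, hence no_nodes=2), before re-querying node0 through the per-node meminfo file. One way to sketch that enumeration, filling the table from each node's own meminfo (illustrative names, not the SPDK source):

    #!/usr/bin/env bash
    # Enumerate NUMA nodes and read each node's HugePages_Total, in the
    # spirit of the trace's get_nodes/nodes_sys bookkeeping.
    shopt -s extglob nullglob

    declare -a nodes_sys
    for node in /sys/devices/system/node/node+([0-9]); do
        id=${node##*node}   # ".../node1" -> "1"
        # Last field of "Node 1 HugePages_Total: 0" is the count.
        nodes_sys[id]=$(awk '/HugePages_Total/ {print $NF}' "$node/meminfo")
    done
    echo "no_nodes=${#nodes_sys[@]}"   # 2 on this machine, per the trace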
15:46:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.702 15:46:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:00.702 15:46:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.702 15:46:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.702 15:46:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.702 15:46:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:00.702 15:46:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.702 15:46:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.702 15:46:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.702 15:46:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:00.702 15:46:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.702 15:46:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.702 15:46:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.702 15:46:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:00.702 15:46:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.702 15:46:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.702 15:46:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.702 15:46:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:00.702 15:46:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.702 15:46:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.702 15:46:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.702 15:46:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:00.702 15:46:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.702 15:46:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.702 15:46:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.702 15:46:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:00.702 15:46:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.702 15:46:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.702 15:46:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.702 15:46:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:00.702 15:46:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.702 15:46:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.702 15:46:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.702 15:46:29 setup.sh.hugepages.no_shrink_alloc -- 
00:05:00.702 15:46:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31-32 -- # [xtrace condensed: the read -r var val _ loop walks the remaining /proc/meminfo keys (Shmem, KernelStack, PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp, KReclaimable, Slab, SReclaimable, SUnreclaim, AnonHugePages, ShmemHugePages, ShmemPmdMapped, FileHugePages, FilePmdMapped, Unaccepted, HugePages_Total, HugePages_Free); none matches HugePages_Surp, so each iteration hits continue]
00:05:00.703 15:46:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:00.703 15:46:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:05:00.703 15:46:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:05:00.703 15:46:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:05:00.703 15:46:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:05:00.703 15:46:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:05:00.703 15:46:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:05:00.703 15:46:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:05:00.703 node0=1024 expecting 1024
00:05:00.703 15:46:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
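The loop condensed above is the core of get_meminfo in setup/common.sh: split each /proc/meminfo line on ': ', skip keys until the requested one, then print its value. A minimal sketch, assuming the xtrace reflects the script faithfully (this is reconstructed from the trace, not copied from the source, and it folds the mapfile snapshot into a plain read loop):

#!/usr/bin/env bash
# Return the value of one /proc/meminfo key, optionally for a single NUMA node.
get_meminfo() {
    local get=$1 node=${2:-}
    local var val _
    local mem_f=/proc/meminfo
    # Per-node query: read that node's own meminfo file instead. (The real
    # script also strips the leading "Node <n> " prefix those lines carry.)
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    while IFS=': ' read -r var val _; do
        # The trace shows exactly this shape: one [[ key == pattern ]] test
        # plus continue per meminfo line until the requested key matches.
        [[ $var == "$get" ]] || continue
        echo "$val"
        return 0
    done <"$mem_f"
    return 1
}

get_meminfo HugePages_Surp   # prints 0 on this run, matching the trace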
00:05:00.703 15:46:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no
00:05:00.703 15:46:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # NRHUGE=512
00:05:00.703 15:46:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # setup output
00:05:00.703 15:46:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:05:00.703 15:46:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:05:03.243 0000:00:04.7 (8086 2021): Already using the vfio-pci driver
00:05:03.243 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver
00:05:03.243 0000:00:04.6 (8086 2021): Already using the vfio-pci driver
00:05:03.243 0000:00:04.5 (8086 2021): Already using the vfio-pci driver
00:05:03.243 0000:00:04.4 (8086 2021): Already using the vfio-pci driver
00:05:03.243 0000:00:04.3 (8086 2021): Already using the vfio-pci driver
00:05:03.243 0000:00:04.2 (8086 2021): Already using the vfio-pci driver
00:05:03.243 0000:00:04.1 (8086 2021): Already using the vfio-pci driver
00:05:03.243 0000:00:04.0 (8086 2021): Already using the vfio-pci driver
00:05:03.243 0000:80:04.7 (8086 2021): Already using the vfio-pci driver
00:05:03.243 0000:80:04.6 (8086 2021): Already using the vfio-pci driver
00:05:03.243 0000:80:04.5 (8086 2021): Already using the vfio-pci driver
00:05:03.243 0000:80:04.4 (8086 2021): Already using the vfio-pci driver
00:05:03.243 0000:80:04.3 (8086 2021): Already using the vfio-pci driver
00:05:03.243 0000:80:04.2 (8086 2021): Already using the vfio-pci driver
00:05:03.243 0000:80:04.1 (8086 2021): Already using the vfio-pci driver
00:05:03.243 0000:80:04.0 (8086 2021): Already using the vfio-pci driver
00:05:03.243 INFO: Requested 512 hugepages but 1024 already allocated on node0
00:05:03.243 15:46:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@204 -- # verify_nr_hugepages
00:05:03.243 15:46:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node
00:05:03.243 15:46:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:05:03.243 15:46:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:05:03.243 15:46:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp
00:05:03.243 15:46:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv
00:05:03.243 15:46:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon
00:05:03.243 15:46:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:05:03.243 15:46:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:05:03.243 15:46:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:05:03.243 15:46:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:05:03.243 15:46:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:05:03.243 15:46:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:03.243 15:46:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:03.243 15:46:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:03.243 15:46:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:03.243 15:46:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:03.243 15:46:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:03.243 15:46:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:03.243 15:46:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:03.243 15:46:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 175590660 kB' 'MemAvailable: 178462072 kB' 'Buffers: 3896 kB' 'Cached: 10150756 kB' 'SwapCached: 0 kB' 'Active: 7177504 kB' 'Inactive: 3507524 kB' 'Active(anon): 6785496 kB' 'Inactive(anon): 0 kB' 'Active(file): 392008 kB' 'Inactive(file): 3507524 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 533428 kB' 'Mapped: 185980 kB' 'Shmem: 6255120 kB' 'KReclaimable: 232660 kB' 'Slab: 807284 kB' 'SReclaimable: 232660 kB' 'SUnreclaim: 574624 kB' 'KernelStack: 20592 kB' 'PageTables: 8992 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030604 kB' 'Committed_AS: 8298716 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315644 kB' 'VmallocChunk: 0 kB' 'Percpu: 77952 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2954196 kB' 'DirectMap2M: 15599616 kB' 'DirectMap1G: 183500800 kB'
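Before the key scan, the trace shows the whole file being slurped into an array and a per-node prefix being stripped (common.sh@28-29 above). A short sketch of that step; the array expansion is copied from the trace, the surrounding lines are an assumption:

#!/usr/bin/env bash
shopt -s extglob                       # +([0-9]) below needs extended globs
mem_f=/proc/meminfo                    # or /sys/devices/system/node/node0/meminfo
mapfile -t mem <"$mem_f"               # one array element per meminfo line
mem=("${mem[@]#Node +([0-9]) }")       # strip the "Node 0 " prefix that per-node
                                       # files carry; a no-op for /proc/meminfo
printf '%s\n' "${mem[@]}" | head -n 3  # e.g. MemTotal/MemFree/MemAvailable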
00:05:03.244 15:46:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31-32 -- # [xtrace condensed: the read loop compares every key from MemTotal through HardwareCorrupted against AnonHugePages and skips each via continue]
00:05:03.245 15:46:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:05:03.245 15:46:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:05:03.245 15:46:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:05:03.245 15:46:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0
00:05:03.245 15:46:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:05:03.245 15:46:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:05:03.245 15:46:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:05:03.245 15:46:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:05:03.245 15:46:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:03.245 15:46:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:03.245 15:46:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:03.245 15:46:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:03.245 15:46:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:03.245 15:46:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:03.245 15:46:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 175591444 kB' 'MemAvailable: 178462856 kB' 'Buffers: 3896 kB' 'Cached: 10150760 kB' 'SwapCached: 0 kB' 'Active: 7177264 kB' 'Inactive: 3507524 kB' 'Active(anon): 6785256 kB' 'Inactive(anon): 0 kB' 'Active(file): 392008 kB' 'Inactive(file): 3507524 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 533456 kB' 'Mapped: 185892 kB' 'Shmem: 6255124 kB' 'KReclaimable: 232660 kB' 'Slab: 807312 kB' 'SReclaimable: 232660 kB' 'SUnreclaim: 574652 kB' 'KernelStack: 20576 kB' 'PageTables: 8760 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030604 kB' 'Committed_AS: 8300224 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315660 kB' 'VmallocChunk: 0 kB' 'Percpu: 77952 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2954196 kB' 'DirectMap2M: 15599616 kB' 'DirectMap1G: 183500800 kB'
00:05:03.245 15:46:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:03.245 15:46:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:03.245 15:46:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31-32 -- # [xtrace condensed: the read loop compares every key from MemTotal through HugePages_Rsvd against HugePages_Surp and skips each via continue]
00:05:03.247 15:46:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:03.247 15:46:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:05:03.247 15:46:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:05:03.247 15:46:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0
00:05:03.247 15:46:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:05:03.247 15:46:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:05:03.247 15:46:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:05:03.247 15:46:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:05:03.247 15:46:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:03.247 15:46:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:03.247 15:46:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:03.247 15:46:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:03.247 15:46:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:03.247 15:46:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:03.247 15:46:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:03.247 15:46:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:03.247 15:46:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 175589768 kB' 'MemAvailable: 178461180 kB' 'Buffers: 3896 kB' 'Cached: 10150780 kB' 'SwapCached: 0 kB' 'Active: 7177160 kB' 'Inactive: 3507524 kB' 'Active(anon): 6785152 kB' 'Inactive(anon): 0 kB' 'Active(file): 392008 kB' 'Inactive(file): 3507524 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 533276 kB' 'Mapped: 185892 kB' 'Shmem: 6255144 kB' 'KReclaimable: 232660 kB' 'Slab: 807376 kB' 'SReclaimable: 232660 kB' 'SUnreclaim: 574716 kB' 'KernelStack: 20704 kB' 'PageTables: 9284 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030604 kB' 'Committed_AS: 8300248 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315692 kB' 'VmallocChunk: 0 kB' 'Percpu: 77952 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2954196 kB' 'DirectMap2M: 15599616 kB' 'DirectMap1G: 183500800 kB'
00:05:03.248 15:46:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31-32 -- # [xtrace condensed: the read loop begins comparing keys (MemTotal through PageTables in this excerpt) against HugePages_Rsvd, skipping each via continue; the scan continues past the end of this excerpt]
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.249 15:46:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.249 15:46:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.249 15:46:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:03.249 15:46:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.249 15:46:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.249 15:46:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.249 15:46:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:03.249 15:46:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.249 15:46:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.249 15:46:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.249 15:46:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:03.249 15:46:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.249 15:46:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.249 15:46:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.249 15:46:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:03.249 15:46:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.249 15:46:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.249 15:46:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.249 15:46:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:03.249 15:46:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.249 15:46:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.249 15:46:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.249 15:46:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:03.249 15:46:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.249 15:46:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.249 15:46:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.249 15:46:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:03.249 15:46:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.249 15:46:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.249 15:46:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.249 15:46:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:03.249 15:46:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.249 15:46:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:05:03.249 15:46:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.249 15:46:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:03.249 15:46:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.249 15:46:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.249 15:46:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.249 15:46:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:03.249 15:46:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.249 15:46:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.249 15:46:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.249 15:46:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:03.249 15:46:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.249 15:46:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.249 15:46:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.249 15:46:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:03.249 15:46:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.249 15:46:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.249 15:46:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.249 15:46:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:03.249 15:46:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.249 15:46:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.249 15:46:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.249 15:46:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:03.249 15:46:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.249 15:46:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.249 15:46:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.249 15:46:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:03.249 15:46:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.249 15:46:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.249 15:46:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.249 15:46:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:03.249 15:46:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.249 15:46:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.249 15:46:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.249 15:46:31 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:03.249 15:46:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.249 15:46:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.249 15:46:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.249 15:46:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:03.249 15:46:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.249 15:46:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.249 15:46:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.249 15:46:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:03.249 15:46:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.249 15:46:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.249 15:46:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.249 15:46:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:03.249 15:46:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.249 15:46:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.249 15:46:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.249 15:46:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:03.249 15:46:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.249 15:46:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.249 15:46:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.249 15:46:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:05:03.249 15:46:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:03.249 15:46:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:05:03.249 15:46:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:05:03.249 nr_hugepages=1024 00:05:03.250 15:46:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:03.250 resv_hugepages=0 00:05:03.250 15:46:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:03.250 surplus_hugepages=0 00:05:03.250 15:46:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:03.250 anon_hugepages=0 00:05:03.250 15:46:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:03.250 15:46:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:05:03.250 15:46:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:03.250 15:46:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:03.250 15:46:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:05:03.250 15:46:31 
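The long run of continue entries above is the harness's get_meminfo helper (setup/common.sh@16-@33) looking up a single field, here HugePages_Rsvd, by scanning every "key: value" pair. A minimal standalone bash sketch of the same pattern; it is simplified to read /proc/meminfo directly rather than through the mapfile/printf pipeline the trace shows, so treat the body as illustrative, not the harness's exact code:

    #!/usr/bin/env bash
    # Look up one /proc/meminfo field by name, e.g. HugePages_Rsvd -> 0.
    get_meminfo() {
        local get=$1 var val _
        while IFS=': ' read -r var val _; do
            # Every non-matching field is skipped; each skip is one of
            # the "continue" entries visible in the xtrace above.
            [[ $var == "$get" ]] || continue
            echo "$val"
            return 0
        done </proc/meminfo
        return 1
    }
    get_meminfo HugePages_Rsvd

With the snapshot traced here this prints 0, which hugepages.sh@100 stores as resv=0 before checking (( 1024 == nr_hugepages + surp + resv )).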
00:05:03.250 15:46:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:05:03.250 15:46:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:03.250 15:46:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:03.250 15:46:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:03.250 15:46:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:03.250 15:46:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:03.250 15:46:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:03.250 15:46:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:03.250 15:46:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:03.250 15:46:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 175590672 kB' 'MemAvailable: 178462084 kB' 'Buffers: 3896 kB' 'Cached: 10150800 kB' 'SwapCached: 0 kB' 'Active: 7177500 kB' 'Inactive: 3507524 kB' 'Active(anon): 6785492 kB' 'Inactive(anon): 0 kB' 'Active(file): 392008 kB' 'Inactive(file): 3507524 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 533560 kB' 'Mapped: 185892 kB' 'Shmem: 6255164 kB' 'KReclaimable: 232660 kB' 'Slab: 807376 kB' 'SReclaimable: 232660 kB' 'SUnreclaim: 574716 kB' 'KernelStack: 20656 kB' 'PageTables: 9220 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030604 kB' 'Committed_AS: 8300268 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315676 kB' 'VmallocChunk: 0 kB' 'Percpu: 77952 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2954196 kB' 'DirectMap2M: 15599616 kB' 'DirectMap1G: 183500800 kB'
[... 00:05:03.250-00:05:03.252: xtrace condensed: the setup/common.sh@31/@32 read/match/continue cycle repeats for every field, MemTotal through Unaccepted; none matches HugePages_Total ...]
00:05:03.252 15:46:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:03.252 15:46:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024
00:05:03.252 15:46:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:05:03.252 15:46:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:05:03.252 15:46:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:05:03.252 15:46:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node
00:05:03.252 15:46:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:05:03.252 15:46:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:05:03.252 15:46:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:05:03.252 15:46:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0
00:05:03.252 15:46:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
00:05:03.252 15:46:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:05:03.252 15:46:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:05:03.252 15:46:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:05:03.252 15:46:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:05:03.252 15:46:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:05:03.252 15:46:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0
00:05:03.252 15:46:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:05:03.252 15:46:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:03.252 15:46:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
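The lookup is now about to run per NUMA node: get_meminfo was called with a node argument (HugePages_Surp on node 0), so mem_f switches from /proc/meminfo to /sys/devices/system/node/node0/meminfo, whose lines carry a "Node 0" prefix that the traced '${mem[@]#Node +([0-9]) }' expansion strips. A hedged sketch of that variant; instead of the extglob expansion it absorbs the prefix with two extra read fields, so it is a simplification, not the traced helper verbatim:

    #!/usr/bin/env bash
    # Per-node lookup: lines look like "Node 0 HugePages_Surp:  0".
    node_meminfo() {
        local node=$1 get=$2 tag num var val _
        local mem_f=/sys/devices/system/node/node${node}/meminfo
        [[ -e $mem_f ]] || return 1
        while IFS=': ' read -r tag num var val _; do
            # tag="Node", num="0"; the remaining fields then split
            # exactly like the global /proc/meminfo case.
            [[ $var == "$get" ]] || continue
            echo "$val"
            return 0
        done <"$mem_f"
        return 1
    }
    node_meminfo 0 HugePages_Surp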
00:05:03.252 15:46:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:05:03.252 15:46:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:05:03.252 15:46:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:03.252 15:46:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:03.252 15:46:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:03.252 15:46:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:03.252 15:46:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 97662684 kB' 'MemFree: 85780792 kB' 'MemUsed: 11881892 kB' 'SwapCached: 0 kB' 'Active: 5349472 kB' 'Inactive: 3336416 kB' 'Active(anon): 5191932 kB' 'Inactive(anon): 0 kB' 'Active(file): 157540 kB' 'Inactive(file): 3336416 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8537540 kB' 'Mapped: 77360 kB' 'AnonPages: 151508 kB' 'Shmem: 5043584 kB' 'KernelStack: 11480 kB' 'PageTables: 4340 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 131736 kB' 'Slab: 405032 kB' 'SReclaimable: 131736 kB' 'SUnreclaim: 273296 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
[... 00:05:03.252-00:05:03.254: xtrace condensed: the setup/common.sh@31/@32 read/match/continue cycle repeats for every node0 field, MemTotal through HugePages_Free; none matches HugePages_Surp ...]
00:05:03.254 15:46:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:03.254 15:46:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:05:03.254 15:46:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:05:03.254 15:46:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:05:03.254 15:46:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:05:03.254 15:46:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:05:03.254 15:46:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:05:03.254 15:46:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:05:03.254 node0=1024 expecting 1024
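"node0=1024 expecting 1024" is the end of the bookkeeping that began at hugepages.sh@107: the global HugePages_Total must equal nr_hugepages + surplus + reserved, and the per-node counts must add back up to the global pool; the [[ 1024 == 1024 ]] assertion that follows closes the test. A small sketch of that consistency check, assuming the 2048 kB hugepage size shown in the dumps above (the sysfs path would differ for other sizes):

    #!/usr/bin/env bash
    # Sum per-node 2 MiB hugepage pools and compare to the global total.
    total=0
    for node in /sys/devices/system/node/node[0-9]*; do
        n=$(cat "$node/hugepages/hugepages-2048kB/nr_hugepages")
        echo "${node##*/}=$n"
        (( total += n ))
    done
    global=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)
    (( total == global )) && echo "expecting $global: ok"

On this rig it would print node0=1024 and node1=0, matching the nodes_sys values traced earlier.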
setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:05:03.254 00:05:03.254 real 0m4.999s 00:05:03.254 user 0m1.968s 00:05:03.254 sys 0m3.040s 00:05:03.254 15:46:31 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:03.254 15:46:31 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@10 -- # set +x 00:05:03.254 ************************************ 00:05:03.254 END TEST no_shrink_alloc 00:05:03.254 ************************************ 00:05:03.254 15:46:31 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:05:03.254 15:46:31 setup.sh.hugepages -- setup/hugepages.sh@217 -- # clear_hp 00:05:03.254 15:46:31 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:05:03.254 15:46:31 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:05:03.254 15:46:31 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:03.254 15:46:31 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:05:03.254 15:46:31 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:03.254 15:46:31 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:05:03.254 15:46:32 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:05:03.254 15:46:32 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:03.254 15:46:32 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:05:03.254 15:46:32 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:03.254 15:46:32 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:05:03.254 15:46:32 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:05:03.254 15:46:32 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:05:03.254 00:05:03.254 real 0m20.308s 00:05:03.254 user 0m7.777s 00:05:03.254 sys 0m11.918s 00:05:03.254 15:46:32 setup.sh.hugepages -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:03.254 15:46:32 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:05:03.254 ************************************ 00:05:03.254 END TEST hugepages 00:05:03.254 ************************************ 00:05:03.254 15:46:32 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:05:03.254 15:46:32 setup.sh -- setup/test-setup.sh@14 -- # run_test driver /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:05:03.254 15:46:32 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:03.254 15:46:32 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:03.254 15:46:32 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:05:03.254 ************************************ 00:05:03.254 START TEST driver 00:05:03.254 ************************************ 00:05:03.254 15:46:32 setup.sh.driver -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:05:03.254 * Looking for test storage... 
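Before the driver suite output continues, it is worth decoding the hugepages teardown just above: clear_hp in setup/hugepages.sh walks every NUMA node and zeroes each per-size hugepage count. The sysfs writes themselves are invisible in the trace because bash xtrace does not print redirections; a minimal standalone equivalent, assuming the standard sysfs layout and root privileges, looks like:

for node in /sys/devices/system/node/node[0-9]*; do
    for hp in "$node"/hugepages/hugepages-*; do
        # release every reserved page of this size on this node
        echo 0 > "$hp/nr_hugepages"
    done
done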
00:05:03.254 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:05:03.254 15:46:32 setup.sh.driver -- setup/driver.sh@68 -- # setup reset 00:05:03.254 15:46:32 setup.sh.driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:03.254 15:46:32 setup.sh.driver -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:05:07.446 15:46:35 setup.sh.driver -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:05:07.446 15:46:35 setup.sh.driver -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:07.446 15:46:35 setup.sh.driver -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:07.446 15:46:35 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:05:07.446 ************************************ 00:05:07.446 START TEST guess_driver 00:05:07.446 ************************************ 00:05:07.446 15:46:35 setup.sh.driver.guess_driver -- common/autotest_common.sh@1123 -- # guess_driver 00:05:07.446 15:46:35 setup.sh.driver.guess_driver -- setup/driver.sh@46 -- # local driver setup_driver marker 00:05:07.446 15:46:35 setup.sh.driver.guess_driver -- setup/driver.sh@47 -- # local fail=0 00:05:07.446 15:46:35 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # pick_driver 00:05:07.446 15:46:35 setup.sh.driver.guess_driver -- setup/driver.sh@36 -- # vfio 00:05:07.446 15:46:35 setup.sh.driver.guess_driver -- setup/driver.sh@21 -- # local iommu_grups 00:05:07.446 15:46:35 setup.sh.driver.guess_driver -- setup/driver.sh@22 -- # local unsafe_vfio 00:05:07.446 15:46:35 setup.sh.driver.guess_driver -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:05:07.446 15:46:35 setup.sh.driver.guess_driver -- setup/driver.sh@25 -- # unsafe_vfio=N 00:05:07.446 15:46:35 setup.sh.driver.guess_driver -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:05:07.446 15:46:35 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # (( 174 > 0 )) 00:05:07.446 15:46:35 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # is_driver vfio_pci 00:05:07.446 15:46:35 setup.sh.driver.guess_driver -- setup/driver.sh@14 -- # mod vfio_pci 00:05:07.446 15:46:35 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # dep vfio_pci 00:05:07.446 15:46:35 setup.sh.driver.guess_driver -- setup/driver.sh@11 -- # modprobe --show-depends vfio_pci 00:05:07.446 15:46:35 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/virt/lib/irqbypass.ko.xz 00:05:07.446 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:05:07.446 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:05:07.446 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:05:07.446 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:05:07.446 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio_iommu_type1.ko.xz 00:05:07.446 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci-core.ko.xz 00:05:07.446 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci.ko.xz == *\.\k\o* ]] 00:05:07.446 15:46:35 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # return 0 00:05:07.446 15:46:35 setup.sh.driver.guess_driver -- setup/driver.sh@37 -- # echo vfio-pci 00:05:07.446 15:46:35 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # driver=vfio-pci 00:05:07.446 15:46:35 setup.sh.driver.guess_driver 
-- setup/driver.sh@51 -- # [[ vfio-pci == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:05:07.446 15:46:35 setup.sh.driver.guess_driver -- setup/driver.sh@56 -- # echo 'Looking for driver=vfio-pci' 00:05:07.446 Looking for driver=vfio-pci 00:05:07.446 15:46:35 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:07.446 15:46:35 setup.sh.driver.guess_driver -- setup/driver.sh@45 -- # setup output config 00:05:07.446 15:46:35 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ output == output ]] 00:05:07.446 15:46:35 setup.sh.driver.guess_driver -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:05:09.976 15:46:38 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:09.976 15:46:38 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:09.976 15:46:38 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:09.976 15:46:38 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:09.976 15:46:38 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:09.976 15:46:38 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:09.976 15:46:38 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:09.976 15:46:38 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:09.976 15:46:38 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:09.976 15:46:38 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:09.976 15:46:38 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:09.976 15:46:38 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:09.976 15:46:38 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:09.976 15:46:38 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:09.976 15:46:38 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:09.976 15:46:38 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:09.976 15:46:38 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:09.976 15:46:38 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:09.976 15:46:38 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:09.976 15:46:38 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:09.976 15:46:38 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:09.976 15:46:38 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:09.976 15:46:38 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:09.976 15:46:38 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:09.976 15:46:38 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:09.976 15:46:38 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:09.976 15:46:38 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:09.976 15:46:38 
setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:09.976 15:46:38 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:09.976 15:46:38 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:09.976 15:46:38 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:09.977 15:46:38 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:09.977 15:46:38 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:09.977 15:46:38 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:09.977 15:46:38 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:09.977 15:46:38 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:09.977 15:46:38 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:09.977 15:46:38 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:09.977 15:46:38 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:09.977 15:46:38 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:09.977 15:46:38 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:09.977 15:46:38 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:09.977 15:46:38 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:09.977 15:46:38 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:09.977 15:46:38 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:09.977 15:46:38 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:09.977 15:46:38 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:09.977 15:46:38 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:10.544 15:46:39 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:10.544 15:46:39 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:10.544 15:46:39 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:10.544 15:46:39 setup.sh.driver.guess_driver -- setup/driver.sh@64 -- # (( fail == 0 )) 00:05:10.544 15:46:39 setup.sh.driver.guess_driver -- setup/driver.sh@65 -- # setup reset 00:05:10.544 15:46:39 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:10.544 15:46:39 setup.sh.driver.guess_driver -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:05:14.736 00:05:14.736 real 0m7.503s 00:05:14.736 user 0m2.144s 00:05:14.736 sys 0m3.868s 00:05:14.736 15:46:43 setup.sh.driver.guess_driver -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:14.736 15:46:43 setup.sh.driver.guess_driver -- common/autotest_common.sh@10 -- # set +x 00:05:14.736 ************************************ 00:05:14.736 END TEST guess_driver 00:05:14.736 ************************************ 00:05:14.736 15:46:43 setup.sh.driver -- common/autotest_common.sh@1142 -- # return 0 00:05:14.736 00:05:14.736 real 0m11.176s 00:05:14.736 user 0m3.103s 00:05:14.736 sys 0m5.775s 00:05:14.736 15:46:43 
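The guess_driver test that just finished settles on vfio-pci when two conditions hold: the kernel exposes at least one IOMMU group, and modprobe can resolve vfio_pci's full dependency chain down to .ko files. Condensed into a standalone sketch (the uio fallback name is an assumption; this run only exercises the vfio branch):

shopt -s nullglob                      # so an empty group directory yields count 0
groups=(/sys/kernel/iommu_groups/*)
if (( ${#groups[@]} > 0 )) && modprobe --show-depends vfio_pci | grep -q '\.ko'; then
    driver=vfio-pci
else
    driver=uio_pci_generic             # assumed fallback; not taken in the log above
fi
echo "Looking for driver=$driver"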
setup.sh.driver -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:14.736 15:46:43 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:05:14.736 ************************************ 00:05:14.736 END TEST driver 00:05:14.736 ************************************ 00:05:14.736 15:46:43 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:05:14.736 15:46:43 setup.sh -- setup/test-setup.sh@15 -- # run_test devices /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:05:14.736 15:46:43 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:14.736 15:46:43 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:14.736 15:46:43 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:05:14.736 ************************************ 00:05:14.736 START TEST devices 00:05:14.736 ************************************ 00:05:14.736 15:46:43 setup.sh.devices -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:05:14.736 * Looking for test storage... 00:05:14.736 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:05:14.736 15:46:43 setup.sh.devices -- setup/devices.sh@190 -- # trap cleanup EXIT 00:05:14.736 15:46:43 setup.sh.devices -- setup/devices.sh@192 -- # setup reset 00:05:14.736 15:46:43 setup.sh.devices -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:14.736 15:46:43 setup.sh.devices -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:05:18.020 15:46:46 setup.sh.devices -- setup/devices.sh@194 -- # get_zoned_devs 00:05:18.020 15:46:46 setup.sh.devices -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:05:18.020 15:46:46 setup.sh.devices -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:05:18.020 15:46:46 setup.sh.devices -- common/autotest_common.sh@1670 -- # local nvme bdf 00:05:18.020 15:46:46 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:05:18.020 15:46:46 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:05:18.020 15:46:46 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:05:18.020 15:46:46 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:05:18.020 15:46:46 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:05:18.020 15:46:46 setup.sh.devices -- setup/devices.sh@196 -- # blocks=() 00:05:18.020 15:46:46 setup.sh.devices -- setup/devices.sh@196 -- # declare -a blocks 00:05:18.020 15:46:46 setup.sh.devices -- setup/devices.sh@197 -- # blocks_to_pci=() 00:05:18.020 15:46:46 setup.sh.devices -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:05:18.020 15:46:46 setup.sh.devices -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:05:18.020 15:46:46 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:05:18.020 15:46:46 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:05:18.020 15:46:46 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:05:18.020 15:46:46 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:5e:00.0 00:05:18.020 15:46:46 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\5\e\:\0\0\.\0* ]] 00:05:18.020 15:46:46 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:05:18.020 15:46:46 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:05:18.020 
15:46:46 setup.sh.devices -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:05:18.020 No valid GPT data, bailing 00:05:18.020 15:46:46 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:05:18.020 15:46:46 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:05:18.020 15:46:46 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:05:18.020 15:46:46 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:05:18.020 15:46:46 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n1 00:05:18.020 15:46:46 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:05:18.020 15:46:46 setup.sh.devices -- setup/common.sh@80 -- # echo 1000204886016 00:05:18.020 15:46:46 setup.sh.devices -- setup/devices.sh@204 -- # (( 1000204886016 >= min_disk_size )) 00:05:18.020 15:46:46 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:05:18.020 15:46:46 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:5e:00.0 00:05:18.020 15:46:46 setup.sh.devices -- setup/devices.sh@209 -- # (( 1 > 0 )) 00:05:18.020 15:46:46 setup.sh.devices -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:05:18.020 15:46:46 setup.sh.devices -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:05:18.020 15:46:46 setup.sh.devices -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:18.020 15:46:46 setup.sh.devices -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:18.020 15:46:46 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:05:18.020 ************************************ 00:05:18.020 START TEST nvme_mount 00:05:18.020 ************************************ 00:05:18.020 15:46:46 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1123 -- # nvme_mount 00:05:18.020 15:46:46 setup.sh.devices.nvme_mount -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:05:18.020 15:46:46 setup.sh.devices.nvme_mount -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:05:18.020 15:46:46 setup.sh.devices.nvme_mount -- setup/devices.sh@97 -- # nvme_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:18.020 15:46:46 setup.sh.devices.nvme_mount -- setup/devices.sh@98 -- # nvme_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:18.020 15:46:46 setup.sh.devices.nvme_mount -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:05:18.020 15:46:46 setup.sh.devices.nvme_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:05:18.020 15:46:46 setup.sh.devices.nvme_mount -- setup/common.sh@40 -- # local part_no=1 00:05:18.020 15:46:46 setup.sh.devices.nvme_mount -- setup/common.sh@41 -- # local size=1073741824 00:05:18.020 15:46:46 setup.sh.devices.nvme_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:05:18.020 15:46:46 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # parts=() 00:05:18.020 15:46:46 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # local parts 00:05:18.020 15:46:46 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:05:18.020 15:46:46 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:18.020 15:46:46 setup.sh.devices.nvme_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:18.021 15:46:46 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part++ )) 00:05:18.021 15:46:46 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- 
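The device scan above filters candidates three ways before nvme0n1 is declared the test disk: zoned namespaces are skipped, disks already carrying a partition table are skipped (so the "No valid GPT data, bailing" from spdk-gpt.py is the pass case here), and anything under the 3 GiB minimum is rejected. A condensed sketch of those checks, using only the stock tools seen in the log:

min_disk_size=$((3 * 1024 * 1024 * 1024))        # 3221225472 bytes, as in devices.sh@198
for sysdev in /sys/block/nvme*n*; do
    dev=${sysdev##*/}
    # zoned namespaces are left alone
    [[ -e $sysdev/queue/zoned && $(cat "$sysdev/queue/zoned") != none ]] && continue
    # an existing partition table means the disk is in use
    [[ -n $(blkid -s PTTYPE -o value "/dev/$dev") ]] && continue
    size=$(( $(cat "$sysdev/size") * 512 ))      # /sys/block sizes are 512-byte sectors
    (( size >= min_disk_size )) && echo "candidate: $dev ($size bytes)"
done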
# (( part <= part_no )) 00:05:18.021 15:46:46 setup.sh.devices.nvme_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:05:18.021 15:46:46 setup.sh.devices.nvme_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:05:18.021 15:46:46 setup.sh.devices.nvme_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:05:18.589 Creating new GPT entries in memory. 00:05:18.589 GPT data structures destroyed! You may now partition the disk using fdisk or 00:05:18.589 other utilities. 00:05:18.589 15:46:47 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:05:18.589 15:46:47 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:18.589 15:46:47 setup.sh.devices.nvme_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:05:18.589 15:46:47 setup.sh.devices.nvme_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:18.589 15:46:47 setup.sh.devices.nvme_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:05:19.524 Creating new GPT entries in memory. 00:05:19.524 The operation has completed successfully. 00:05:19.524 15:46:48 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part++ )) 00:05:19.524 15:46:48 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:19.524 15:46:48 setup.sh.devices.nvme_mount -- setup/common.sh@62 -- # wait 3566561 00:05:19.524 15:46:48 setup.sh.devices.nvme_mount -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:19.524 15:46:48 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size= 00:05:19.524 15:46:48 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:19.524 15:46:48 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:05:19.524 15:46:48 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:05:19.524 15:46:48 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:19.818 15:46:48 setup.sh.devices.nvme_mount -- setup/devices.sh@105 -- # verify 0000:5e:00.0 nvme0n1:nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:19.818 15:46:48 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:5e:00.0 00:05:19.818 15:46:48 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:05:19.818 15:46:48 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:19.818 15:46:48 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:19.818 15:46:48 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:05:19.818 15:46:48 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:19.818 15:46:48 
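Stripped of the test plumbing, the nvme_mount setup that just ran is a four-step sequence: wipe the disk's partition structures, carve a single 1 GiB partition (sectors 2048 through 2099199), format it ext4, and mount it. The same steps as standalone commands (destructive; the disk and mount point below are placeholders):

disk=/dev/nvme0n1          # placeholder: the test disk selected above
mnt=/tmp/nvme_mount        # placeholder mount point
sgdisk "$disk" --zap-all                            # destroy GPT and protective MBR
flock "$disk" sgdisk "$disk" --new=1:2048:2099199   # 2097152 sectors = 1 GiB, serialized via flock
mkfs.ext4 -qF "${disk}p1"
mkdir -p "$mnt"
mount "${disk}p1" "$mnt"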
setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:05:19.818 15:46:48 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:05:19.818 15:46:48 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:19.818 15:46:48 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:5e:00.0 00:05:19.818 15:46:48 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:05:19.818 15:46:48 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:19.818 15:46:48 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:05:22.352 15:46:50 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:5e:00.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:22.352 15:46:50 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:05:22.352 15:46:50 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:05:22.352 15:46:50 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:22.352 15:46:50 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:22.352 15:46:50 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:22.352 15:46:50 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:22.352 15:46:50 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:22.352 15:46:50 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:22.352 15:46:50 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:22.352 15:46:50 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:22.352 15:46:50 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:22.352 15:46:50 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:22.352 15:46:50 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:22.352 15:46:50 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:22.352 15:46:50 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:22.352 15:46:50 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:22.352 15:46:50 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:22.352 15:46:50 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:22.352 15:46:50 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:22.352 15:46:50 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:22.352 15:46:50 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:22.352 15:46:50 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:22.352 15:46:50 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:22.352 15:46:50 setup.sh.devices.nvme_mount -- 
setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:22.352 15:46:50 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:22.352 15:46:50 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:22.352 15:46:50 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:22.352 15:46:50 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:22.352 15:46:50 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:22.352 15:46:50 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:22.352 15:46:50 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:22.352 15:46:50 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:22.352 15:46:50 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:22.352 15:46:50 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:22.352 15:46:50 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:22.352 15:46:50 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:22.352 15:46:50 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:05:22.352 15:46:50 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:22.352 15:46:50 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:22.352 15:46:50 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:22.352 15:46:50 setup.sh.devices.nvme_mount -- setup/devices.sh@110 -- # cleanup_nvme 00:05:22.352 15:46:50 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:22.352 15:46:50 setup.sh.devices.nvme_mount -- setup/devices.sh@21 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:22.352 15:46:50 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:22.352 15:46:50 setup.sh.devices.nvme_mount -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:05:22.352 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:22.352 15:46:50 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:22.352 15:46:50 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:22.352 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:05:22.352 /dev/nvme0n1: 8 bytes were erased at offset 0xe8e0db5e00 (gpt): 45 46 49 20 50 41 52 54 00:05:22.352 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:05:22.352 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:05:22.352 15:46:51 setup.sh.devices.nvme_mount -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 1024M 00:05:22.352 15:46:51 setup.sh.devices.nvme_mount -- 
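The wipefs output above is worth reading: clearing /dev/nvme0n1 erases three separate signatures (the primary GPT header at offset 0x200, the backup GPT near the end of the disk, and the protective MBR at 0x1fe), which is what lets the disk be reused whole in the next step. The teardown pattern, as a guarded sketch with the same placeholder paths as before:

mountpoint -q "$mnt" && umount "$mnt"
[[ -b ${disk}p1 ]] && wipefs --all "${disk}p1"   # ext4 superblock magic (53 ef)
[[ -b $disk ]] && wipefs --all "$disk"           # both GPT copies plus the protective MBR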
setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size=1024M 00:05:22.352 15:46:51 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:22.352 15:46:51 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:05:22.352 15:46:51 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:05:22.352 15:46:51 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:22.352 15:46:51 setup.sh.devices.nvme_mount -- setup/devices.sh@116 -- # verify 0000:5e:00.0 nvme0n1:nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:22.352 15:46:51 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:5e:00.0 00:05:22.352 15:46:51 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:05:22.352 15:46:51 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:22.352 15:46:51 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:22.352 15:46:51 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:05:22.352 15:46:51 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:22.352 15:46:51 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:05:22.352 15:46:51 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:05:22.352 15:46:51 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:22.352 15:46:51 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:5e:00.0 00:05:22.352 15:46:51 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:05:22.352 15:46:51 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:22.352 15:46:51 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:05:24.912 15:46:53 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:5e:00.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:24.912 15:46:53 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:05:24.912 15:46:53 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:05:24.912 15:46:53 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:24.912 15:46:53 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:24.912 15:46:53 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:24.912 15:46:53 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:24.912 15:46:53 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:24.912 15:46:53 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 
== \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:24.912 15:46:53 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:24.912 15:46:53 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:24.912 15:46:53 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:24.912 15:46:53 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:24.912 15:46:53 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:24.912 15:46:53 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:24.912 15:46:53 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:24.912 15:46:53 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:24.912 15:46:53 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:24.912 15:46:53 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:24.912 15:46:53 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:24.912 15:46:53 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:24.912 15:46:53 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:24.912 15:46:53 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:24.912 15:46:53 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:24.912 15:46:53 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:24.912 15:46:53 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:24.912 15:46:53 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:24.912 15:46:53 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:24.912 15:46:53 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:24.912 15:46:53 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:24.912 15:46:53 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:24.912 15:46:53 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:24.912 15:46:53 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:24.912 15:46:53 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:24.912 15:46:53 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:24.912 15:46:53 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:25.170 15:46:53 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:25.170 15:46:53 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:05:25.170 15:46:53 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:25.170 15:46:53 setup.sh.devices.nvme_mount -- 
setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:25.170 15:46:53 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:25.170 15:46:53 setup.sh.devices.nvme_mount -- setup/devices.sh@123 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:25.170 15:46:53 setup.sh.devices.nvme_mount -- setup/devices.sh@125 -- # verify 0000:5e:00.0 data@nvme0n1 '' '' 00:05:25.170 15:46:53 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:5e:00.0 00:05:25.170 15:46:53 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:05:25.170 15:46:53 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point= 00:05:25.170 15:46:53 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file= 00:05:25.170 15:46:53 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:05:25.170 15:46:53 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:05:25.170 15:46:53 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:05:25.170 15:46:53 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:25.170 15:46:53 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:5e:00.0 00:05:25.170 15:46:53 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:05:25.170 15:46:53 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:25.170 15:46:53 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:05:27.699 15:46:56 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:5e:00.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:27.699 15:46:56 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:05:27.699 15:46:56 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:05:27.699 15:46:56 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:27.699 15:46:56 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:27.699 15:46:56 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:27.699 15:46:56 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:27.699 15:46:56 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:27.699 15:46:56 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:27.699 15:46:56 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:27.699 15:46:56 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:27.699 15:46:56 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:27.699 15:46:56 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:27.699 15:46:56 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:27.699 15:46:56 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 
00:05:27.699 15:46:56 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:27.699 15:46:56 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:27.699 15:46:56 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:27.699 15:46:56 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:27.699 15:46:56 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:27.699 15:46:56 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:27.699 15:46:56 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:27.699 15:46:56 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:27.699 15:46:56 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:27.699 15:46:56 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:27.699 15:46:56 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:27.699 15:46:56 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:27.699 15:46:56 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:27.699 15:46:56 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:27.699 15:46:56 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:27.699 15:46:56 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:27.699 15:46:56 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:27.699 15:46:56 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:27.699 15:46:56 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:27.699 15:46:56 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:27.699 15:46:56 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:27.958 15:46:56 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:27.958 15:46:56 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:05:27.958 15:46:56 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # return 0 00:05:27.958 15:46:56 setup.sh.devices.nvme_mount -- setup/devices.sh@128 -- # cleanup_nvme 00:05:27.958 15:46:56 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:27.958 15:46:56 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:27.958 15:46:56 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:27.958 15:46:56 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:27.958 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:27.958 00:05:27.958 real 0m10.374s 00:05:27.958 user 0m2.914s 00:05:27.958 sys 0m5.220s 00:05:27.958 15:46:56 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:27.958 15:46:56 setup.sh.devices.nvme_mount -- 
common/autotest_common.sh@10 -- # set +x 00:05:27.958 ************************************ 00:05:27.958 END TEST nvme_mount 00:05:27.958 ************************************ 00:05:27.958 15:46:56 setup.sh.devices -- common/autotest_common.sh@1142 -- # return 0 00:05:27.958 15:46:56 setup.sh.devices -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:05:27.958 15:46:56 setup.sh.devices -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:27.958 15:46:56 setup.sh.devices -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:27.959 15:46:56 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:05:27.959 ************************************ 00:05:27.959 START TEST dm_mount 00:05:27.959 ************************************ 00:05:27.959 15:46:56 setup.sh.devices.dm_mount -- common/autotest_common.sh@1123 -- # dm_mount 00:05:27.959 15:46:56 setup.sh.devices.dm_mount -- setup/devices.sh@144 -- # pv=nvme0n1 00:05:27.959 15:46:56 setup.sh.devices.dm_mount -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:05:27.959 15:46:56 setup.sh.devices.dm_mount -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:05:27.959 15:46:56 setup.sh.devices.dm_mount -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:05:27.959 15:46:56 setup.sh.devices.dm_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:05:27.959 15:46:56 setup.sh.devices.dm_mount -- setup/common.sh@40 -- # local part_no=2 00:05:27.959 15:46:56 setup.sh.devices.dm_mount -- setup/common.sh@41 -- # local size=1073741824 00:05:27.959 15:46:56 setup.sh.devices.dm_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:05:27.959 15:46:56 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # parts=() 00:05:27.959 15:46:56 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # local parts 00:05:27.959 15:46:56 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:05:27.959 15:46:56 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:27.959 15:46:56 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:27.959 15:46:56 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:05:27.959 15:46:56 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:27.959 15:46:56 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:27.959 15:46:56 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:05:27.959 15:46:56 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:27.959 15:46:56 setup.sh.devices.dm_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:05:27.959 15:46:56 setup.sh.devices.dm_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:05:27.959 15:46:56 setup.sh.devices.dm_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:05:28.894 Creating new GPT entries in memory. 00:05:28.894 GPT data structures destroyed! You may now partition the disk using fdisk or 00:05:28.894 other utilities. 00:05:28.894 15:46:57 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:05:28.894 15:46:57 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:28.894 15:46:57 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 
2048 : part_end + 1 )) 00:05:28.894 15:46:57 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:28.894 15:46:57 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:05:30.268 Creating new GPT entries in memory. 00:05:30.268 The operation has completed successfully. 00:05:30.268 15:46:58 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:05:30.268 15:46:58 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:30.268 15:46:58 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:05:30.268 15:46:58 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:30.268 15:46:58 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:2099200:4196351 00:05:31.201 The operation has completed successfully. 00:05:31.201 15:46:59 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:05:31.201 15:46:59 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:31.201 15:46:59 setup.sh.devices.dm_mount -- setup/common.sh@62 -- # wait 3570669 00:05:31.201 15:46:59 setup.sh.devices.dm_mount -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:05:31.201 15:46:59 setup.sh.devices.dm_mount -- setup/devices.sh@151 -- # dm_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:31.201 15:46:59 setup.sh.devices.dm_mount -- setup/devices.sh@152 -- # dm_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:05:31.201 15:46:59 setup.sh.devices.dm_mount -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:05:31.201 15:46:59 setup.sh.devices.dm_mount -- setup/devices.sh@160 -- # for t in {1..5} 00:05:31.201 15:46:59 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:31.201 15:46:59 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # break 00:05:31.201 15:46:59 setup.sh.devices.dm_mount -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:31.201 15:46:59 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:05:31.201 15:46:59 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # dm=/dev/dm-2 00:05:31.201 15:46:59 setup.sh.devices.dm_mount -- setup/devices.sh@166 -- # dm=dm-2 00:05:31.201 15:46:59 setup.sh.devices.dm_mount -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-2 ]] 00:05:31.201 15:46:59 setup.sh.devices.dm_mount -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-2 ]] 00:05:31.201 15:46:59 setup.sh.devices.dm_mount -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:31.201 15:46:59 setup.sh.devices.dm_mount -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount size= 00:05:31.201 15:46:59 setup.sh.devices.dm_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:31.201 15:46:59 setup.sh.devices.dm_mount -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:31.201 15:46:59 setup.sh.devices.dm_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:05:31.201 15:46:59 setup.sh.devices.dm_mount -- 
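Note that the trace shows a bare `dmsetup create nvme_dm_test`: the mapping table arrives on stdin, which xtrace never prints. The holders entries that follow (both nvme0n1p1 and nvme0n1p2 pointing at dm-2) are consistent with a linear concatenation of the two 1 GiB partitions; a sketch of such a table (the script's exact table is not visible in this log):

p1=/dev/nvme0n1p1
p2=/dev/nvme0n1p2
sz=$(blockdev --getsz "$p1")        # leg size in 512-byte sectors
dmsetup create nvme_dm_test <<EOF
0 $sz linear $p1 0
$sz $(blockdev --getsz "$p2") linear $p2 0
EOF
# the result appears as /dev/mapper/nvme_dm_test -> /dev/dm-N
mkfs.ext4 -qF /dev/mapper/nvme_dm_test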
setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:31.201 15:46:59 setup.sh.devices.dm_mount -- setup/devices.sh@174 -- # verify 0000:5e:00.0 nvme0n1:nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:05:31.201 15:46:59 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:5e:00.0 00:05:31.201 15:46:59 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:05:31.201 15:46:59 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:31.201 15:46:59 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:05:31.201 15:46:59 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:05:31.201 15:46:59 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:05:31.201 15:46:59 setup.sh.devices.dm_mount -- setup/devices.sh@56 -- # : 00:05:31.201 15:46:59 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:05:31.201 15:46:59 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:31.201 15:46:59 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:5e:00.0 00:05:31.201 15:46:59 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:05:31.201 15:46:59 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:31.201 15:46:59 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:05:33.730 15:47:02 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:5e:00.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:33.730 15:47:02 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-2,holder@nvme0n1p2:dm-2,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:05:33.730 15:47:02 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:05:33.730 15:47:02 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:33.730 15:47:02 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:33.730 15:47:02 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:33.730 15:47:02 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:33.730 15:47:02 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:33.730 15:47:02 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:33.730 15:47:02 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:33.730 15:47:02 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:33.730 15:47:02 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:33.730 15:47:02 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:33.730 15:47:02 setup.sh.devices.dm_mount -- 
setup/devices.sh@60 -- # read -r pci _ _ status 00:05:33.730 15:47:02 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:33.730 15:47:02 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:33.730 15:47:02 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:33.730 15:47:02 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:33.730 15:47:02 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:33.730 15:47:02 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:33.730 15:47:02 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:33.730 15:47:02 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:33.730 15:47:02 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:33.730 15:47:02 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:33.730 15:47:02 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:33.730 15:47:02 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:33.730 15:47:02 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:33.730 15:47:02 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:33.730 15:47:02 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:33.730 15:47:02 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:33.730 15:47:02 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:33.731 15:47:02 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:33.731 15:47:02 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:33.731 15:47:02 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:33.731 15:47:02 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:33.731 15:47:02 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:33.731 15:47:02 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:33.731 15:47:02 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount ]] 00:05:33.731 15:47:02 setup.sh.devices.dm_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:33.731 15:47:02 setup.sh.devices.dm_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:05:33.731 15:47:02 setup.sh.devices.dm_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:05:33.731 15:47:02 setup.sh.devices.dm_mount -- setup/devices.sh@182 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:33.731 15:47:02 setup.sh.devices.dm_mount -- setup/devices.sh@184 -- # verify 0000:5e:00.0 holder@nvme0n1p1:dm-2,holder@nvme0n1p2:dm-2 '' '' 00:05:33.731 15:47:02 
setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:5e:00.0 00:05:33.731 15:47:02 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-2,holder@nvme0n1p2:dm-2 00:05:33.731 15:47:02 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point= 00:05:33.731 15:47:02 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file= 00:05:33.731 15:47:02 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:05:33.731 15:47:02 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:05:33.731 15:47:02 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:05:33.731 15:47:02 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:33.731 15:47:02 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:5e:00.0 00:05:33.731 15:47:02 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:05:33.731 15:47:02 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:33.731 15:47:02 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:05:37.016 15:47:05 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:5e:00.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:37.016 15:47:05 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-2,holder@nvme0n1p2:dm-2, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\2\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\2* ]] 00:05:37.016 15:47:05 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:05:37.016 15:47:05 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:37.016 15:47:05 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:37.016 15:47:05 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:37.016 15:47:05 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:37.016 15:47:05 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:37.016 15:47:05 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:37.016 15:47:05 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:37.016 15:47:05 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:37.016 15:47:05 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:37.016 15:47:05 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:37.016 15:47:05 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:37.016 15:47:05 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:37.016 15:47:05 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:37.016 15:47:05 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:37.016 15:47:05 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:37.016 15:47:05 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:37.016 15:47:05 setup.sh.devices.dm_mount -- 
setup/devices.sh@60 -- # read -r pci _ _ status 00:05:37.016 15:47:05 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:37.016 15:47:05 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:37.016 15:47:05 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:37.016 15:47:05 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:37.016 15:47:05 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:37.016 15:47:05 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:37.016 15:47:05 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:37.016 15:47:05 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:37.016 15:47:05 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:37.016 15:47:05 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:37.016 15:47:05 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:37.016 15:47:05 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:37.016 15:47:05 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:37.016 15:47:05 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:37.016 15:47:05 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:37.016 15:47:05 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:37.016 15:47:05 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:37.016 15:47:05 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:05:37.016 15:47:05 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # return 0 00:05:37.016 15:47:05 setup.sh.devices.dm_mount -- setup/devices.sh@187 -- # cleanup_dm 00:05:37.016 15:47:05 setup.sh.devices.dm_mount -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:37.016 15:47:05 setup.sh.devices.dm_mount -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:05:37.016 15:47:05 setup.sh.devices.dm_mount -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:05:37.016 15:47:05 setup.sh.devices.dm_mount -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:37.016 15:47:05 setup.sh.devices.dm_mount -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:05:37.016 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:37.016 15:47:05 setup.sh.devices.dm_mount -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:05:37.016 15:47:05 setup.sh.devices.dm_mount -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:05:37.016 00:05:37.016 real 0m8.747s 00:05:37.016 user 0m2.048s 00:05:37.016 sys 0m3.700s 00:05:37.016 15:47:05 setup.sh.devices.dm_mount -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:37.016 15:47:05 setup.sh.devices.dm_mount -- common/autotest_common.sh@10 -- # set +x 00:05:37.016 ************************************ 00:05:37.016 END TEST dm_mount 00:05:37.016 ************************************ 00:05:37.016 15:47:05 setup.sh.devices -- common/autotest_common.sh@1142 -- # return 
0 00:05:37.016 15:47:05 setup.sh.devices -- setup/devices.sh@1 -- # cleanup 00:05:37.016 15:47:05 setup.sh.devices -- setup/devices.sh@11 -- # cleanup_nvme 00:05:37.017 15:47:05 setup.sh.devices -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:37.017 15:47:05 setup.sh.devices -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:37.017 15:47:05 setup.sh.devices -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:05:37.017 15:47:05 setup.sh.devices -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:37.017 15:47:05 setup.sh.devices -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:37.017 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:05:37.017 /dev/nvme0n1: 8 bytes were erased at offset 0xe8e0db5e00 (gpt): 45 46 49 20 50 41 52 54 00:05:37.017 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:05:37.017 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:05:37.017 15:47:05 setup.sh.devices -- setup/devices.sh@12 -- # cleanup_dm 00:05:37.017 15:47:05 setup.sh.devices -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:37.017 15:47:05 setup.sh.devices -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:05:37.017 15:47:05 setup.sh.devices -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:37.017 15:47:05 setup.sh.devices -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:05:37.017 15:47:05 setup.sh.devices -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:05:37.017 15:47:05 setup.sh.devices -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:05:37.017 00:05:37.017 real 0m22.536s 00:05:37.017 user 0m6.112s 00:05:37.017 sys 0m11.028s 00:05:37.017 15:47:05 setup.sh.devices -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:37.017 15:47:05 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:05:37.017 ************************************ 00:05:37.017 END TEST devices 00:05:37.017 ************************************ 00:05:37.017 15:47:05 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:05:37.017 00:05:37.017 real 1m12.631s 00:05:37.017 user 0m22.898s 00:05:37.017 sys 0m39.689s 00:05:37.017 15:47:05 setup.sh -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:37.017 15:47:05 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:05:37.017 ************************************ 00:05:37.017 END TEST setup.sh 00:05:37.017 ************************************ 00:05:37.017 15:47:05 -- common/autotest_common.sh@1142 -- # return 0 00:05:37.017 15:47:05 -- spdk/autotest.sh@128 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:05:40.303 Hugepages 00:05:40.303 node hugesize free / total 00:05:40.303 node0 1048576kB 0 / 0 00:05:40.303 node0 2048kB 2048 / 2048 00:05:40.303 node1 1048576kB 0 / 0 00:05:40.303 node1 2048kB 0 / 0 00:05:40.303 00:05:40.303 Type BDF Vendor Device NUMA Driver Device Block devices 00:05:40.303 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - - 00:05:40.303 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - - 00:05:40.303 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - - 00:05:40.303 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - - 00:05:40.303 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - - 00:05:40.303 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - - 00:05:40.303 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - - 00:05:40.303 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - - 00:05:40.303 NVMe 
0000:5e:00.0 8086 0a54 0 nvme nvme0 nvme0n1 00:05:40.303 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - - 00:05:40.303 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - - 00:05:40.303 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - - 00:05:40.303 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - - 00:05:40.303 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - - 00:05:40.303 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - - 00:05:40.303 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - - 00:05:40.303 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - - 00:05:40.303 15:47:08 -- spdk/autotest.sh@130 -- # uname -s 00:05:40.303 15:47:08 -- spdk/autotest.sh@130 -- # [[ Linux == Linux ]] 00:05:40.303 15:47:08 -- spdk/autotest.sh@132 -- # nvme_namespace_revert 00:05:40.303 15:47:08 -- common/autotest_common.sh@1531 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:05:42.859 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:05:42.859 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:05:42.859 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:05:42.859 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:05:42.859 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:05:42.859 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:05:42.859 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:05:42.859 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:05:42.859 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:05:42.859 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:05:42.859 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:05:42.859 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:05:42.859 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:05:42.859 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:05:42.859 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:05:42.859 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:05:43.426 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:05:43.685 15:47:12 -- common/autotest_common.sh@1532 -- # sleep 1 00:05:44.621 15:47:13 -- common/autotest_common.sh@1533 -- # bdfs=() 00:05:44.621 15:47:13 -- common/autotest_common.sh@1533 -- # local bdfs 00:05:44.621 15:47:13 -- common/autotest_common.sh@1534 -- # bdfs=($(get_nvme_bdfs)) 00:05:44.621 15:47:13 -- common/autotest_common.sh@1534 -- # get_nvme_bdfs 00:05:44.621 15:47:13 -- common/autotest_common.sh@1513 -- # bdfs=() 00:05:44.621 15:47:13 -- common/autotest_common.sh@1513 -- # local bdfs 00:05:44.621 15:47:13 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:44.621 15:47:13 -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:05:44.621 15:47:13 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:05:44.621 15:47:13 -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:05:44.621 15:47:13 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:5e:00.0 00:05:44.621 15:47:13 -- common/autotest_common.sh@1536 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:05:47.152 Waiting for block devices as requested 00:05:47.152 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:05:47.152 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:05:47.152 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:05:47.152 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:05:47.411 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:05:47.411 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:05:47.411 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:05:47.669 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:05:47.669 0000:00:04.0 (8086 2021): 
vfio-pci -> ioatdma 00:05:47.669 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:05:47.669 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:05:47.927 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:05:47.927 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:05:47.927 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:05:47.927 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:05:48.185 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:05:48.185 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:05:48.185 15:47:17 -- common/autotest_common.sh@1538 -- # for bdf in "${bdfs[@]}" 00:05:48.185 15:47:17 -- common/autotest_common.sh@1539 -- # get_nvme_ctrlr_from_bdf 0000:5e:00.0 00:05:48.185 15:47:17 -- common/autotest_common.sh@1502 -- # readlink -f /sys/class/nvme/nvme0 00:05:48.185 15:47:17 -- common/autotest_common.sh@1502 -- # grep 0000:5e:00.0/nvme/nvme 00:05:48.185 15:47:17 -- common/autotest_common.sh@1502 -- # bdf_sysfs_path=/sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 00:05:48.185 15:47:17 -- common/autotest_common.sh@1503 -- # [[ -z /sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 ]] 00:05:48.185 15:47:17 -- common/autotest_common.sh@1507 -- # basename /sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 00:05:48.186 15:47:17 -- common/autotest_common.sh@1507 -- # printf '%s\n' nvme0 00:05:48.186 15:47:17 -- common/autotest_common.sh@1539 -- # nvme_ctrlr=/dev/nvme0 00:05:48.186 15:47:17 -- common/autotest_common.sh@1540 -- # [[ -z /dev/nvme0 ]] 00:05:48.186 15:47:17 -- common/autotest_common.sh@1545 -- # nvme id-ctrl /dev/nvme0 00:05:48.186 15:47:17 -- common/autotest_common.sh@1545 -- # grep oacs 00:05:48.186 15:47:17 -- common/autotest_common.sh@1545 -- # cut -d: -f2 00:05:48.186 15:47:17 -- common/autotest_common.sh@1545 -- # oacs=' 0xe' 00:05:48.186 15:47:17 -- common/autotest_common.sh@1546 -- # oacs_ns_manage=8 00:05:48.186 15:47:17 -- common/autotest_common.sh@1548 -- # [[ 8 -ne 0 ]] 00:05:48.444 15:47:17 -- common/autotest_common.sh@1554 -- # nvme id-ctrl /dev/nvme0 00:05:48.444 15:47:17 -- common/autotest_common.sh@1554 -- # grep unvmcap 00:05:48.444 15:47:17 -- common/autotest_common.sh@1554 -- # cut -d: -f2 00:05:48.444 15:47:17 -- common/autotest_common.sh@1554 -- # unvmcap=' 0' 00:05:48.444 15:47:17 -- common/autotest_common.sh@1555 -- # [[ 0 -eq 0 ]] 00:05:48.444 15:47:17 -- common/autotest_common.sh@1557 -- # continue 00:05:48.444 15:47:17 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:05:48.444 15:47:17 -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:48.444 15:47:17 -- common/autotest_common.sh@10 -- # set +x 00:05:48.444 15:47:17 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:05:48.444 15:47:17 -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:48.444 15:47:17 -- common/autotest_common.sh@10 -- # set +x 00:05:48.444 15:47:17 -- spdk/autotest.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:05:50.973 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:05:50.973 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:05:50.973 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:05:50.973 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:05:50.973 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:05:50.973 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:05:50.973 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:05:50.973 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:05:50.973 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:05:50.973 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 
00:05:50.973 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:05:50.973 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:05:50.973 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:05:50.973 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:05:50.973 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:05:50.973 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:05:51.540 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:05:51.798 15:47:20 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:05:51.798 15:47:20 -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:51.798 15:47:20 -- common/autotest_common.sh@10 -- # set +x 00:05:51.798 15:47:20 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:05:51.798 15:47:20 -- common/autotest_common.sh@1591 -- # mapfile -t bdfs 00:05:51.798 15:47:20 -- common/autotest_common.sh@1591 -- # get_nvme_bdfs_by_id 0x0a54 00:05:51.798 15:47:20 -- common/autotest_common.sh@1577 -- # bdfs=() 00:05:51.798 15:47:20 -- common/autotest_common.sh@1577 -- # local bdfs 00:05:51.798 15:47:20 -- common/autotest_common.sh@1579 -- # get_nvme_bdfs 00:05:51.798 15:47:20 -- common/autotest_common.sh@1513 -- # bdfs=() 00:05:51.798 15:47:20 -- common/autotest_common.sh@1513 -- # local bdfs 00:05:51.798 15:47:20 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:51.798 15:47:20 -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:05:51.798 15:47:20 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:05:51.798 15:47:20 -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:05:51.798 15:47:20 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:5e:00.0 00:05:51.798 15:47:20 -- common/autotest_common.sh@1579 -- # for bdf in $(get_nvme_bdfs) 00:05:51.798 15:47:20 -- common/autotest_common.sh@1580 -- # cat /sys/bus/pci/devices/0000:5e:00.0/device 00:05:51.798 15:47:20 -- common/autotest_common.sh@1580 -- # device=0x0a54 00:05:51.798 15:47:20 -- common/autotest_common.sh@1581 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:05:51.798 15:47:20 -- common/autotest_common.sh@1582 -- # bdfs+=($bdf) 00:05:51.798 15:47:20 -- common/autotest_common.sh@1586 -- # printf '%s\n' 0000:5e:00.0 00:05:51.798 15:47:20 -- common/autotest_common.sh@1592 -- # [[ -z 0000:5e:00.0 ]] 00:05:51.798 15:47:20 -- common/autotest_common.sh@1597 -- # spdk_tgt_pid=3579520 00:05:51.798 15:47:20 -- common/autotest_common.sh@1598 -- # waitforlisten 3579520 00:05:51.798 15:47:20 -- common/autotest_common.sh@1596 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:51.798 15:47:20 -- common/autotest_common.sh@829 -- # '[' -z 3579520 ']' 00:05:51.798 15:47:20 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:51.798 15:47:20 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:51.798 15:47:20 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:51.798 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:51.798 15:47:20 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:51.798 15:47:20 -- common/autotest_common.sh@10 -- # set +x 00:05:51.798 [2024-07-15 15:47:20.678106] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 
00:05:51.798 [2024-07-15 15:47:20.678150] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3579520 ] 00:05:51.798 EAL: No free 2048 kB hugepages reported on node 1 00:05:52.055 [2024-07-15 15:47:20.732648] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:52.055 [2024-07-15 15:47:20.807992] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:52.616 15:47:21 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:52.616 15:47:21 -- common/autotest_common.sh@862 -- # return 0 00:05:52.616 15:47:21 -- common/autotest_common.sh@1600 -- # bdf_id=0 00:05:52.616 15:47:21 -- common/autotest_common.sh@1601 -- # for bdf in "${bdfs[@]}" 00:05:52.616 15:47:21 -- common/autotest_common.sh@1602 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:5e:00.0 00:05:55.896 nvme0n1 00:05:55.896 15:47:24 -- common/autotest_common.sh@1604 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:05:55.896 [2024-07-15 15:47:24.611862] vbdev_opal_rpc.c: 125:rpc_bdev_nvme_opal_revert: *ERROR*: nvme0 not support opal 00:05:55.896 request: 00:05:55.896 { 00:05:55.896 "nvme_ctrlr_name": "nvme0", 00:05:55.896 "password": "test", 00:05:55.896 "method": "bdev_nvme_opal_revert", 00:05:55.896 "req_id": 1 00:05:55.896 } 00:05:55.896 Got JSON-RPC error response 00:05:55.896 response: 00:05:55.896 { 00:05:55.896 "code": -32602, 00:05:55.896 "message": "Invalid parameters" 00:05:55.896 } 00:05:55.896 15:47:24 -- common/autotest_common.sh@1604 -- # true 00:05:55.896 15:47:24 -- common/autotest_common.sh@1605 -- # (( ++bdf_id )) 00:05:55.896 15:47:24 -- common/autotest_common.sh@1608 -- # killprocess 3579520 00:05:55.896 15:47:24 -- common/autotest_common.sh@948 -- # '[' -z 3579520 ']' 00:05:55.896 15:47:24 -- common/autotest_common.sh@952 -- # kill -0 3579520 00:05:55.896 15:47:24 -- common/autotest_common.sh@953 -- # uname 00:05:55.896 15:47:24 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:55.896 15:47:24 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3579520 00:05:55.896 15:47:24 -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:55.896 15:47:24 -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:55.896 15:47:24 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3579520' 00:05:55.896 killing process with pid 3579520 00:05:55.896 15:47:24 -- common/autotest_common.sh@967 -- # kill 3579520 00:05:55.896 15:47:24 -- common/autotest_common.sh@972 -- # wait 3579520 00:05:57.805 15:47:26 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']' 00:05:57.806 15:47:26 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:05:57.806 15:47:26 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:05:57.806 15:47:26 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:05:57.806 15:47:26 -- spdk/autotest.sh@162 -- # timing_enter lib 00:05:57.806 15:47:26 -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:57.806 15:47:26 -- common/autotest_common.sh@10 -- # set +x 00:05:57.806 15:47:26 -- spdk/autotest.sh@164 -- # [[ 0 -eq 1 ]] 00:05:57.806 15:47:26 -- spdk/autotest.sh@168 -- # run_test env /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:05:57.806 15:47:26 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 
']' 00:05:57.806 15:47:26 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:57.806 15:47:26 -- common/autotest_common.sh@10 -- # set +x 00:05:57.806 ************************************ 00:05:57.806 START TEST env 00:05:57.806 ************************************ 00:05:57.806 15:47:26 env -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:05:57.806 * Looking for test storage... 00:05:57.806 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:05:57.806 15:47:26 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:05:57.806 15:47:26 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:57.806 15:47:26 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:57.806 15:47:26 env -- common/autotest_common.sh@10 -- # set +x 00:05:57.806 ************************************ 00:05:57.806 START TEST env_memory 00:05:57.806 ************************************ 00:05:57.806 15:47:26 env.env_memory -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:05:57.806 00:05:57.806 00:05:57.806 CUnit - A unit testing framework for C - Version 2.1-3 00:05:57.806 http://cunit.sourceforge.net/ 00:05:57.806 00:05:57.806 00:05:57.806 Suite: memory 00:05:57.806 Test: alloc and free memory map ...[2024-07-15 15:47:26.480101] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:05:57.806 passed 00:05:57.806 Test: mem map translation ...[2024-07-15 15:47:26.499084] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:05:57.806 [2024-07-15 15:47:26.499097] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:05:57.806 [2024-07-15 15:47:26.499130] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:05:57.806 [2024-07-15 15:47:26.499153] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:05:57.806 passed 00:05:57.806 Test: mem map registration ...[2024-07-15 15:47:26.535858] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:05:57.806 [2024-07-15 15:47:26.535874] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:05:57.806 passed 00:05:57.806 Test: mem map adjacent registrations ...passed 00:05:57.806 00:05:57.806 Run Summary: Type Total Ran Passed Failed Inactive 00:05:57.806 suites 1 1 n/a 0 0 00:05:57.806 tests 4 4 4 0 0 00:05:57.806 asserts 152 152 152 0 n/a 00:05:57.806 00:05:57.806 Elapsed time = 0.137 seconds 00:05:57.806 00:05:57.806 real 0m0.149s 00:05:57.806 user 0m0.144s 00:05:57.806 sys 0m0.005s 00:05:57.806 15:47:26 env.env_memory -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:57.806 15:47:26 env.env_memory -- common/autotest_common.sh@10 -- # 
set +x 00:05:57.806 ************************************ 00:05:57.806 END TEST env_memory 00:05:57.806 ************************************ 00:05:57.806 15:47:26 env -- common/autotest_common.sh@1142 -- # return 0 00:05:57.806 15:47:26 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:05:57.806 15:47:26 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:57.806 15:47:26 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:57.806 15:47:26 env -- common/autotest_common.sh@10 -- # set +x 00:05:57.806 ************************************ 00:05:57.806 START TEST env_vtophys 00:05:57.806 ************************************ 00:05:57.806 15:47:26 env.env_vtophys -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:05:57.806 EAL: lib.eal log level changed from notice to debug 00:05:57.806 EAL: Detected lcore 0 as core 0 on socket 0 00:05:57.806 EAL: Detected lcore 1 as core 1 on socket 0 00:05:57.806 EAL: Detected lcore 2 as core 2 on socket 0 00:05:57.806 EAL: Detected lcore 3 as core 3 on socket 0 00:05:57.806 EAL: Detected lcore 4 as core 4 on socket 0 00:05:57.806 EAL: Detected lcore 5 as core 5 on socket 0 00:05:57.806 EAL: Detected lcore 6 as core 6 on socket 0 00:05:57.806 EAL: Detected lcore 7 as core 8 on socket 0 00:05:57.806 EAL: Detected lcore 8 as core 9 on socket 0 00:05:57.806 EAL: Detected lcore 9 as core 10 on socket 0 00:05:57.806 EAL: Detected lcore 10 as core 11 on socket 0 00:05:57.806 EAL: Detected lcore 11 as core 12 on socket 0 00:05:57.806 EAL: Detected lcore 12 as core 13 on socket 0 00:05:57.806 EAL: Detected lcore 13 as core 16 on socket 0 00:05:57.806 EAL: Detected lcore 14 as core 17 on socket 0 00:05:57.806 EAL: Detected lcore 15 as core 18 on socket 0 00:05:57.806 EAL: Detected lcore 16 as core 19 on socket 0 00:05:57.806 EAL: Detected lcore 17 as core 20 on socket 0 00:05:57.806 EAL: Detected lcore 18 as core 21 on socket 0 00:05:57.806 EAL: Detected lcore 19 as core 25 on socket 0 00:05:57.806 EAL: Detected lcore 20 as core 26 on socket 0 00:05:57.806 EAL: Detected lcore 21 as core 27 on socket 0 00:05:57.806 EAL: Detected lcore 22 as core 28 on socket 0 00:05:57.806 EAL: Detected lcore 23 as core 29 on socket 0 00:05:57.806 EAL: Detected lcore 24 as core 0 on socket 1 00:05:57.806 EAL: Detected lcore 25 as core 1 on socket 1 00:05:57.806 EAL: Detected lcore 26 as core 2 on socket 1 00:05:57.806 EAL: Detected lcore 27 as core 3 on socket 1 00:05:57.806 EAL: Detected lcore 28 as core 4 on socket 1 00:05:57.806 EAL: Detected lcore 29 as core 5 on socket 1 00:05:57.806 EAL: Detected lcore 30 as core 6 on socket 1 00:05:57.806 EAL: Detected lcore 31 as core 9 on socket 1 00:05:57.806 EAL: Detected lcore 32 as core 10 on socket 1 00:05:57.806 EAL: Detected lcore 33 as core 11 on socket 1 00:05:57.806 EAL: Detected lcore 34 as core 12 on socket 1 00:05:57.806 EAL: Detected lcore 35 as core 13 on socket 1 00:05:57.806 EAL: Detected lcore 36 as core 16 on socket 1 00:05:57.806 EAL: Detected lcore 37 as core 17 on socket 1 00:05:57.806 EAL: Detected lcore 38 as core 18 on socket 1 00:05:57.806 EAL: Detected lcore 39 as core 19 on socket 1 00:05:57.806 EAL: Detected lcore 40 as core 20 on socket 1 00:05:57.806 EAL: Detected lcore 41 as core 21 on socket 1 00:05:57.806 EAL: Detected lcore 42 as core 24 on socket 1 00:05:57.806 EAL: Detected lcore 43 as core 25 on socket 1 00:05:57.806 EAL: Detected lcore 44 as core 
26 on socket 1 00:05:57.806 EAL: Detected lcore 45 as core 27 on socket 1 00:05:57.806 EAL: Detected lcore 46 as core 28 on socket 1 00:05:57.806 EAL: Detected lcore 47 as core 29 on socket 1 00:05:57.806 EAL: Detected lcore 48 as core 0 on socket 0 00:05:57.806 EAL: Detected lcore 49 as core 1 on socket 0 00:05:57.806 EAL: Detected lcore 50 as core 2 on socket 0 00:05:57.806 EAL: Detected lcore 51 as core 3 on socket 0 00:05:57.806 EAL: Detected lcore 52 as core 4 on socket 0 00:05:57.806 EAL: Detected lcore 53 as core 5 on socket 0 00:05:57.806 EAL: Detected lcore 54 as core 6 on socket 0 00:05:57.806 EAL: Detected lcore 55 as core 8 on socket 0 00:05:57.806 EAL: Detected lcore 56 as core 9 on socket 0 00:05:57.806 EAL: Detected lcore 57 as core 10 on socket 0 00:05:57.806 EAL: Detected lcore 58 as core 11 on socket 0 00:05:57.806 EAL: Detected lcore 59 as core 12 on socket 0 00:05:57.806 EAL: Detected lcore 60 as core 13 on socket 0 00:05:57.806 EAL: Detected lcore 61 as core 16 on socket 0 00:05:57.806 EAL: Detected lcore 62 as core 17 on socket 0 00:05:57.806 EAL: Detected lcore 63 as core 18 on socket 0 00:05:57.806 EAL: Detected lcore 64 as core 19 on socket 0 00:05:57.806 EAL: Detected lcore 65 as core 20 on socket 0 00:05:57.806 EAL: Detected lcore 66 as core 21 on socket 0 00:05:57.806 EAL: Detected lcore 67 as core 25 on socket 0 00:05:57.806 EAL: Detected lcore 68 as core 26 on socket 0 00:05:57.806 EAL: Detected lcore 69 as core 27 on socket 0 00:05:57.806 EAL: Detected lcore 70 as core 28 on socket 0 00:05:57.806 EAL: Detected lcore 71 as core 29 on socket 0 00:05:57.806 EAL: Detected lcore 72 as core 0 on socket 1 00:05:57.806 EAL: Detected lcore 73 as core 1 on socket 1 00:05:57.806 EAL: Detected lcore 74 as core 2 on socket 1 00:05:57.806 EAL: Detected lcore 75 as core 3 on socket 1 00:05:57.806 EAL: Detected lcore 76 as core 4 on socket 1 00:05:57.806 EAL: Detected lcore 77 as core 5 on socket 1 00:05:57.806 EAL: Detected lcore 78 as core 6 on socket 1 00:05:57.806 EAL: Detected lcore 79 as core 9 on socket 1 00:05:57.806 EAL: Detected lcore 80 as core 10 on socket 1 00:05:57.806 EAL: Detected lcore 81 as core 11 on socket 1 00:05:57.806 EAL: Detected lcore 82 as core 12 on socket 1 00:05:57.806 EAL: Detected lcore 83 as core 13 on socket 1 00:05:57.806 EAL: Detected lcore 84 as core 16 on socket 1 00:05:57.806 EAL: Detected lcore 85 as core 17 on socket 1 00:05:57.806 EAL: Detected lcore 86 as core 18 on socket 1 00:05:57.806 EAL: Detected lcore 87 as core 19 on socket 1 00:05:57.806 EAL: Detected lcore 88 as core 20 on socket 1 00:05:57.806 EAL: Detected lcore 89 as core 21 on socket 1 00:05:57.806 EAL: Detected lcore 90 as core 24 on socket 1 00:05:57.806 EAL: Detected lcore 91 as core 25 on socket 1 00:05:57.806 EAL: Detected lcore 92 as core 26 on socket 1 00:05:57.806 EAL: Detected lcore 93 as core 27 on socket 1 00:05:57.806 EAL: Detected lcore 94 as core 28 on socket 1 00:05:57.806 EAL: Detected lcore 95 as core 29 on socket 1 00:05:57.806 EAL: Maximum logical cores by configuration: 128 00:05:57.806 EAL: Detected CPU lcores: 96 00:05:57.806 EAL: Detected NUMA nodes: 2 00:05:57.806 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:05:57.806 EAL: Detected shared linkage of DPDK 00:05:57.806 EAL: No shared files mode enabled, IPC will be disabled 00:05:57.806 EAL: Bus pci wants IOVA as 'DC' 00:05:57.806 EAL: Buses did not request a specific IOVA mode. 00:05:57.806 EAL: IOMMU is available, selecting IOVA as VA mode. 
00:05:57.806 EAL: Selected IOVA mode 'VA' 00:05:57.806 EAL: No free 2048 kB hugepages reported on node 1 00:05:57.806 EAL: Probing VFIO support... 00:05:57.806 EAL: IOMMU type 1 (Type 1) is supported 00:05:57.806 EAL: IOMMU type 7 (sPAPR) is not supported 00:05:57.806 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:05:57.806 EAL: VFIO support initialized 00:05:57.806 EAL: Ask a virtual area of 0x2e000 bytes 00:05:57.806 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:05:57.807 EAL: Setting up physically contiguous memory... 00:05:57.807 EAL: Setting maximum number of open files to 524288 00:05:57.807 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:05:57.807 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:05:57.807 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:05:57.807 EAL: Ask a virtual area of 0x61000 bytes 00:05:57.807 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:05:57.807 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:57.807 EAL: Ask a virtual area of 0x400000000 bytes 00:05:57.807 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:05:57.807 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:05:57.807 EAL: Ask a virtual area of 0x61000 bytes 00:05:57.807 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:05:57.807 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:57.807 EAL: Ask a virtual area of 0x400000000 bytes 00:05:57.807 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:05:57.807 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:05:57.807 EAL: Ask a virtual area of 0x61000 bytes 00:05:57.807 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:05:57.807 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:57.807 EAL: Ask a virtual area of 0x400000000 bytes 00:05:57.807 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:05:57.807 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:05:57.807 EAL: Ask a virtual area of 0x61000 bytes 00:05:57.807 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:05:57.807 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:57.807 EAL: Ask a virtual area of 0x400000000 bytes 00:05:57.807 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:05:57.807 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:05:57.807 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:05:57.807 EAL: Ask a virtual area of 0x61000 bytes 00:05:57.807 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:05:57.807 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:57.807 EAL: Ask a virtual area of 0x400000000 bytes 00:05:57.807 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:05:57.807 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:05:57.807 EAL: Ask a virtual area of 0x61000 bytes 00:05:57.807 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:05:57.807 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:57.807 EAL: Ask a virtual area of 0x400000000 bytes 00:05:57.807 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:05:57.807 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:05:57.807 EAL: Ask a virtual area of 0x61000 bytes 00:05:57.807 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:05:57.807 EAL: Memseg list 
allocated at socket 1, page size 0x800kB 00:05:57.807 EAL: Ask a virtual area of 0x400000000 bytes 00:05:57.807 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:05:57.807 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:05:57.807 EAL: Ask a virtual area of 0x61000 bytes 00:05:57.807 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:05:57.807 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:57.807 EAL: Ask a virtual area of 0x400000000 bytes 00:05:57.807 EAL: Virtual area found at 0x201c01000000 (size = 0x400000000) 00:05:57.807 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:05:57.807 EAL: Hugepages will be freed exactly as allocated. 00:05:57.807 EAL: No shared files mode enabled, IPC is disabled 00:05:57.807 EAL: No shared files mode enabled, IPC is disabled 00:05:57.807 EAL: TSC frequency is ~2300000 KHz 00:05:57.807 EAL: Main lcore 0 is ready (tid=7ff90e279a00;cpuset=[0]) 00:05:57.807 EAL: Trying to obtain current memory policy. 00:05:57.807 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:57.807 EAL: Restoring previous memory policy: 0 00:05:57.807 EAL: request: mp_malloc_sync 00:05:57.807 EAL: No shared files mode enabled, IPC is disabled 00:05:57.807 EAL: Heap on socket 0 was expanded by 2MB 00:05:57.807 EAL: No shared files mode enabled, IPC is disabled 00:05:57.807 EAL: No PCI address specified using 'addr=' in: bus=pci 00:05:57.807 EAL: Mem event callback 'spdk:(nil)' registered 00:05:57.807 00:05:57.807 00:05:57.807 CUnit - A unit testing framework for C - Version 2.1-3 00:05:57.807 http://cunit.sourceforge.net/ 00:05:57.807 00:05:57.807 00:05:57.807 Suite: components_suite 00:05:57.807 Test: vtophys_malloc_test ...passed 00:05:57.807 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:05:57.807 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:57.807 EAL: Restoring previous memory policy: 4 00:05:57.807 EAL: Calling mem event callback 'spdk:(nil)' 00:05:57.807 EAL: request: mp_malloc_sync 00:05:57.807 EAL: No shared files mode enabled, IPC is disabled 00:05:57.807 EAL: Heap on socket 0 was expanded by 4MB 00:05:57.807 EAL: Calling mem event callback 'spdk:(nil)' 00:05:57.807 EAL: request: mp_malloc_sync 00:05:57.807 EAL: No shared files mode enabled, IPC is disabled 00:05:57.807 EAL: Heap on socket 0 was shrunk by 4MB 00:05:57.807 EAL: Trying to obtain current memory policy. 00:05:57.807 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:57.807 EAL: Restoring previous memory policy: 4 00:05:57.807 EAL: Calling mem event callback 'spdk:(nil)' 00:05:57.807 EAL: request: mp_malloc_sync 00:05:57.807 EAL: No shared files mode enabled, IPC is disabled 00:05:57.807 EAL: Heap on socket 0 was expanded by 6MB 00:05:57.807 EAL: Calling mem event callback 'spdk:(nil)' 00:05:57.807 EAL: request: mp_malloc_sync 00:05:57.807 EAL: No shared files mode enabled, IPC is disabled 00:05:57.807 EAL: Heap on socket 0 was shrunk by 6MB 00:05:57.807 EAL: Trying to obtain current memory policy. 
00:05:57.807 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:57.807 EAL: Restoring previous memory policy: 4 00:05:57.807 EAL: Calling mem event callback 'spdk:(nil)' 00:05:57.807 EAL: request: mp_malloc_sync 00:05:57.807 EAL: No shared files mode enabled, IPC is disabled 00:05:57.807 EAL: Heap on socket 0 was expanded by 10MB 00:05:57.807 EAL: Calling mem event callback 'spdk:(nil)' 00:05:57.807 EAL: request: mp_malloc_sync 00:05:57.807 EAL: No shared files mode enabled, IPC is disabled 00:05:57.807 EAL: Heap on socket 0 was shrunk by 10MB 00:05:57.807 EAL: Trying to obtain current memory policy. 00:05:57.807 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:58.066 EAL: Restoring previous memory policy: 4 00:05:58.066 EAL: Calling mem event callback 'spdk:(nil)' 00:05:58.066 EAL: request: mp_malloc_sync 00:05:58.066 EAL: No shared files mode enabled, IPC is disabled 00:05:58.066 EAL: Heap on socket 0 was expanded by 18MB 00:05:58.066 EAL: Calling mem event callback 'spdk:(nil)' 00:05:58.066 EAL: request: mp_malloc_sync 00:05:58.066 EAL: No shared files mode enabled, IPC is disabled 00:05:58.066 EAL: Heap on socket 0 was shrunk by 18MB 00:05:58.066 EAL: Trying to obtain current memory policy. 00:05:58.066 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:58.066 EAL: Restoring previous memory policy: 4 00:05:58.066 EAL: Calling mem event callback 'spdk:(nil)' 00:05:58.066 EAL: request: mp_malloc_sync 00:05:58.066 EAL: No shared files mode enabled, IPC is disabled 00:05:58.066 EAL: Heap on socket 0 was expanded by 34MB 00:05:58.066 EAL: Calling mem event callback 'spdk:(nil)' 00:05:58.066 EAL: request: mp_malloc_sync 00:05:58.066 EAL: No shared files mode enabled, IPC is disabled 00:05:58.066 EAL: Heap on socket 0 was shrunk by 34MB 00:05:58.066 EAL: Trying to obtain current memory policy. 00:05:58.066 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:58.066 EAL: Restoring previous memory policy: 4 00:05:58.066 EAL: Calling mem event callback 'spdk:(nil)' 00:05:58.066 EAL: request: mp_malloc_sync 00:05:58.066 EAL: No shared files mode enabled, IPC is disabled 00:05:58.066 EAL: Heap on socket 0 was expanded by 66MB 00:05:58.066 EAL: Calling mem event callback 'spdk:(nil)' 00:05:58.066 EAL: request: mp_malloc_sync 00:05:58.066 EAL: No shared files mode enabled, IPC is disabled 00:05:58.066 EAL: Heap on socket 0 was shrunk by 66MB 00:05:58.066 EAL: Trying to obtain current memory policy. 00:05:58.066 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:58.066 EAL: Restoring previous memory policy: 4 00:05:58.066 EAL: Calling mem event callback 'spdk:(nil)' 00:05:58.066 EAL: request: mp_malloc_sync 00:05:58.066 EAL: No shared files mode enabled, IPC is disabled 00:05:58.066 EAL: Heap on socket 0 was expanded by 130MB 00:05:58.066 EAL: Calling mem event callback 'spdk:(nil)' 00:05:58.066 EAL: request: mp_malloc_sync 00:05:58.066 EAL: No shared files mode enabled, IPC is disabled 00:05:58.066 EAL: Heap on socket 0 was shrunk by 130MB 00:05:58.066 EAL: Trying to obtain current memory policy. 
00:05:58.066 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:58.066 EAL: Restoring previous memory policy: 4 00:05:58.066 EAL: Calling mem event callback 'spdk:(nil)' 00:05:58.066 EAL: request: mp_malloc_sync 00:05:58.066 EAL: No shared files mode enabled, IPC is disabled 00:05:58.066 EAL: Heap on socket 0 was expanded by 258MB 00:05:58.066 EAL: Calling mem event callback 'spdk:(nil)' 00:05:58.066 EAL: request: mp_malloc_sync 00:05:58.066 EAL: No shared files mode enabled, IPC is disabled 00:05:58.066 EAL: Heap on socket 0 was shrunk by 258MB 00:05:58.066 EAL: Trying to obtain current memory policy. 00:05:58.066 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:58.364 EAL: Restoring previous memory policy: 4 00:05:58.364 EAL: Calling mem event callback 'spdk:(nil)' 00:05:58.364 EAL: request: mp_malloc_sync 00:05:58.364 EAL: No shared files mode enabled, IPC is disabled 00:05:58.364 EAL: Heap on socket 0 was expanded by 514MB 00:05:58.364 EAL: Calling mem event callback 'spdk:(nil)' 00:05:58.364 EAL: request: mp_malloc_sync 00:05:58.364 EAL: No shared files mode enabled, IPC is disabled 00:05:58.364 EAL: Heap on socket 0 was shrunk by 514MB 00:05:58.364 EAL: Trying to obtain current memory policy. 00:05:58.364 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:58.622 EAL: Restoring previous memory policy: 4 00:05:58.622 EAL: Calling mem event callback 'spdk:(nil)' 00:05:58.622 EAL: request: mp_malloc_sync 00:05:58.622 EAL: No shared files mode enabled, IPC is disabled 00:05:58.622 EAL: Heap on socket 0 was expanded by 1026MB 00:05:58.880 EAL: Calling mem event callback 'spdk:(nil)' 00:05:58.880 EAL: request: mp_malloc_sync 00:05:58.880 EAL: No shared files mode enabled, IPC is disabled 00:05:58.880 EAL: Heap on socket 0 was shrunk by 1026MB 00:05:58.880 passed 00:05:58.880 00:05:58.880 Run Summary: Type Total Ran Passed Failed Inactive 00:05:58.880 suites 1 1 n/a 0 0 00:05:58.880 tests 2 2 2 0 0 00:05:58.880 asserts 497 497 497 0 n/a 00:05:58.880 00:05:58.880 Elapsed time = 0.966 seconds 00:05:58.880 EAL: Calling mem event callback 'spdk:(nil)' 00:05:58.880 EAL: request: mp_malloc_sync 00:05:58.880 EAL: No shared files mode enabled, IPC is disabled 00:05:58.880 EAL: Heap on socket 0 was shrunk by 2MB 00:05:58.880 EAL: No shared files mode enabled, IPC is disabled 00:05:58.880 EAL: No shared files mode enabled, IPC is disabled 00:05:58.880 EAL: No shared files mode enabled, IPC is disabled 00:05:58.880 00:05:58.880 real 0m1.072s 00:05:58.880 user 0m0.628s 00:05:58.880 sys 0m0.421s 00:05:58.880 15:47:27 env.env_vtophys -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:58.880 15:47:27 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:05:58.880 ************************************ 00:05:58.880 END TEST env_vtophys 00:05:58.880 ************************************ 00:05:58.880 15:47:27 env -- common/autotest_common.sh@1142 -- # return 0 00:05:58.880 15:47:27 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:05:58.880 15:47:27 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:58.880 15:47:27 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:58.880 15:47:27 env -- common/autotest_common.sh@10 -- # set +x 00:05:58.880 ************************************ 00:05:58.880 START TEST env_pci 00:05:58.880 ************************************ 00:05:58.880 15:47:27 env.env_pci -- common/autotest_common.sh@1123 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:05:58.880 00:05:58.880 00:05:58.880 CUnit - A unit testing framework for C - Version 2.1-3 00:05:58.880 http://cunit.sourceforge.net/ 00:05:58.880 00:05:58.880 00:05:58.880 Suite: pci 00:05:58.880 Test: pci_hook ...[2024-07-15 15:47:27.812192] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 3580940 has claimed it 00:05:59.139 EAL: Cannot find device (10000:00:01.0) 00:05:59.139 EAL: Failed to attach device on primary process 00:05:59.139 passed 00:05:59.139 00:05:59.140 Run Summary: Type Total Ran Passed Failed Inactive 00:05:59.140 suites 1 1 n/a 0 0 00:05:59.140 tests 1 1 1 0 0 00:05:59.140 asserts 25 25 25 0 n/a 00:05:59.140 00:05:59.140 Elapsed time = 0.029 seconds 00:05:59.140 00:05:59.140 real 0m0.049s 00:05:59.140 user 0m0.019s 00:05:59.140 sys 0m0.030s 00:05:59.140 15:47:27 env.env_pci -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:59.140 15:47:27 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:05:59.140 ************************************ 00:05:59.140 END TEST env_pci 00:05:59.140 ************************************ 00:05:59.140 15:47:27 env -- common/autotest_common.sh@1142 -- # return 0 00:05:59.140 15:47:27 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:05:59.140 15:47:27 env -- env/env.sh@15 -- # uname 00:05:59.140 15:47:27 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:05:59.140 15:47:27 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:05:59.140 15:47:27 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:59.140 15:47:27 env -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:05:59.140 15:47:27 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:59.140 15:47:27 env -- common/autotest_common.sh@10 -- # set +x 00:05:59.140 ************************************ 00:05:59.140 START TEST env_dpdk_post_init 00:05:59.140 ************************************ 00:05:59.140 15:47:27 env.env_dpdk_post_init -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:59.140 EAL: Detected CPU lcores: 96 00:05:59.140 EAL: Detected NUMA nodes: 2 00:05:59.140 EAL: Detected shared linkage of DPDK 00:05:59.140 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:59.140 EAL: Selected IOVA mode 'VA' 00:05:59.140 EAL: No free 2048 kB hugepages reported on node 1 00:05:59.140 EAL: VFIO support initialized 00:05:59.140 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:59.140 EAL: Using IOMMU type 1 (Type 1) 00:05:59.140 EAL: Ignore mapping IO port bar(1) 00:05:59.140 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.0 (socket 0) 00:05:59.140 EAL: Ignore mapping IO port bar(1) 00:05:59.140 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.1 (socket 0) 00:05:59.140 EAL: Ignore mapping IO port bar(1) 00:05:59.140 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.2 (socket 0) 00:05:59.399 EAL: Ignore mapping IO port bar(1) 00:05:59.399 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.3 (socket 0) 00:05:59.399 EAL: Ignore mapping IO port bar(1) 00:05:59.399 EAL: Probe PCI driver: spdk_ioat (8086:2021) 
device: 0000:00:04.4 (socket 0) 00:05:59.399 EAL: Ignore mapping IO port bar(1) 00:05:59.399 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.5 (socket 0) 00:05:59.400 EAL: Ignore mapping IO port bar(1) 00:05:59.400 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.6 (socket 0) 00:05:59.400 EAL: Ignore mapping IO port bar(1) 00:05:59.400 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.7 (socket 0) 00:05:59.966 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:5e:00.0 (socket 0) 00:05:59.966 EAL: Ignore mapping IO port bar(1) 00:05:59.966 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.0 (socket 1) 00:05:59.966 EAL: Ignore mapping IO port bar(1) 00:05:59.966 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.1 (socket 1) 00:06:00.225 EAL: Ignore mapping IO port bar(1) 00:06:00.225 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.2 (socket 1) 00:06:00.225 EAL: Ignore mapping IO port bar(1) 00:06:00.225 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.3 (socket 1) 00:06:00.225 EAL: Ignore mapping IO port bar(1) 00:06:00.225 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.4 (socket 1) 00:06:00.225 EAL: Ignore mapping IO port bar(1) 00:06:00.225 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.5 (socket 1) 00:06:00.225 EAL: Ignore mapping IO port bar(1) 00:06:00.225 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.6 (socket 1) 00:06:00.225 EAL: Ignore mapping IO port bar(1) 00:06:00.225 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.7 (socket 1) 00:06:03.570 EAL: Releasing PCI mapped resource for 0000:5e:00.0 00:06:03.570 EAL: Calling pci_unmap_resource for 0000:5e:00.0 at 0x202001020000 00:06:03.570 Starting DPDK initialization... 00:06:03.570 Starting SPDK post initialization... 00:06:03.570 SPDK NVMe probe 00:06:03.570 Attaching to 0000:5e:00.0 00:06:03.570 Attached to 0000:5e:00.0 00:06:03.570 Cleaning up... 
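The env_dpdk_post_init pass above can be reproduced outside the harness with the same EAL arguments the test passed — a minimal sketch, assuming the workspace path and device bindings shown in this log, with hugepages and vfio-pci binding already in place:

    # bind devices and reserve hugepages first, as the harness does
    sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
    # run the post-init test pinned to core 0 with the fixed base virtual address from the trace
    sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init \
        -c 0x1 --base-virtaddr=0x200000000000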
00:06:03.570 00:06:03.570 real 0m4.349s 00:06:03.570 user 0m3.294s 00:06:03.570 sys 0m0.123s 00:06:03.570 15:47:32 env.env_dpdk_post_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:03.570 15:47:32 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:06:03.570 ************************************ 00:06:03.570 END TEST env_dpdk_post_init 00:06:03.570 ************************************ 00:06:03.570 15:47:32 env -- common/autotest_common.sh@1142 -- # return 0 00:06:03.570 15:47:32 env -- env/env.sh@26 -- # uname 00:06:03.570 15:47:32 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:06:03.570 15:47:32 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:06:03.570 15:47:32 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:03.570 15:47:32 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:03.570 15:47:32 env -- common/autotest_common.sh@10 -- # set +x 00:06:03.570 ************************************ 00:06:03.570 START TEST env_mem_callbacks 00:06:03.570 ************************************ 00:06:03.570 15:47:32 env.env_mem_callbacks -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:06:03.570 EAL: Detected CPU lcores: 96 00:06:03.570 EAL: Detected NUMA nodes: 2 00:06:03.570 EAL: Detected shared linkage of DPDK 00:06:03.570 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:06:03.570 EAL: Selected IOVA mode 'VA' 00:06:03.570 EAL: No free 2048 kB hugepages reported on node 1 00:06:03.570 EAL: VFIO support initialized 00:06:03.570 TELEMETRY: No legacy callbacks, legacy socket not created 00:06:03.570 00:06:03.570 00:06:03.570 CUnit - A unit testing framework for C - Version 2.1-3 00:06:03.570 http://cunit.sourceforge.net/ 00:06:03.570 00:06:03.570 00:06:03.570 Suite: memory 00:06:03.570 Test: test ... 
00:06:03.570 register 0x200000200000 2097152 00:06:03.570 malloc 3145728 00:06:03.570 register 0x200000400000 4194304 00:06:03.570 buf 0x200000500000 len 3145728 PASSED 00:06:03.570 malloc 64 00:06:03.570 buf 0x2000004fff40 len 64 PASSED 00:06:03.570 malloc 4194304 00:06:03.570 register 0x200000800000 6291456 00:06:03.570 buf 0x200000a00000 len 4194304 PASSED 00:06:03.570 free 0x200000500000 3145728 00:06:03.570 free 0x2000004fff40 64 00:06:03.570 unregister 0x200000400000 4194304 PASSED 00:06:03.570 free 0x200000a00000 4194304 00:06:03.570 unregister 0x200000800000 6291456 PASSED 00:06:03.570 malloc 8388608 00:06:03.570 register 0x200000400000 10485760 00:06:03.570 buf 0x200000600000 len 8388608 PASSED 00:06:03.570 free 0x200000600000 8388608 00:06:03.570 unregister 0x200000400000 10485760 PASSED 00:06:03.570 passed 00:06:03.570 00:06:03.570 Run Summary: Type Total Ran Passed Failed Inactive 00:06:03.570 suites 1 1 n/a 0 0 00:06:03.570 tests 1 1 1 0 0 00:06:03.570 asserts 15 15 15 0 n/a 00:06:03.570 00:06:03.570 Elapsed time = 0.005 seconds 00:06:03.570 00:06:03.570 real 0m0.055s 00:06:03.570 user 0m0.021s 00:06:03.570 sys 0m0.034s 00:06:03.570 15:47:32 env.env_mem_callbacks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:03.570 15:47:32 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:06:03.570 ************************************ 00:06:03.570 END TEST env_mem_callbacks 00:06:03.570 ************************************ 00:06:03.570 15:47:32 env -- common/autotest_common.sh@1142 -- # return 0 00:06:03.570 00:06:03.570 real 0m6.110s 00:06:03.570 user 0m4.275s 00:06:03.570 sys 0m0.909s 00:06:03.570 15:47:32 env -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:03.570 15:47:32 env -- common/autotest_common.sh@10 -- # set +x 00:06:03.570 ************************************ 00:06:03.570 END TEST env 00:06:03.570 ************************************ 00:06:03.570 15:47:32 -- common/autotest_common.sh@1142 -- # return 0 00:06:03.570 15:47:32 -- spdk/autotest.sh@169 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:06:03.570 15:47:32 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:03.570 15:47:32 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:03.570 15:47:32 -- common/autotest_common.sh@10 -- # set +x 00:06:03.570 ************************************ 00:06:03.570 START TEST rpc 00:06:03.570 ************************************ 00:06:03.570 15:47:32 rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:06:03.829 * Looking for test storage... 00:06:03.829 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:06:03.829 15:47:32 rpc -- rpc/rpc.sh@65 -- # spdk_pid=3581798 00:06:03.829 15:47:32 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:03.829 15:47:32 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:06:03.829 15:47:32 rpc -- rpc/rpc.sh@67 -- # waitforlisten 3581798 00:06:03.829 15:47:32 rpc -- common/autotest_common.sh@829 -- # '[' -z 3581798 ']' 00:06:03.829 15:47:32 rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:03.829 15:47:32 rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:03.829 15:47:32 rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:06:03.829 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:06:03.829 15:47:32 rpc -- common/autotest_common.sh@838 -- # xtrace_disable
00:06:03.829 15:47:32 rpc -- common/autotest_common.sh@10 -- # set +x
00:06:03.829 [2024-07-15 15:47:32.631971] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization...
00:06:03.829 [2024-07-15 15:47:32.632012] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3581798 ]
00:06:03.829 EAL: No free 2048 kB hugepages reported on node 1
00:06:03.829 [2024-07-15 15:47:32.684890] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:03.829 [2024-07-15 15:47:32.758610] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified.
00:06:03.829 [2024-07-15 15:47:32.758651] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 3581798' to capture a snapshot of events at runtime.
00:06:03.829 [2024-07-15 15:47:32.758659] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:06:03.829 [2024-07-15 15:47:32.758665] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running.
00:06:03.829 [2024-07-15 15:47:32.758670] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid3581798 for offline analysis/debug.
00:06:03.829 [2024-07-15 15:47:32.758694] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:06:04.763 15:47:33 rpc -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:06:04.763 15:47:33 rpc -- common/autotest_common.sh@862 -- # return 0
00:06:04.763 15:47:33 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc
00:06:04.763 15:47:33 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc
00:06:04.763 15:47:33 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd
00:06:04.763 15:47:33 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity
00:06:04.763 15:47:33 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:06:04.763 15:47:33 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable
00:06:04.763 15:47:33 rpc -- common/autotest_common.sh@10 -- # set +x
00:06:04.763 ************************************
00:06:04.763 START TEST rpc_integrity
00:06:04.763 ************************************
00:06:04.763 15:47:33 rpc.rpc_integrity -- common/autotest_common.sh@1123 -- # rpc_integrity
00:06:04.763 15:47:33 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs
00:06:04.763 15:47:33 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable
00:06:04.763 15:47:33 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:06:04.763 15:47:33 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:06:04.763 15:47:33 rpc.rpc_integrity -- rpc/rpc.sh@12 -- #
bdevs='[]' 00:06:04.763 15:47:33 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:06:04.763 15:47:33 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:06:04.763 15:47:33 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:06:04.763 15:47:33 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:04.763 15:47:33 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:04.763 15:47:33 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:04.763 15:47:33 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:06:04.763 15:47:33 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:06:04.763 15:47:33 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:04.763 15:47:33 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:04.763 15:47:33 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:04.763 15:47:33 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:06:04.763 { 00:06:04.763 "name": "Malloc0", 00:06:04.763 "aliases": [ 00:06:04.763 "f452ec31-668f-4f77-99ff-7a54bac91c93" 00:06:04.763 ], 00:06:04.763 "product_name": "Malloc disk", 00:06:04.763 "block_size": 512, 00:06:04.763 "num_blocks": 16384, 00:06:04.763 "uuid": "f452ec31-668f-4f77-99ff-7a54bac91c93", 00:06:04.763 "assigned_rate_limits": { 00:06:04.763 "rw_ios_per_sec": 0, 00:06:04.763 "rw_mbytes_per_sec": 0, 00:06:04.763 "r_mbytes_per_sec": 0, 00:06:04.763 "w_mbytes_per_sec": 0 00:06:04.763 }, 00:06:04.763 "claimed": false, 00:06:04.763 "zoned": false, 00:06:04.763 "supported_io_types": { 00:06:04.763 "read": true, 00:06:04.763 "write": true, 00:06:04.763 "unmap": true, 00:06:04.763 "flush": true, 00:06:04.763 "reset": true, 00:06:04.763 "nvme_admin": false, 00:06:04.763 "nvme_io": false, 00:06:04.763 "nvme_io_md": false, 00:06:04.763 "write_zeroes": true, 00:06:04.763 "zcopy": true, 00:06:04.763 "get_zone_info": false, 00:06:04.763 "zone_management": false, 00:06:04.763 "zone_append": false, 00:06:04.763 "compare": false, 00:06:04.763 "compare_and_write": false, 00:06:04.763 "abort": true, 00:06:04.763 "seek_hole": false, 00:06:04.763 "seek_data": false, 00:06:04.763 "copy": true, 00:06:04.763 "nvme_iov_md": false 00:06:04.763 }, 00:06:04.763 "memory_domains": [ 00:06:04.763 { 00:06:04.763 "dma_device_id": "system", 00:06:04.763 "dma_device_type": 1 00:06:04.763 }, 00:06:04.764 { 00:06:04.764 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:04.764 "dma_device_type": 2 00:06:04.764 } 00:06:04.764 ], 00:06:04.764 "driver_specific": {} 00:06:04.764 } 00:06:04.764 ]' 00:06:04.764 15:47:33 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:06:04.764 15:47:33 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:06:04.764 15:47:33 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:06:04.764 15:47:33 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:04.764 15:47:33 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:04.764 [2024-07-15 15:47:33.588467] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:06:04.764 [2024-07-15 15:47:33.588498] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:04.764 [2024-07-15 15:47:33.588510] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x21522d0 00:06:04.764 [2024-07-15 15:47:33.588516] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:04.764 
[2024-07-15 15:47:33.589593] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:04.764 [2024-07-15 15:47:33.589616] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:06:04.764 Passthru0 00:06:04.764 15:47:33 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:04.764 15:47:33 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:06:04.764 15:47:33 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:04.764 15:47:33 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:04.764 15:47:33 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:04.764 15:47:33 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:06:04.764 { 00:06:04.764 "name": "Malloc0", 00:06:04.764 "aliases": [ 00:06:04.764 "f452ec31-668f-4f77-99ff-7a54bac91c93" 00:06:04.764 ], 00:06:04.764 "product_name": "Malloc disk", 00:06:04.764 "block_size": 512, 00:06:04.764 "num_blocks": 16384, 00:06:04.764 "uuid": "f452ec31-668f-4f77-99ff-7a54bac91c93", 00:06:04.764 "assigned_rate_limits": { 00:06:04.764 "rw_ios_per_sec": 0, 00:06:04.764 "rw_mbytes_per_sec": 0, 00:06:04.764 "r_mbytes_per_sec": 0, 00:06:04.764 "w_mbytes_per_sec": 0 00:06:04.764 }, 00:06:04.764 "claimed": true, 00:06:04.764 "claim_type": "exclusive_write", 00:06:04.764 "zoned": false, 00:06:04.764 "supported_io_types": { 00:06:04.764 "read": true, 00:06:04.764 "write": true, 00:06:04.764 "unmap": true, 00:06:04.764 "flush": true, 00:06:04.764 "reset": true, 00:06:04.764 "nvme_admin": false, 00:06:04.764 "nvme_io": false, 00:06:04.764 "nvme_io_md": false, 00:06:04.764 "write_zeroes": true, 00:06:04.764 "zcopy": true, 00:06:04.764 "get_zone_info": false, 00:06:04.764 "zone_management": false, 00:06:04.764 "zone_append": false, 00:06:04.764 "compare": false, 00:06:04.764 "compare_and_write": false, 00:06:04.764 "abort": true, 00:06:04.764 "seek_hole": false, 00:06:04.764 "seek_data": false, 00:06:04.764 "copy": true, 00:06:04.764 "nvme_iov_md": false 00:06:04.764 }, 00:06:04.764 "memory_domains": [ 00:06:04.764 { 00:06:04.764 "dma_device_id": "system", 00:06:04.764 "dma_device_type": 1 00:06:04.764 }, 00:06:04.764 { 00:06:04.764 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:04.764 "dma_device_type": 2 00:06:04.764 } 00:06:04.764 ], 00:06:04.764 "driver_specific": {} 00:06:04.764 }, 00:06:04.764 { 00:06:04.764 "name": "Passthru0", 00:06:04.764 "aliases": [ 00:06:04.764 "cc19fb87-a393-5579-8bac-fcab06fb1dcb" 00:06:04.764 ], 00:06:04.764 "product_name": "passthru", 00:06:04.764 "block_size": 512, 00:06:04.764 "num_blocks": 16384, 00:06:04.764 "uuid": "cc19fb87-a393-5579-8bac-fcab06fb1dcb", 00:06:04.764 "assigned_rate_limits": { 00:06:04.764 "rw_ios_per_sec": 0, 00:06:04.764 "rw_mbytes_per_sec": 0, 00:06:04.764 "r_mbytes_per_sec": 0, 00:06:04.764 "w_mbytes_per_sec": 0 00:06:04.764 }, 00:06:04.764 "claimed": false, 00:06:04.764 "zoned": false, 00:06:04.764 "supported_io_types": { 00:06:04.764 "read": true, 00:06:04.764 "write": true, 00:06:04.764 "unmap": true, 00:06:04.764 "flush": true, 00:06:04.764 "reset": true, 00:06:04.764 "nvme_admin": false, 00:06:04.764 "nvme_io": false, 00:06:04.764 "nvme_io_md": false, 00:06:04.764 "write_zeroes": true, 00:06:04.764 "zcopy": true, 00:06:04.764 "get_zone_info": false, 00:06:04.764 "zone_management": false, 00:06:04.764 "zone_append": false, 00:06:04.764 "compare": false, 00:06:04.764 "compare_and_write": false, 00:06:04.764 "abort": true, 00:06:04.764 "seek_hole": false, 
00:06:04.764 "seek_data": false, 00:06:04.764 "copy": true, 00:06:04.764 "nvme_iov_md": false 00:06:04.764 }, 00:06:04.764 "memory_domains": [ 00:06:04.764 { 00:06:04.764 "dma_device_id": "system", 00:06:04.764 "dma_device_type": 1 00:06:04.764 }, 00:06:04.764 { 00:06:04.764 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:04.764 "dma_device_type": 2 00:06:04.764 } 00:06:04.764 ], 00:06:04.764 "driver_specific": { 00:06:04.764 "passthru": { 00:06:04.764 "name": "Passthru0", 00:06:04.764 "base_bdev_name": "Malloc0" 00:06:04.764 } 00:06:04.764 } 00:06:04.764 } 00:06:04.764 ]' 00:06:04.764 15:47:33 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:06:04.764 15:47:33 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:06:04.764 15:47:33 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:06:04.764 15:47:33 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:04.764 15:47:33 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:04.764 15:47:33 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:04.764 15:47:33 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:06:04.764 15:47:33 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:04.764 15:47:33 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:04.764 15:47:33 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:04.764 15:47:33 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:06:04.764 15:47:33 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:04.764 15:47:33 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:04.764 15:47:33 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:04.764 15:47:33 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:06:04.764 15:47:33 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:06:05.021 15:47:33 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:06:05.021 00:06:05.021 real 0m0.285s 00:06:05.021 user 0m0.188s 00:06:05.021 sys 0m0.029s 00:06:05.021 15:47:33 rpc.rpc_integrity -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:05.021 15:47:33 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:05.021 ************************************ 00:06:05.021 END TEST rpc_integrity 00:06:05.021 ************************************ 00:06:05.022 15:47:33 rpc -- common/autotest_common.sh@1142 -- # return 0 00:06:05.022 15:47:33 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:06:05.022 15:47:33 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:05.022 15:47:33 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:05.022 15:47:33 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:05.022 ************************************ 00:06:05.022 START TEST rpc_plugins 00:06:05.022 ************************************ 00:06:05.022 15:47:33 rpc.rpc_plugins -- common/autotest_common.sh@1123 -- # rpc_plugins 00:06:05.022 15:47:33 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:06:05.022 15:47:33 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:05.022 15:47:33 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:05.022 15:47:33 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:05.022 15:47:33 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:06:05.022 15:47:33 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # 
rpc_cmd bdev_get_bdevs 00:06:05.022 15:47:33 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:05.022 15:47:33 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:05.022 15:47:33 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:05.022 15:47:33 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:06:05.022 { 00:06:05.022 "name": "Malloc1", 00:06:05.022 "aliases": [ 00:06:05.022 "0b7a079c-3d08-4125-9bf6-588bbdcfdb8e" 00:06:05.022 ], 00:06:05.022 "product_name": "Malloc disk", 00:06:05.022 "block_size": 4096, 00:06:05.022 "num_blocks": 256, 00:06:05.022 "uuid": "0b7a079c-3d08-4125-9bf6-588bbdcfdb8e", 00:06:05.022 "assigned_rate_limits": { 00:06:05.022 "rw_ios_per_sec": 0, 00:06:05.022 "rw_mbytes_per_sec": 0, 00:06:05.022 "r_mbytes_per_sec": 0, 00:06:05.022 "w_mbytes_per_sec": 0 00:06:05.022 }, 00:06:05.022 "claimed": false, 00:06:05.022 "zoned": false, 00:06:05.022 "supported_io_types": { 00:06:05.022 "read": true, 00:06:05.022 "write": true, 00:06:05.022 "unmap": true, 00:06:05.022 "flush": true, 00:06:05.022 "reset": true, 00:06:05.022 "nvme_admin": false, 00:06:05.022 "nvme_io": false, 00:06:05.022 "nvme_io_md": false, 00:06:05.022 "write_zeroes": true, 00:06:05.022 "zcopy": true, 00:06:05.022 "get_zone_info": false, 00:06:05.022 "zone_management": false, 00:06:05.022 "zone_append": false, 00:06:05.022 "compare": false, 00:06:05.022 "compare_and_write": false, 00:06:05.022 "abort": true, 00:06:05.022 "seek_hole": false, 00:06:05.022 "seek_data": false, 00:06:05.022 "copy": true, 00:06:05.022 "nvme_iov_md": false 00:06:05.022 }, 00:06:05.022 "memory_domains": [ 00:06:05.022 { 00:06:05.022 "dma_device_id": "system", 00:06:05.022 "dma_device_type": 1 00:06:05.022 }, 00:06:05.022 { 00:06:05.022 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:05.022 "dma_device_type": 2 00:06:05.022 } 00:06:05.022 ], 00:06:05.022 "driver_specific": {} 00:06:05.022 } 00:06:05.022 ]' 00:06:05.022 15:47:33 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:06:05.022 15:47:33 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:06:05.022 15:47:33 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:06:05.022 15:47:33 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:05.022 15:47:33 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:05.022 15:47:33 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:05.022 15:47:33 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:06:05.022 15:47:33 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:05.022 15:47:33 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:05.022 15:47:33 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:05.022 15:47:33 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:06:05.022 15:47:33 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:06:05.022 15:47:33 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:06:05.022 00:06:05.022 real 0m0.137s 00:06:05.022 user 0m0.086s 00:06:05.022 sys 0m0.017s 00:06:05.022 15:47:33 rpc.rpc_plugins -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:05.022 15:47:33 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:05.022 ************************************ 00:06:05.022 END TEST rpc_plugins 00:06:05.022 ************************************ 00:06:05.278 15:47:33 rpc -- common/autotest_common.sh@1142 -- # return 0 00:06:05.278 15:47:33 rpc -- rpc/rpc.sh@75 -- # 
run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:06:05.278 15:47:33 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:05.278 15:47:33 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:05.278 15:47:33 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:05.278 ************************************ 00:06:05.278 START TEST rpc_trace_cmd_test 00:06:05.278 ************************************ 00:06:05.278 15:47:34 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1123 -- # rpc_trace_cmd_test 00:06:05.278 15:47:34 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:06:05.278 15:47:34 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:06:05.278 15:47:34 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:05.278 15:47:34 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:06:05.278 15:47:34 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:05.278 15:47:34 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:06:05.278 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid3581798", 00:06:05.278 "tpoint_group_mask": "0x8", 00:06:05.278 "iscsi_conn": { 00:06:05.278 "mask": "0x2", 00:06:05.278 "tpoint_mask": "0x0" 00:06:05.278 }, 00:06:05.278 "scsi": { 00:06:05.278 "mask": "0x4", 00:06:05.278 "tpoint_mask": "0x0" 00:06:05.278 }, 00:06:05.278 "bdev": { 00:06:05.278 "mask": "0x8", 00:06:05.278 "tpoint_mask": "0xffffffffffffffff" 00:06:05.278 }, 00:06:05.278 "nvmf_rdma": { 00:06:05.278 "mask": "0x10", 00:06:05.278 "tpoint_mask": "0x0" 00:06:05.278 }, 00:06:05.278 "nvmf_tcp": { 00:06:05.278 "mask": "0x20", 00:06:05.278 "tpoint_mask": "0x0" 00:06:05.278 }, 00:06:05.278 "ftl": { 00:06:05.278 "mask": "0x40", 00:06:05.278 "tpoint_mask": "0x0" 00:06:05.278 }, 00:06:05.278 "blobfs": { 00:06:05.278 "mask": "0x80", 00:06:05.278 "tpoint_mask": "0x0" 00:06:05.278 }, 00:06:05.278 "dsa": { 00:06:05.278 "mask": "0x200", 00:06:05.278 "tpoint_mask": "0x0" 00:06:05.278 }, 00:06:05.278 "thread": { 00:06:05.278 "mask": "0x400", 00:06:05.278 "tpoint_mask": "0x0" 00:06:05.278 }, 00:06:05.278 "nvme_pcie": { 00:06:05.278 "mask": "0x800", 00:06:05.278 "tpoint_mask": "0x0" 00:06:05.278 }, 00:06:05.278 "iaa": { 00:06:05.278 "mask": "0x1000", 00:06:05.278 "tpoint_mask": "0x0" 00:06:05.278 }, 00:06:05.278 "nvme_tcp": { 00:06:05.278 "mask": "0x2000", 00:06:05.278 "tpoint_mask": "0x0" 00:06:05.278 }, 00:06:05.278 "bdev_nvme": { 00:06:05.278 "mask": "0x4000", 00:06:05.278 "tpoint_mask": "0x0" 00:06:05.278 }, 00:06:05.278 "sock": { 00:06:05.278 "mask": "0x8000", 00:06:05.278 "tpoint_mask": "0x0" 00:06:05.278 } 00:06:05.278 }' 00:06:05.278 15:47:34 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:06:05.278 15:47:34 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:06:05.278 15:47:34 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:06:05.278 15:47:34 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:06:05.278 15:47:34 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:06:05.278 15:47:34 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:06:05.278 15:47:34 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:06:05.278 15:47:34 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:06:05.278 15:47:34 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:06:05.278 15:47:34 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 
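The rpc_trace_cmd_test block above makes three assertions against trace_get_info: the group mask reflects the -e bdev flag the target was started with (0x8 is the bdev tracepoint group), the shared-memory path embeds the target's pid, and the bdev tpoint_mask is non-zero while the group is enabled. A hedged standalone equivalent against a running target, assuming the in-tree ./scripts/rpc.py and jq:

    info=$(./scripts/rpc.py trace_get_info)
    jq -r '.tpoint_group_mask' <<< "$info"    # expect "0x8" for a target started with -e bdev
    jq -r '.tpoint_shm_path'   <<< "$info"    # /dev/shm/spdk_tgt_trace.pid<pid>
    jq -r '.bdev.tpoint_mask'  <<< "$info"    # non-zero, e.g. 0xffffffffffffffff above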
00:06:05.536 00:06:05.536 real 0m0.206s 00:06:05.536 user 0m0.171s 00:06:05.536 sys 0m0.029s 00:06:05.536 15:47:34 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:05.536 15:47:34 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:06:05.536 ************************************ 00:06:05.536 END TEST rpc_trace_cmd_test 00:06:05.536 ************************************ 00:06:05.536 15:47:34 rpc -- common/autotest_common.sh@1142 -- # return 0 00:06:05.536 15:47:34 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:06:05.536 15:47:34 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:06:05.536 15:47:34 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:06:05.536 15:47:34 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:05.536 15:47:34 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:05.536 15:47:34 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:05.536 ************************************ 00:06:05.536 START TEST rpc_daemon_integrity 00:06:05.536 ************************************ 00:06:05.536 15:47:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1123 -- # rpc_integrity 00:06:05.536 15:47:34 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:06:05.536 15:47:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:05.536 15:47:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:05.536 15:47:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:05.536 15:47:34 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:06:05.536 15:47:34 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:06:05.536 15:47:34 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:06:05.536 15:47:34 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:06:05.536 15:47:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:05.536 15:47:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:05.536 15:47:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:05.536 15:47:34 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:06:05.536 15:47:34 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:06:05.536 15:47:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:05.536 15:47:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:05.536 15:47:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:05.536 15:47:34 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:06:05.536 { 00:06:05.536 "name": "Malloc2", 00:06:05.536 "aliases": [ 00:06:05.536 "bc63dd50-1038-4836-9362-cf9a58f4ecc7" 00:06:05.536 ], 00:06:05.536 "product_name": "Malloc disk", 00:06:05.536 "block_size": 512, 00:06:05.536 "num_blocks": 16384, 00:06:05.536 "uuid": "bc63dd50-1038-4836-9362-cf9a58f4ecc7", 00:06:05.536 "assigned_rate_limits": { 00:06:05.536 "rw_ios_per_sec": 0, 00:06:05.536 "rw_mbytes_per_sec": 0, 00:06:05.536 "r_mbytes_per_sec": 0, 00:06:05.536 "w_mbytes_per_sec": 0 00:06:05.536 }, 00:06:05.536 "claimed": false, 00:06:05.536 "zoned": false, 00:06:05.536 "supported_io_types": { 00:06:05.536 "read": true, 00:06:05.536 "write": true, 00:06:05.536 "unmap": true, 00:06:05.536 "flush": true, 00:06:05.536 "reset": true, 00:06:05.536 "nvme_admin": false, 00:06:05.536 "nvme_io": false, 
00:06:05.536 "nvme_io_md": false, 00:06:05.536 "write_zeroes": true, 00:06:05.536 "zcopy": true, 00:06:05.536 "get_zone_info": false, 00:06:05.536 "zone_management": false, 00:06:05.536 "zone_append": false, 00:06:05.536 "compare": false, 00:06:05.536 "compare_and_write": false, 00:06:05.536 "abort": true, 00:06:05.536 "seek_hole": false, 00:06:05.536 "seek_data": false, 00:06:05.536 "copy": true, 00:06:05.536 "nvme_iov_md": false 00:06:05.536 }, 00:06:05.536 "memory_domains": [ 00:06:05.536 { 00:06:05.536 "dma_device_id": "system", 00:06:05.536 "dma_device_type": 1 00:06:05.536 }, 00:06:05.536 { 00:06:05.536 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:05.536 "dma_device_type": 2 00:06:05.536 } 00:06:05.536 ], 00:06:05.536 "driver_specific": {} 00:06:05.536 } 00:06:05.536 ]' 00:06:05.536 15:47:34 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:06:05.536 15:47:34 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:06:05.536 15:47:34 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:06:05.536 15:47:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:05.536 15:47:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:05.536 [2024-07-15 15:47:34.406833] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:06:05.536 [2024-07-15 15:47:34.406862] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:05.536 [2024-07-15 15:47:34.406874] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x22e9ac0 00:06:05.536 [2024-07-15 15:47:34.406880] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:05.536 [2024-07-15 15:47:34.407827] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:05.536 [2024-07-15 15:47:34.407848] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:06:05.536 Passthru0 00:06:05.536 15:47:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:05.536 15:47:34 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:06:05.536 15:47:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:05.536 15:47:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:05.536 15:47:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:05.536 15:47:34 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:06:05.536 { 00:06:05.536 "name": "Malloc2", 00:06:05.536 "aliases": [ 00:06:05.536 "bc63dd50-1038-4836-9362-cf9a58f4ecc7" 00:06:05.536 ], 00:06:05.536 "product_name": "Malloc disk", 00:06:05.536 "block_size": 512, 00:06:05.536 "num_blocks": 16384, 00:06:05.536 "uuid": "bc63dd50-1038-4836-9362-cf9a58f4ecc7", 00:06:05.536 "assigned_rate_limits": { 00:06:05.536 "rw_ios_per_sec": 0, 00:06:05.536 "rw_mbytes_per_sec": 0, 00:06:05.536 "r_mbytes_per_sec": 0, 00:06:05.536 "w_mbytes_per_sec": 0 00:06:05.536 }, 00:06:05.536 "claimed": true, 00:06:05.536 "claim_type": "exclusive_write", 00:06:05.536 "zoned": false, 00:06:05.536 "supported_io_types": { 00:06:05.536 "read": true, 00:06:05.536 "write": true, 00:06:05.536 "unmap": true, 00:06:05.536 "flush": true, 00:06:05.536 "reset": true, 00:06:05.536 "nvme_admin": false, 00:06:05.536 "nvme_io": false, 00:06:05.536 "nvme_io_md": false, 00:06:05.536 "write_zeroes": true, 00:06:05.536 "zcopy": true, 00:06:05.536 "get_zone_info": 
false, 00:06:05.536 "zone_management": false, 00:06:05.536 "zone_append": false, 00:06:05.536 "compare": false, 00:06:05.536 "compare_and_write": false, 00:06:05.536 "abort": true, 00:06:05.536 "seek_hole": false, 00:06:05.536 "seek_data": false, 00:06:05.536 "copy": true, 00:06:05.536 "nvme_iov_md": false 00:06:05.536 }, 00:06:05.536 "memory_domains": [ 00:06:05.536 { 00:06:05.536 "dma_device_id": "system", 00:06:05.536 "dma_device_type": 1 00:06:05.536 }, 00:06:05.536 { 00:06:05.536 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:05.536 "dma_device_type": 2 00:06:05.536 } 00:06:05.536 ], 00:06:05.536 "driver_specific": {} 00:06:05.536 }, 00:06:05.536 { 00:06:05.536 "name": "Passthru0", 00:06:05.536 "aliases": [ 00:06:05.536 "6c368d65-fb9e-568a-b90f-8889041a926b" 00:06:05.536 ], 00:06:05.536 "product_name": "passthru", 00:06:05.536 "block_size": 512, 00:06:05.536 "num_blocks": 16384, 00:06:05.536 "uuid": "6c368d65-fb9e-568a-b90f-8889041a926b", 00:06:05.536 "assigned_rate_limits": { 00:06:05.536 "rw_ios_per_sec": 0, 00:06:05.536 "rw_mbytes_per_sec": 0, 00:06:05.536 "r_mbytes_per_sec": 0, 00:06:05.536 "w_mbytes_per_sec": 0 00:06:05.536 }, 00:06:05.536 "claimed": false, 00:06:05.536 "zoned": false, 00:06:05.536 "supported_io_types": { 00:06:05.536 "read": true, 00:06:05.536 "write": true, 00:06:05.536 "unmap": true, 00:06:05.536 "flush": true, 00:06:05.536 "reset": true, 00:06:05.536 "nvme_admin": false, 00:06:05.536 "nvme_io": false, 00:06:05.536 "nvme_io_md": false, 00:06:05.536 "write_zeroes": true, 00:06:05.536 "zcopy": true, 00:06:05.536 "get_zone_info": false, 00:06:05.536 "zone_management": false, 00:06:05.536 "zone_append": false, 00:06:05.536 "compare": false, 00:06:05.536 "compare_and_write": false, 00:06:05.536 "abort": true, 00:06:05.536 "seek_hole": false, 00:06:05.536 "seek_data": false, 00:06:05.536 "copy": true, 00:06:05.536 "nvme_iov_md": false 00:06:05.536 }, 00:06:05.536 "memory_domains": [ 00:06:05.536 { 00:06:05.536 "dma_device_id": "system", 00:06:05.536 "dma_device_type": 1 00:06:05.536 }, 00:06:05.536 { 00:06:05.536 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:05.536 "dma_device_type": 2 00:06:05.536 } 00:06:05.536 ], 00:06:05.536 "driver_specific": { 00:06:05.536 "passthru": { 00:06:05.536 "name": "Passthru0", 00:06:05.536 "base_bdev_name": "Malloc2" 00:06:05.536 } 00:06:05.536 } 00:06:05.536 } 00:06:05.536 ]' 00:06:05.536 15:47:34 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:06:05.795 15:47:34 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:06:05.795 15:47:34 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:06:05.795 15:47:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:05.795 15:47:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:05.795 15:47:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:05.795 15:47:34 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:06:05.795 15:47:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:05.795 15:47:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:05.795 15:47:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:05.795 15:47:34 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:06:05.795 15:47:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:05.795 15:47:34 
rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:05.795 15:47:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:05.795 15:47:34 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:06:05.795 15:47:34 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:06:05.795 15:47:34 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:06:05.795 00:06:05.795 real 0m0.278s 00:06:05.795 user 0m0.177s 00:06:05.795 sys 0m0.034s 00:06:05.795 15:47:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:05.795 15:47:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:05.795 ************************************ 00:06:05.795 END TEST rpc_daemon_integrity 00:06:05.795 ************************************ 00:06:05.795 15:47:34 rpc -- common/autotest_common.sh@1142 -- # return 0 00:06:05.795 15:47:34 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:06:05.795 15:47:34 rpc -- rpc/rpc.sh@84 -- # killprocess 3581798 00:06:05.795 15:47:34 rpc -- common/autotest_common.sh@948 -- # '[' -z 3581798 ']' 00:06:05.795 15:47:34 rpc -- common/autotest_common.sh@952 -- # kill -0 3581798 00:06:05.795 15:47:34 rpc -- common/autotest_common.sh@953 -- # uname 00:06:05.795 15:47:34 rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:05.795 15:47:34 rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3581798 00:06:05.795 15:47:34 rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:05.795 15:47:34 rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:05.795 15:47:34 rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3581798' 00:06:05.795 killing process with pid 3581798 00:06:05.795 15:47:34 rpc -- common/autotest_common.sh@967 -- # kill 3581798 00:06:05.795 15:47:34 rpc -- common/autotest_common.sh@972 -- # wait 3581798 00:06:06.053 00:06:06.053 real 0m2.442s 00:06:06.053 user 0m3.158s 00:06:06.053 sys 0m0.654s 00:06:06.053 15:47:34 rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:06.053 15:47:34 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:06.053 ************************************ 00:06:06.053 END TEST rpc 00:06:06.053 ************************************ 00:06:06.053 15:47:34 -- common/autotest_common.sh@1142 -- # return 0 00:06:06.053 15:47:34 -- spdk/autotest.sh@170 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:06:06.053 15:47:34 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:06.053 15:47:34 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:06.053 15:47:34 -- common/autotest_common.sh@10 -- # set +x 00:06:06.312 ************************************ 00:06:06.312 START TEST skip_rpc 00:06:06.312 ************************************ 00:06:06.312 15:47:35 skip_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:06:06.312 * Looking for test storage... 
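rpc_integrity and rpc_daemon_integrity above drive the same create/inspect/delete round trip through rpc_cmd (the daemon variant is skipped in this run, so rpc stays rpc_cmd). A minimal sketch of that round trip with plain rpc.py against a running target; the sizes mirror the log, and the created name is whatever bdev_malloc_create prints:

    name=$(./scripts/rpc.py bdev_malloc_create 8 512)       # 8 MB malloc bdev, 512-byte blocks
    ./scripts/rpc.py bdev_passthru_create -b "$name" -p Passthru0
    ./scripts/rpc.py bdev_get_bdevs | jq length             # expect 2: malloc + passthru
    ./scripts/rpc.py bdev_passthru_delete Passthru0
    ./scripts/rpc.py bdev_malloc_delete "$name"
    ./scripts/rpc.py bdev_get_bdevs | jq length             # back to 0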
00:06:06.312 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:06:06.312 15:47:35 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:06:06.312 15:47:35 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:06:06.312 15:47:35 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:06:06.312 15:47:35 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:06.312 15:47:35 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:06.312 15:47:35 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:06.312 ************************************ 00:06:06.312 START TEST skip_rpc 00:06:06.312 ************************************ 00:06:06.312 15:47:35 skip_rpc.skip_rpc -- common/autotest_common.sh@1123 -- # test_skip_rpc 00:06:06.312 15:47:35 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=3582433 00:06:06.312 15:47:35 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:06.312 15:47:35 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:06:06.312 15:47:35 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:06:06.312 [2024-07-15 15:47:35.177719] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 00:06:06.312 [2024-07-15 15:47:35.177758] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3582433 ] 00:06:06.312 EAL: No free 2048 kB hugepages reported on node 1 00:06:06.312 [2024-07-15 15:47:35.229748] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:06.569 [2024-07-15 15:47:35.303348] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:11.826 15:47:40 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:06:11.826 15:47:40 skip_rpc.skip_rpc -- common/autotest_common.sh@648 -- # local es=0 00:06:11.826 15:47:40 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd spdk_get_version 00:06:11.826 15:47:40 skip_rpc.skip_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:06:11.826 15:47:40 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:11.827 15:47:40 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:06:11.827 15:47:40 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:11.827 15:47:40 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # rpc_cmd spdk_get_version 00:06:11.827 15:47:40 skip_rpc.skip_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:11.827 15:47:40 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:11.827 15:47:40 skip_rpc.skip_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:06:11.827 15:47:40 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # es=1 00:06:11.827 15:47:40 skip_rpc.skip_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:11.827 15:47:40 skip_rpc.skip_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:11.827 15:47:40 skip_rpc.skip_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:11.827 15:47:40 skip_rpc.skip_rpc -- 
rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:06:11.827 15:47:40 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 3582433 00:06:11.827 15:47:40 skip_rpc.skip_rpc -- common/autotest_common.sh@948 -- # '[' -z 3582433 ']' 00:06:11.827 15:47:40 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # kill -0 3582433 00:06:11.827 15:47:40 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # uname 00:06:11.827 15:47:40 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:11.827 15:47:40 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3582433 00:06:11.827 15:47:40 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:11.827 15:47:40 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:11.827 15:47:40 skip_rpc.skip_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3582433' 00:06:11.827 killing process with pid 3582433 00:06:11.827 15:47:40 skip_rpc.skip_rpc -- common/autotest_common.sh@967 -- # kill 3582433 00:06:11.827 15:47:40 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # wait 3582433 00:06:11.827 00:06:11.827 real 0m5.362s 00:06:11.827 user 0m5.126s 00:06:11.827 sys 0m0.261s 00:06:11.827 15:47:40 skip_rpc.skip_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:11.827 15:47:40 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:11.827 ************************************ 00:06:11.827 END TEST skip_rpc 00:06:11.827 ************************************ 00:06:11.827 15:47:40 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:06:11.827 15:47:40 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:06:11.827 15:47:40 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:11.827 15:47:40 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:11.827 15:47:40 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:11.827 ************************************ 00:06:11.827 START TEST skip_rpc_with_json 00:06:11.827 ************************************ 00:06:11.827 15:47:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1123 -- # test_skip_rpc_with_json 00:06:11.827 15:47:40 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:06:11.827 15:47:40 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=3583379 00:06:11.827 15:47:40 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:11.827 15:47:40 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:11.827 15:47:40 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 3583379 00:06:11.827 15:47:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@829 -- # '[' -z 3583379 ']' 00:06:11.827 15:47:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:11.827 15:47:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:11.827 15:47:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:11.827 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
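skip_rpc, which just finished, reduces to a single assertion: with the target started under --no-rpc-server, rpc_cmd spdk_get_version must fail (es=1). Nearly all of its 5.362 s wall time is the fixed sleep before the check. The premise, sketched with the paths from this log:

    ./build/bin/spdk_tgt --no-rpc-server -m 0x1 &
    pid=$!
    sleep 5                                   # the grace period the test uses before asserting
    if ./scripts/rpc.py spdk_get_version >/dev/null 2>&1; then
        echo "FAIL: RPC answered although no RPC server was started"
    else
        echo "PASS: RPC correctly unavailable"
    fi
    kill "$pid"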
00:06:11.827 15:47:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # xtrace_disable
00:06:11.827 15:47:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x
00:06:11.827 [2024-07-15 15:47:40.602118] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization...
00:06:11.827 [2024-07-15 15:47:40.602156] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3583379 ]
00:06:11.827 EAL: No free 2048 kB hugepages reported on node 1
00:06:11.827 [2024-07-15 15:47:40.655896] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:11.827 [2024-07-15 15:47:40.722876] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:06:12.759 15:47:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:06:12.759 15:47:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@862 -- # return 0
00:06:12.759 15:47:41 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp
00:06:12.759 15:47:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable
00:06:12.759 15:47:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x
00:06:12.759 [2024-07-15 15:47:41.407971] nvmf_rpc.c:2569:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist
00:06:12.759 request:
00:06:12.759 {
00:06:12.759 "trtype": "tcp",
00:06:12.759 "method": "nvmf_get_transports",
00:06:12.759 "req_id": 1
00:06:12.759 }
00:06:12.759 Got JSON-RPC error response
00:06:12.759 response:
00:06:12.759 {
00:06:12.759 "code": -19,
00:06:12.759 "message": "No such device"
00:06:12.759 }
00:06:12.759 15:47:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]]
00:06:12.759 15:47:41 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp
00:06:12.759 15:47:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable
00:06:12.759 15:47:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x
00:06:12.759 [2024-07-15 15:47:41.416062] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:06:12.759 15:47:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:06:12.759 15:47:41 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config
00:06:12.759 15:47:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable
00:06:12.759 15:47:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x
00:06:12.759 15:47:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:06:12.759 15:47:41 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json
00:06:12.759 {
00:06:12.759 "subsystems": [
00:06:12.759 {
00:06:12.759 "subsystem": "vfio_user_target",
00:06:12.759 "config": null
00:06:12.759 },
00:06:12.759 {
00:06:12.759 "subsystem": "keyring",
00:06:12.759 "config": []
00:06:12.759 },
00:06:12.759 {
00:06:12.759 "subsystem": "iobuf",
00:06:12.759 "config": [
00:06:12.759 {
00:06:12.759 "method": "iobuf_set_options",
00:06:12.759 "params": {
00:06:12.759 "small_pool_count": 8192,
00:06:12.759 "large_pool_count": 1024,
00:06:12.759 "small_bufsize": 8192,
00:06:12.759 "large_bufsize":
135168 00:06:12.759 } 00:06:12.759 } 00:06:12.759 ] 00:06:12.759 }, 00:06:12.759 { 00:06:12.759 "subsystem": "sock", 00:06:12.759 "config": [ 00:06:12.759 { 00:06:12.759 "method": "sock_set_default_impl", 00:06:12.759 "params": { 00:06:12.759 "impl_name": "posix" 00:06:12.759 } 00:06:12.759 }, 00:06:12.759 { 00:06:12.759 "method": "sock_impl_set_options", 00:06:12.759 "params": { 00:06:12.759 "impl_name": "ssl", 00:06:12.759 "recv_buf_size": 4096, 00:06:12.759 "send_buf_size": 4096, 00:06:12.759 "enable_recv_pipe": true, 00:06:12.759 "enable_quickack": false, 00:06:12.759 "enable_placement_id": 0, 00:06:12.759 "enable_zerocopy_send_server": true, 00:06:12.759 "enable_zerocopy_send_client": false, 00:06:12.759 "zerocopy_threshold": 0, 00:06:12.759 "tls_version": 0, 00:06:12.759 "enable_ktls": false 00:06:12.759 } 00:06:12.759 }, 00:06:12.759 { 00:06:12.759 "method": "sock_impl_set_options", 00:06:12.759 "params": { 00:06:12.759 "impl_name": "posix", 00:06:12.759 "recv_buf_size": 2097152, 00:06:12.759 "send_buf_size": 2097152, 00:06:12.759 "enable_recv_pipe": true, 00:06:12.759 "enable_quickack": false, 00:06:12.759 "enable_placement_id": 0, 00:06:12.759 "enable_zerocopy_send_server": true, 00:06:12.759 "enable_zerocopy_send_client": false, 00:06:12.759 "zerocopy_threshold": 0, 00:06:12.759 "tls_version": 0, 00:06:12.759 "enable_ktls": false 00:06:12.759 } 00:06:12.759 } 00:06:12.759 ] 00:06:12.759 }, 00:06:12.759 { 00:06:12.759 "subsystem": "vmd", 00:06:12.759 "config": [] 00:06:12.759 }, 00:06:12.759 { 00:06:12.759 "subsystem": "accel", 00:06:12.759 "config": [ 00:06:12.759 { 00:06:12.759 "method": "accel_set_options", 00:06:12.759 "params": { 00:06:12.759 "small_cache_size": 128, 00:06:12.759 "large_cache_size": 16, 00:06:12.759 "task_count": 2048, 00:06:12.759 "sequence_count": 2048, 00:06:12.759 "buf_count": 2048 00:06:12.759 } 00:06:12.759 } 00:06:12.759 ] 00:06:12.759 }, 00:06:12.759 { 00:06:12.759 "subsystem": "bdev", 00:06:12.759 "config": [ 00:06:12.759 { 00:06:12.759 "method": "bdev_set_options", 00:06:12.759 "params": { 00:06:12.759 "bdev_io_pool_size": 65535, 00:06:12.759 "bdev_io_cache_size": 256, 00:06:12.759 "bdev_auto_examine": true, 00:06:12.759 "iobuf_small_cache_size": 128, 00:06:12.759 "iobuf_large_cache_size": 16 00:06:12.759 } 00:06:12.759 }, 00:06:12.759 { 00:06:12.759 "method": "bdev_raid_set_options", 00:06:12.759 "params": { 00:06:12.759 "process_window_size_kb": 1024 00:06:12.759 } 00:06:12.759 }, 00:06:12.759 { 00:06:12.759 "method": "bdev_iscsi_set_options", 00:06:12.759 "params": { 00:06:12.759 "timeout_sec": 30 00:06:12.759 } 00:06:12.759 }, 00:06:12.759 { 00:06:12.759 "method": "bdev_nvme_set_options", 00:06:12.759 "params": { 00:06:12.759 "action_on_timeout": "none", 00:06:12.759 "timeout_us": 0, 00:06:12.759 "timeout_admin_us": 0, 00:06:12.759 "keep_alive_timeout_ms": 10000, 00:06:12.759 "arbitration_burst": 0, 00:06:12.759 "low_priority_weight": 0, 00:06:12.759 "medium_priority_weight": 0, 00:06:12.759 "high_priority_weight": 0, 00:06:12.759 "nvme_adminq_poll_period_us": 10000, 00:06:12.759 "nvme_ioq_poll_period_us": 0, 00:06:12.759 "io_queue_requests": 0, 00:06:12.759 "delay_cmd_submit": true, 00:06:12.759 "transport_retry_count": 4, 00:06:12.759 "bdev_retry_count": 3, 00:06:12.759 "transport_ack_timeout": 0, 00:06:12.759 "ctrlr_loss_timeout_sec": 0, 00:06:12.759 "reconnect_delay_sec": 0, 00:06:12.759 "fast_io_fail_timeout_sec": 0, 00:06:12.759 "disable_auto_failback": false, 00:06:12.759 "generate_uuids": false, 00:06:12.759 "transport_tos": 0, 
00:06:12.759 "nvme_error_stat": false, 00:06:12.759 "rdma_srq_size": 0, 00:06:12.759 "io_path_stat": false, 00:06:12.759 "allow_accel_sequence": false, 00:06:12.759 "rdma_max_cq_size": 0, 00:06:12.759 "rdma_cm_event_timeout_ms": 0, 00:06:12.759 "dhchap_digests": [ 00:06:12.759 "sha256", 00:06:12.759 "sha384", 00:06:12.759 "sha512" 00:06:12.759 ], 00:06:12.759 "dhchap_dhgroups": [ 00:06:12.759 "null", 00:06:12.759 "ffdhe2048", 00:06:12.759 "ffdhe3072", 00:06:12.759 "ffdhe4096", 00:06:12.759 "ffdhe6144", 00:06:12.759 "ffdhe8192" 00:06:12.759 ] 00:06:12.759 } 00:06:12.759 }, 00:06:12.759 { 00:06:12.759 "method": "bdev_nvme_set_hotplug", 00:06:12.759 "params": { 00:06:12.759 "period_us": 100000, 00:06:12.759 "enable": false 00:06:12.759 } 00:06:12.759 }, 00:06:12.759 { 00:06:12.759 "method": "bdev_wait_for_examine" 00:06:12.759 } 00:06:12.759 ] 00:06:12.759 }, 00:06:12.759 { 00:06:12.759 "subsystem": "scsi", 00:06:12.759 "config": null 00:06:12.759 }, 00:06:12.759 { 00:06:12.760 "subsystem": "scheduler", 00:06:12.760 "config": [ 00:06:12.760 { 00:06:12.760 "method": "framework_set_scheduler", 00:06:12.760 "params": { 00:06:12.760 "name": "static" 00:06:12.760 } 00:06:12.760 } 00:06:12.760 ] 00:06:12.760 }, 00:06:12.760 { 00:06:12.760 "subsystem": "vhost_scsi", 00:06:12.760 "config": [] 00:06:12.760 }, 00:06:12.760 { 00:06:12.760 "subsystem": "vhost_blk", 00:06:12.760 "config": [] 00:06:12.760 }, 00:06:12.760 { 00:06:12.760 "subsystem": "ublk", 00:06:12.760 "config": [] 00:06:12.760 }, 00:06:12.760 { 00:06:12.760 "subsystem": "nbd", 00:06:12.760 "config": [] 00:06:12.760 }, 00:06:12.760 { 00:06:12.760 "subsystem": "nvmf", 00:06:12.760 "config": [ 00:06:12.760 { 00:06:12.760 "method": "nvmf_set_config", 00:06:12.760 "params": { 00:06:12.760 "discovery_filter": "match_any", 00:06:12.760 "admin_cmd_passthru": { 00:06:12.760 "identify_ctrlr": false 00:06:12.760 } 00:06:12.760 } 00:06:12.760 }, 00:06:12.760 { 00:06:12.760 "method": "nvmf_set_max_subsystems", 00:06:12.760 "params": { 00:06:12.760 "max_subsystems": 1024 00:06:12.760 } 00:06:12.760 }, 00:06:12.760 { 00:06:12.760 "method": "nvmf_set_crdt", 00:06:12.760 "params": { 00:06:12.760 "crdt1": 0, 00:06:12.760 "crdt2": 0, 00:06:12.760 "crdt3": 0 00:06:12.760 } 00:06:12.760 }, 00:06:12.760 { 00:06:12.760 "method": "nvmf_create_transport", 00:06:12.760 "params": { 00:06:12.760 "trtype": "TCP", 00:06:12.760 "max_queue_depth": 128, 00:06:12.760 "max_io_qpairs_per_ctrlr": 127, 00:06:12.760 "in_capsule_data_size": 4096, 00:06:12.760 "max_io_size": 131072, 00:06:12.760 "io_unit_size": 131072, 00:06:12.760 "max_aq_depth": 128, 00:06:12.760 "num_shared_buffers": 511, 00:06:12.760 "buf_cache_size": 4294967295, 00:06:12.760 "dif_insert_or_strip": false, 00:06:12.760 "zcopy": false, 00:06:12.760 "c2h_success": true, 00:06:12.760 "sock_priority": 0, 00:06:12.760 "abort_timeout_sec": 1, 00:06:12.760 "ack_timeout": 0, 00:06:12.760 "data_wr_pool_size": 0 00:06:12.760 } 00:06:12.760 } 00:06:12.760 ] 00:06:12.760 }, 00:06:12.760 { 00:06:12.760 "subsystem": "iscsi", 00:06:12.760 "config": [ 00:06:12.760 { 00:06:12.760 "method": "iscsi_set_options", 00:06:12.760 "params": { 00:06:12.760 "node_base": "iqn.2016-06.io.spdk", 00:06:12.760 "max_sessions": 128, 00:06:12.760 "max_connections_per_session": 2, 00:06:12.760 "max_queue_depth": 64, 00:06:12.760 "default_time2wait": 2, 00:06:12.760 "default_time2retain": 20, 00:06:12.760 "first_burst_length": 8192, 00:06:12.760 "immediate_data": true, 00:06:12.760 "allow_duplicated_isid": false, 00:06:12.760 
"error_recovery_level": 0, 00:06:12.760 "nop_timeout": 60, 00:06:12.760 "nop_in_interval": 30, 00:06:12.760 "disable_chap": false, 00:06:12.760 "require_chap": false, 00:06:12.760 "mutual_chap": false, 00:06:12.760 "chap_group": 0, 00:06:12.760 "max_large_datain_per_connection": 64, 00:06:12.760 "max_r2t_per_connection": 4, 00:06:12.760 "pdu_pool_size": 36864, 00:06:12.760 "immediate_data_pool_size": 16384, 00:06:12.760 "data_out_pool_size": 2048 00:06:12.760 } 00:06:12.760 } 00:06:12.760 ] 00:06:12.760 } 00:06:12.760 ] 00:06:12.760 } 00:06:12.760 15:47:41 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:06:12.760 15:47:41 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 3583379 00:06:12.760 15:47:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@948 -- # '[' -z 3583379 ']' 00:06:12.760 15:47:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # kill -0 3583379 00:06:12.760 15:47:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # uname 00:06:12.760 15:47:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:12.760 15:47:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3583379 00:06:12.760 15:47:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:12.760 15:47:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:12.760 15:47:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3583379' 00:06:12.760 killing process with pid 3583379 00:06:12.760 15:47:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # kill 3583379 00:06:12.760 15:47:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # wait 3583379 00:06:13.016 15:47:41 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=3583617 00:06:13.016 15:47:41 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:06:13.016 15:47:41 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:06:18.272 15:47:46 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 3583617 00:06:18.273 15:47:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@948 -- # '[' -z 3583617 ']' 00:06:18.273 15:47:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # kill -0 3583617 00:06:18.273 15:47:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # uname 00:06:18.273 15:47:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:18.273 15:47:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3583617 00:06:18.273 15:47:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:18.273 15:47:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:18.273 15:47:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3583617' 00:06:18.273 killing process with pid 3583617 00:06:18.273 15:47:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # kill 3583617 00:06:18.273 15:47:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # wait 3583617 
00:06:18.531 15:47:47 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:06:18.531 15:47:47 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:06:18.531 00:06:18.531 real 0m6.715s 00:06:18.531 user 0m6.563s 00:06:18.531 sys 0m0.548s 00:06:18.531 15:47:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:18.531 15:47:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:18.531 ************************************ 00:06:18.531 END TEST skip_rpc_with_json 00:06:18.531 ************************************ 00:06:18.531 15:47:47 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:06:18.531 15:47:47 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:06:18.532 15:47:47 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:18.532 15:47:47 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:18.532 15:47:47 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:18.532 ************************************ 00:06:18.532 START TEST skip_rpc_with_delay 00:06:18.532 ************************************ 00:06:18.532 15:47:47 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1123 -- # test_skip_rpc_with_delay 00:06:18.532 15:47:47 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:18.532 15:47:47 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@648 -- # local es=0 00:06:18.532 15:47:47 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:18.532 15:47:47 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:18.532 15:47:47 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:18.532 15:47:47 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:18.532 15:47:47 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:18.532 15:47:47 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:18.532 15:47:47 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:18.532 15:47:47 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:18.532 15:47:47 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:06:18.532 15:47:47 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:18.532 [2024-07-15 15:47:47.385479] app.c: 832:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
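The *ERROR* line above is the whole point of skip_rpc_with_delay: --wait-for-rpc defers framework initialization until a framework_start_init RPC arrives, so combining it with --no-rpc-server can never make progress and the target must refuse to start rather than hang. Reproduced standalone (sketch, path from this log):

    ./build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc
    echo $?        # non-zero: no RPC server will ever deliver framework_start_init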
00:06:18.532 [2024-07-15 15:47:47.385539] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:06:18.532 15:47:47 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # es=1 00:06:18.532 15:47:47 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:18.532 15:47:47 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:18.532 15:47:47 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:18.532 00:06:18.532 real 0m0.064s 00:06:18.532 user 0m0.040s 00:06:18.532 sys 0m0.023s 00:06:18.532 15:47:47 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:18.532 15:47:47 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:06:18.532 ************************************ 00:06:18.532 END TEST skip_rpc_with_delay 00:06:18.532 ************************************ 00:06:18.532 15:47:47 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:06:18.532 15:47:47 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:06:18.532 15:47:47 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:06:18.532 15:47:47 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:06:18.532 15:47:47 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:18.532 15:47:47 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:18.532 15:47:47 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:18.532 ************************************ 00:06:18.532 START TEST exit_on_failed_rpc_init 00:06:18.532 ************************************ 00:06:18.532 15:47:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1123 -- # test_exit_on_failed_rpc_init 00:06:18.532 15:47:47 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=3584595 00:06:18.532 15:47:47 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 3584595 00:06:18.532 15:47:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@829 -- # '[' -z 3584595 ']' 00:06:18.532 15:47:47 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:18.532 15:47:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:18.532 15:47:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:18.532 15:47:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:18.532 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:18.532 15:47:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:18.532 15:47:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:06:18.791 [2024-07-15 15:47:47.496253] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 
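skip_rpc_with_delay passes precisely because spdk_tgt rejects --wait-for-rpc when no RPC server will be started: the NOT/valid_exec_arg wrapper inverts the exit status, so the *ERROR* line above is the expected outcome. A condensed bash version of that inversion (assert_fails is an illustrative name, not the harness's NOT helper):

# Succeed only if the wrapped command exits non-zero, as NOT does above.
assert_fails() {
    local es=0
    "$@" || es=$?                # capture the status without tripping set -e
    if (( es == 0 )); then
        echo "expected '$*' to fail, but it succeeded" >&2
        return 1
    fi
    return 0                     # command failed, which is what we wanted
}

Example: assert_fails build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc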
00:06:18.791 [2024-07-15 15:47:47.496293] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3584595 ] 00:06:18.791 EAL: No free 2048 kB hugepages reported on node 1 00:06:18.791 [2024-07-15 15:47:47.549380] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:18.791 [2024-07-15 15:47:47.629032] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:19.358 15:47:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:19.358 15:47:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@862 -- # return 0 00:06:19.358 15:47:48 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:19.358 15:47:48 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:06:19.358 15:47:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@648 -- # local es=0 00:06:19.616 15:47:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:06:19.616 15:47:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:19.616 15:47:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:19.616 15:47:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:19.616 15:47:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:19.616 15:47:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:19.616 15:47:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:19.616 15:47:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:19.617 15:47:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:06:19.617 15:47:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:06:19.617 [2024-07-15 15:47:48.344386] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 
00:06:19.617 [2024-07-15 15:47:48.344431] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3584814 ] 00:06:19.617 EAL: No free 2048 kB hugepages reported on node 1 00:06:19.617 [2024-07-15 15:47:48.397829] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:19.617 [2024-07-15 15:47:48.472862] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:19.617 [2024-07-15 15:47:48.472929] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:06:19.617 [2024-07-15 15:47:48.472938] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:06:19.617 [2024-07-15 15:47:48.472943] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:19.617 15:47:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # es=234 00:06:19.617 15:47:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:19.617 15:47:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@660 -- # es=106 00:06:19.617 15:47:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # case "$es" in 00:06:19.617 15:47:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@668 -- # es=1 00:06:19.617 15:47:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:19.617 15:47:48 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:06:19.617 15:47:48 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 3584595 00:06:19.617 15:47:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@948 -- # '[' -z 3584595 ']' 00:06:19.617 15:47:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # kill -0 3584595 00:06:19.875 15:47:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # uname 00:06:19.875 15:47:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:19.875 15:47:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3584595 00:06:19.875 15:47:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:19.875 15:47:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:19.875 15:47:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3584595' 00:06:19.875 killing process with pid 3584595 00:06:19.875 15:47:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@967 -- # kill 3584595 00:06:19.875 15:47:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # wait 3584595 00:06:20.134 00:06:20.134 real 0m1.437s 00:06:20.134 user 0m1.667s 00:06:20.134 sys 0m0.376s 00:06:20.134 15:47:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:20.134 15:47:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:06:20.134 ************************************ 00:06:20.134 END TEST exit_on_failed_rpc_init 00:06:20.134 ************************************ 00:06:20.134 15:47:48 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:06:20.134 15:47:48 skip_rpc -- 
rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:06:20.134 00:06:20.134 real 0m13.926s 00:06:20.134 user 0m13.542s 00:06:20.134 sys 0m1.431s 00:06:20.134 15:47:48 skip_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:20.134 15:47:48 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:20.134 ************************************ 00:06:20.134 END TEST skip_rpc 00:06:20.134 ************************************ 00:06:20.134 15:47:48 -- common/autotest_common.sh@1142 -- # return 0 00:06:20.134 15:47:48 -- spdk/autotest.sh@171 -- # run_test rpc_client /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:06:20.134 15:47:48 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:20.134 15:47:48 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:20.134 15:47:48 -- common/autotest_common.sh@10 -- # set +x 00:06:20.134 ************************************ 00:06:20.134 START TEST rpc_client 00:06:20.134 ************************************ 00:06:20.134 15:47:48 rpc_client -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:06:20.393 * Looking for test storage... 00:06:20.393 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:06:20.393 15:47:49 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:06:20.393 OK 00:06:20.393 15:47:49 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:06:20.393 00:06:20.393 real 0m0.113s 00:06:20.393 user 0m0.050s 00:06:20.393 sys 0m0.069s 00:06:20.393 15:47:49 rpc_client -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:20.393 15:47:49 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:06:20.393 ************************************ 00:06:20.393 END TEST rpc_client 00:06:20.393 ************************************ 00:06:20.393 15:47:49 -- common/autotest_common.sh@1142 -- # return 0 00:06:20.393 15:47:49 -- spdk/autotest.sh@172 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:06:20.393 15:47:49 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:20.393 15:47:49 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:20.393 15:47:49 -- common/autotest_common.sh@10 -- # set +x 00:06:20.393 ************************************ 00:06:20.393 START TEST json_config 00:06:20.393 ************************************ 00:06:20.393 15:47:49 json_config -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:06:20.393 15:47:49 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:20.393 15:47:49 json_config -- nvmf/common.sh@7 -- # uname -s 00:06:20.393 15:47:49 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:20.393 15:47:49 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:20.393 15:47:49 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:20.393 15:47:49 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:20.393 15:47:49 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:20.393 15:47:49 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:20.393 15:47:49 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:20.393 
15:47:49 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:20.393 15:47:49 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:20.393 15:47:49 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:20.393 15:47:49 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:06:20.393 15:47:49 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:06:20.393 15:47:49 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:20.393 15:47:49 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:20.393 15:47:49 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:20.393 15:47:49 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:20.393 15:47:49 json_config -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:20.393 15:47:49 json_config -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:20.393 15:47:49 json_config -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:20.393 15:47:49 json_config -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:20.393 15:47:49 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:20.393 15:47:49 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:20.393 15:47:49 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:20.393 15:47:49 json_config -- paths/export.sh@5 -- # export PATH 00:06:20.393 15:47:49 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:20.393 15:47:49 json_config -- nvmf/common.sh@47 -- # : 0 00:06:20.393 15:47:49 json_config -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:20.393 15:47:49 json_config -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:20.393 15:47:49 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:20.393 15:47:49 
json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:20.393 15:47:49 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:20.393 15:47:49 json_config -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:20.393 15:47:49 json_config -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:20.393 15:47:49 json_config -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:20.393 15:47:49 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:06:20.393 15:47:49 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:06:20.393 15:47:49 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:06:20.393 15:47:49 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:06:20.393 15:47:49 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:06:20.393 15:47:49 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:06:20.393 15:47:49 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:06:20.393 15:47:49 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:06:20.393 15:47:49 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:06:20.393 15:47:49 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:06:20.393 15:47:49 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:06:20.393 15:47:49 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:06:20.393 15:47:49 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:06:20.393 15:47:49 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:06:20.393 15:47:49 json_config -- json_config/json_config.sh@355 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:06:20.393 15:47:49 json_config -- json_config/json_config.sh@356 -- # echo 'INFO: JSON configuration test init' 00:06:20.393 INFO: JSON configuration test init 00:06:20.393 15:47:49 json_config -- json_config/json_config.sh@357 -- # json_config_test_init 00:06:20.393 15:47:49 json_config -- json_config/json_config.sh@262 -- # timing_enter json_config_test_init 00:06:20.393 15:47:49 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:20.393 15:47:49 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:20.393 15:47:49 json_config -- json_config/json_config.sh@263 -- # timing_enter json_config_setup_target 00:06:20.393 15:47:49 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:20.393 15:47:49 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:20.393 15:47:49 json_config -- json_config/json_config.sh@265 -- # json_config_test_start_app target --wait-for-rpc 00:06:20.393 15:47:49 json_config -- json_config/common.sh@9 -- # local app=target 00:06:20.393 15:47:49 json_config -- json_config/common.sh@10 -- # shift 00:06:20.393 15:47:49 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:20.393 15:47:49 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:20.393 15:47:49 
json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:06:20.393 15:47:49 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:20.393 15:47:49 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:20.393 15:47:49 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=3584944 00:06:20.393 15:47:49 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:20.393 Waiting for target to run... 00:06:20.393 15:47:49 json_config -- json_config/common.sh@25 -- # waitforlisten 3584944 /var/tmp/spdk_tgt.sock 00:06:20.393 15:47:49 json_config -- common/autotest_common.sh@829 -- # '[' -z 3584944 ']' 00:06:20.393 15:47:49 json_config -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:20.393 15:47:49 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:06:20.393 15:47:49 json_config -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:20.394 15:47:49 json_config -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:20.394 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:20.394 15:47:49 json_config -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:20.394 15:47:49 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:20.652 [2024-07-15 15:47:49.330193] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 00:06:20.652 [2024-07-15 15:47:49.330269] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3584944 ] 00:06:20.652 EAL: No free 2048 kB hugepages reported on node 1 00:06:20.910 [2024-07-15 15:47:49.609251] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:20.910 [2024-07-15 15:47:49.683112] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:21.476 15:47:50 json_config -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:21.476 15:47:50 json_config -- common/autotest_common.sh@862 -- # return 0 00:06:21.476 15:47:50 json_config -- json_config/common.sh@26 -- # echo '' 00:06:21.476 00:06:21.476 15:47:50 json_config -- json_config/json_config.sh@269 -- # create_accel_config 00:06:21.476 15:47:50 json_config -- json_config/json_config.sh@93 -- # timing_enter create_accel_config 00:06:21.476 15:47:50 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:21.476 15:47:50 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:21.476 15:47:50 json_config -- json_config/json_config.sh@95 -- # [[ 0 -eq 1 ]] 00:06:21.476 15:47:50 json_config -- json_config/json_config.sh@101 -- # timing_exit create_accel_config 00:06:21.476 15:47:50 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:21.476 15:47:50 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:21.476 15:47:50 json_config -- json_config/json_config.sh@273 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:06:21.476 15:47:50 json_config -- json_config/json_config.sh@274 -- # tgt_rpc load_config 00:06:21.476 15:47:50 json_config -- json_config/common.sh@57 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:06:24.817 15:47:53 json_config -- json_config/json_config.sh@276 -- # tgt_check_notification_types 00:06:24.817 15:47:53 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:06:24.817 15:47:53 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:24.817 15:47:53 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:24.817 15:47:53 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:06:24.817 15:47:53 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:06:24.817 15:47:53 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:06:24.817 15:47:53 json_config -- json_config/json_config.sh@48 -- # tgt_rpc notify_get_types 00:06:24.817 15:47:53 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:06:24.817 15:47:53 json_config -- json_config/json_config.sh@48 -- # jq -r '.[]' 00:06:24.817 15:47:53 json_config -- json_config/json_config.sh@48 -- # get_types=('bdev_register' 'bdev_unregister') 00:06:24.817 15:47:53 json_config -- json_config/json_config.sh@48 -- # local get_types 00:06:24.817 15:47:53 json_config -- json_config/json_config.sh@49 -- # [[ bdev_register bdev_unregister != \b\d\e\v\_\r\e\g\i\s\t\e\r\ \b\d\e\v\_\u\n\r\e\g\i\s\t\e\r ]] 00:06:24.817 15:47:53 json_config -- json_config/json_config.sh@54 -- # timing_exit tgt_check_notification_types 00:06:24.817 15:47:53 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:24.817 15:47:53 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:24.817 15:47:53 json_config -- json_config/json_config.sh@55 -- # return 0 00:06:24.817 15:47:53 json_config -- json_config/json_config.sh@278 -- # [[ 0 -eq 1 ]] 00:06:24.817 15:47:53 json_config -- json_config/json_config.sh@282 -- # [[ 0 -eq 1 ]] 00:06:24.817 15:47:53 json_config -- json_config/json_config.sh@286 -- # [[ 0 -eq 1 ]] 00:06:24.817 15:47:53 json_config -- json_config/json_config.sh@290 -- # [[ 1 -eq 1 ]] 00:06:24.817 15:47:53 json_config -- json_config/json_config.sh@291 -- # create_nvmf_subsystem_config 00:06:24.817 15:47:53 json_config -- json_config/json_config.sh@230 -- # timing_enter create_nvmf_subsystem_config 00:06:24.817 15:47:53 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:24.817 15:47:53 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:24.817 15:47:53 json_config -- json_config/json_config.sh@232 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:06:24.817 15:47:53 json_config -- json_config/json_config.sh@233 -- # [[ tcp == \r\d\m\a ]] 00:06:24.817 15:47:53 json_config -- json_config/json_config.sh@237 -- # [[ -z 127.0.0.1 ]] 00:06:24.817 15:47:53 json_config -- json_config/json_config.sh@242 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:06:24.817 15:47:53 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:06:24.817 MallocForNvmf0 00:06:24.817 15:47:53 json_config -- json_config/json_config.sh@243 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:06:24.817 15:47:53 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock 
bdev_malloc_create 4 1024 --name MallocForNvmf1 00:06:25.075 MallocForNvmf1 00:06:25.075 15:47:53 json_config -- json_config/json_config.sh@245 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:06:25.075 15:47:53 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:06:25.075 [2024-07-15 15:47:53.941107] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:25.075 15:47:53 json_config -- json_config/json_config.sh@246 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:06:25.075 15:47:53 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:06:25.334 15:47:54 json_config -- json_config/json_config.sh@247 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:06:25.334 15:47:54 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:06:25.592 15:47:54 json_config -- json_config/json_config.sh@248 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:06:25.592 15:47:54 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:06:25.592 15:47:54 json_config -- json_config/json_config.sh@249 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:06:25.592 15:47:54 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:06:25.850 [2024-07-15 15:47:54.615230] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:06:25.850 15:47:54 json_config -- json_config/json_config.sh@251 -- # timing_exit create_nvmf_subsystem_config 00:06:25.850 15:47:54 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:25.850 15:47:54 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:25.850 15:47:54 json_config -- json_config/json_config.sh@293 -- # timing_exit json_config_setup_target 00:06:25.850 15:47:54 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:25.850 15:47:54 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:25.850 15:47:54 json_config -- json_config/json_config.sh@295 -- # [[ 0 -eq 1 ]] 00:06:25.850 15:47:54 json_config -- json_config/json_config.sh@300 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:06:25.850 15:47:54 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:06:26.108 MallocBdevForConfigChangeCheck 00:06:26.109 15:47:54 json_config -- json_config/json_config.sh@302 -- # timing_exit json_config_test_init 00:06:26.109 15:47:54 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:26.109 15:47:54 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:26.109 15:47:54 json_config -- 
json_config/json_config.sh@359 -- # tgt_rpc save_config 00:06:26.109 15:47:54 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:26.367 15:47:55 json_config -- json_config/json_config.sh@361 -- # echo 'INFO: shutting down applications...' 00:06:26.367 INFO: shutting down applications... 00:06:26.367 15:47:55 json_config -- json_config/json_config.sh@362 -- # [[ 0 -eq 1 ]] 00:06:26.367 15:47:55 json_config -- json_config/json_config.sh@368 -- # json_config_clear target 00:06:26.367 15:47:55 json_config -- json_config/json_config.sh@332 -- # [[ -n 22 ]] 00:06:26.367 15:47:55 json_config -- json_config/json_config.sh@333 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:06:28.269 Calling clear_iscsi_subsystem 00:06:28.269 Calling clear_nvmf_subsystem 00:06:28.269 Calling clear_nbd_subsystem 00:06:28.269 Calling clear_ublk_subsystem 00:06:28.269 Calling clear_vhost_blk_subsystem 00:06:28.269 Calling clear_vhost_scsi_subsystem 00:06:28.269 Calling clear_bdev_subsystem 00:06:28.269 15:47:56 json_config -- json_config/json_config.sh@337 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:06:28.269 15:47:56 json_config -- json_config/json_config.sh@343 -- # count=100 00:06:28.269 15:47:56 json_config -- json_config/json_config.sh@344 -- # '[' 100 -gt 0 ']' 00:06:28.269 15:47:56 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:06:28.269 15:47:56 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:28.269 15:47:56 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:06:28.269 15:47:57 json_config -- json_config/json_config.sh@345 -- # break 00:06:28.269 15:47:57 json_config -- json_config/json_config.sh@350 -- # '[' 100 -eq 0 ']' 00:06:28.269 15:47:57 json_config -- json_config/json_config.sh@369 -- # json_config_test_shutdown_app target 00:06:28.269 15:47:57 json_config -- json_config/common.sh@31 -- # local app=target 00:06:28.269 15:47:57 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:06:28.269 15:47:57 json_config -- json_config/common.sh@35 -- # [[ -n 3584944 ]] 00:06:28.269 15:47:57 json_config -- json_config/common.sh@38 -- # kill -SIGINT 3584944 00:06:28.269 15:47:57 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:06:28.269 15:47:57 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:28.269 15:47:57 json_config -- json_config/common.sh@41 -- # kill -0 3584944 00:06:28.269 15:47:57 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:06:28.837 15:47:57 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:06:28.837 15:47:57 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:28.837 15:47:57 json_config -- json_config/common.sh@41 -- # kill -0 3584944 00:06:28.837 15:47:57 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:06:28.837 15:47:57 json_config -- json_config/common.sh@43 -- # break 00:06:28.837 15:47:57 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:06:28.837 15:47:57 json_config -- json_config/common.sh@53 -- # echo 'SPDK target 
shutdown done' 00:06:28.837 SPDK target shutdown done 00:06:28.837 15:47:57 json_config -- json_config/json_config.sh@371 -- # echo 'INFO: relaunching applications...' 00:06:28.837 INFO: relaunching applications... 00:06:28.837 15:47:57 json_config -- json_config/json_config.sh@372 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:28.837 15:47:57 json_config -- json_config/common.sh@9 -- # local app=target 00:06:28.837 15:47:57 json_config -- json_config/common.sh@10 -- # shift 00:06:28.837 15:47:57 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:28.837 15:47:57 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:28.837 15:47:57 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:06:28.837 15:47:57 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:28.837 15:47:57 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:28.837 15:47:57 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=3586544 00:06:28.837 15:47:57 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:28.837 Waiting for target to run... 00:06:28.837 15:47:57 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:28.837 15:47:57 json_config -- json_config/common.sh@25 -- # waitforlisten 3586544 /var/tmp/spdk_tgt.sock 00:06:28.837 15:47:57 json_config -- common/autotest_common.sh@829 -- # '[' -z 3586544 ']' 00:06:28.837 15:47:57 json_config -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:28.837 15:47:57 json_config -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:28.837 15:47:57 json_config -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:28.837 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:28.837 15:47:57 json_config -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:28.837 15:47:57 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:28.837 [2024-07-15 15:47:57.682668] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 
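For context on what this relaunch must reproduce: the target's entire NVMe-oF/TCP state was created over the RPC socket earlier in the run and then written out with save_config. The same provisioning as a stand-alone script, assuming a running spdk_tgt on /var/tmp/spdk_tgt.sock and an SPDK checkout for the rpc.py path (the path is the assumption; the calls themselves are the ones logged above):

#!/usr/bin/env bash
# Rebuild the json_config test state: two malloc bdevs, a TCP transport,
# one subsystem carrying both namespaces, and a listener on 127.0.0.1:4420.
set -e
rpc="scripts/rpc.py -s /var/tmp/spdk_tgt.sock"    # adjust to your checkout
$rpc bdev_malloc_create 8 512 --name MallocForNvmf0
$rpc bdev_malloc_create 4 1024 --name MallocForNvmf1
$rpc nvmf_create_transport -t tcp -u 8192 -c 0
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420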
00:06:28.837 [2024-07-15 15:47:57.682731] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3586544 ] 00:06:28.837 EAL: No free 2048 kB hugepages reported on node 1 00:06:29.404 [2024-07-15 15:47:58.120478] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:29.404 [2024-07-15 15:47:58.206371] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:32.686 [2024-07-15 15:48:01.215605] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:32.687 [2024-07-15 15:48:01.247925] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:06:32.944 15:48:01 json_config -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:32.944 15:48:01 json_config -- common/autotest_common.sh@862 -- # return 0 00:06:32.944 15:48:01 json_config -- json_config/common.sh@26 -- # echo '' 00:06:32.944 00:06:32.944 15:48:01 json_config -- json_config/json_config.sh@373 -- # [[ 0 -eq 1 ]] 00:06:32.944 15:48:01 json_config -- json_config/json_config.sh@377 -- # echo 'INFO: Checking if target configuration is the same...' 00:06:32.944 INFO: Checking if target configuration is the same... 00:06:32.944 15:48:01 json_config -- json_config/json_config.sh@378 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:32.944 15:48:01 json_config -- json_config/json_config.sh@378 -- # tgt_rpc save_config 00:06:32.944 15:48:01 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:32.944 + '[' 2 -ne 2 ']' 00:06:32.944 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:06:32.944 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:06:32.944 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:06:32.944 +++ basename /dev/fd/62 00:06:32.944 ++ mktemp /tmp/62.XXX 00:06:32.944 + tmp_file_1=/tmp/62.HPP 00:06:32.944 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:32.944 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:06:32.944 + tmp_file_2=/tmp/spdk_tgt_config.json.QEt 00:06:32.944 + ret=0 00:06:32.944 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:33.511 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:33.511 + diff -u /tmp/62.HPP /tmp/spdk_tgt_config.json.QEt 00:06:33.511 + echo 'INFO: JSON config files are the same' 00:06:33.511 INFO: JSON config files are the same 00:06:33.511 + rm /tmp/62.HPP /tmp/spdk_tgt_config.json.QEt 00:06:33.511 + exit 0 00:06:33.511 15:48:02 json_config -- json_config/json_config.sh@379 -- # [[ 0 -eq 1 ]] 00:06:33.511 15:48:02 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:06:33.511 INFO: changing configuration and checking if this can be detected... 
00:06:33.511 15:48:02 json_config -- json_config/json_config.sh@386 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:06:33.511 15:48:02 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:06:33.511 15:48:02 json_config -- json_config/json_config.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:33.511 15:48:02 json_config -- json_config/json_config.sh@387 -- # tgt_rpc save_config 00:06:33.511 15:48:02 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:33.511 + '[' 2 -ne 2 ']' 00:06:33.511 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:06:33.511 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:06:33.511 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:06:33.511 +++ basename /dev/fd/62 00:06:33.511 ++ mktemp /tmp/62.XXX 00:06:33.511 + tmp_file_1=/tmp/62.mb3 00:06:33.511 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:33.511 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:06:33.511 + tmp_file_2=/tmp/spdk_tgt_config.json.ZPL 00:06:33.511 + ret=0 00:06:33.511 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:33.769 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:34.028 + diff -u /tmp/62.mb3 /tmp/spdk_tgt_config.json.ZPL 00:06:34.028 + ret=1 00:06:34.028 + echo '=== Start of file: /tmp/62.mb3 ===' 00:06:34.028 + cat /tmp/62.mb3 00:06:34.028 + echo '=== End of file: /tmp/62.mb3 ===' 00:06:34.028 + echo '' 00:06:34.028 + echo '=== Start of file: /tmp/spdk_tgt_config.json.ZPL ===' 00:06:34.028 + cat /tmp/spdk_tgt_config.json.ZPL 00:06:34.028 + echo '=== End of file: /tmp/spdk_tgt_config.json.ZPL ===' 00:06:34.028 + echo '' 00:06:34.028 + rm /tmp/62.mb3 /tmp/spdk_tgt_config.json.ZPL 00:06:34.028 + exit 1 00:06:34.028 15:48:02 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: configuration change detected.' 00:06:34.028 INFO: configuration change detected. 
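The detection above is just snapshot, normalize, diff: save_config is pulled from the live target, both sides are key-sorted by config_filter.py so ordering differences do not count, and a non-empty unified diff flips ret to 1. The same loop condensed, assuming a running target on /var/tmp/spdk_tgt.sock and SPDK's test/json_config/config_filter.py (temp-file handling here is illustrative):

# Compare the live configuration against a reference JSON, order-insensitively.
rpc="scripts/rpc.py -s /var/tmp/spdk_tgt.sock"
filter=test/json_config/config_filter.py
live=$(mktemp)
ref=$(mktemp)

$rpc save_config | $filter -method sort > "$live"
$filter -method sort < spdk_tgt_config.json > "$ref"

if diff -u "$ref" "$live"; then
    echo 'INFO: JSON config files are the same'
else
    echo 'INFO: configuration change detected.'
fi
rm -f "$live" "$ref"

Sorting both sides first is what lets the test tolerate RPC-ordering churn while still catching the deleted MallocBdevForConfigChangeCheck bdev.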
00:06:34.028 15:48:02 json_config -- json_config/json_config.sh@394 -- # json_config_test_fini 00:06:34.028 15:48:02 json_config -- json_config/json_config.sh@306 -- # timing_enter json_config_test_fini 00:06:34.028 15:48:02 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:34.028 15:48:02 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:34.028 15:48:02 json_config -- json_config/json_config.sh@307 -- # local ret=0 00:06:34.028 15:48:02 json_config -- json_config/json_config.sh@309 -- # [[ -n '' ]] 00:06:34.028 15:48:02 json_config -- json_config/json_config.sh@317 -- # [[ -n 3586544 ]] 00:06:34.028 15:48:02 json_config -- json_config/json_config.sh@320 -- # cleanup_bdev_subsystem_config 00:06:34.028 15:48:02 json_config -- json_config/json_config.sh@184 -- # timing_enter cleanup_bdev_subsystem_config 00:06:34.028 15:48:02 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:34.028 15:48:02 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:34.028 15:48:02 json_config -- json_config/json_config.sh@186 -- # [[ 0 -eq 1 ]] 00:06:34.028 15:48:02 json_config -- json_config/json_config.sh@193 -- # uname -s 00:06:34.028 15:48:02 json_config -- json_config/json_config.sh@193 -- # [[ Linux = Linux ]] 00:06:34.028 15:48:02 json_config -- json_config/json_config.sh@194 -- # rm -f /sample_aio 00:06:34.028 15:48:02 json_config -- json_config/json_config.sh@197 -- # [[ 0 -eq 1 ]] 00:06:34.028 15:48:02 json_config -- json_config/json_config.sh@201 -- # timing_exit cleanup_bdev_subsystem_config 00:06:34.028 15:48:02 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:34.028 15:48:02 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:34.028 15:48:02 json_config -- json_config/json_config.sh@323 -- # killprocess 3586544 00:06:34.028 15:48:02 json_config -- common/autotest_common.sh@948 -- # '[' -z 3586544 ']' 00:06:34.028 15:48:02 json_config -- common/autotest_common.sh@952 -- # kill -0 3586544 00:06:34.028 15:48:02 json_config -- common/autotest_common.sh@953 -- # uname 00:06:34.028 15:48:02 json_config -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:34.028 15:48:02 json_config -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3586544 00:06:34.028 15:48:02 json_config -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:34.028 15:48:02 json_config -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:34.028 15:48:02 json_config -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3586544' 00:06:34.028 killing process with pid 3586544 00:06:34.028 15:48:02 json_config -- common/autotest_common.sh@967 -- # kill 3586544 00:06:34.028 15:48:02 json_config -- common/autotest_common.sh@972 -- # wait 3586544 00:06:35.929 15:48:04 json_config -- json_config/json_config.sh@326 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:35.929 15:48:04 json_config -- json_config/json_config.sh@327 -- # timing_exit json_config_test_fini 00:06:35.929 15:48:04 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:35.929 15:48:04 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:35.929 15:48:04 json_config -- json_config/json_config.sh@328 -- # return 0 00:06:35.929 15:48:04 json_config -- json_config/json_config.sh@396 -- # echo 'INFO: Success' 00:06:35.929 INFO: Success 00:06:35.929 00:06:35.929 real 0m15.207s 
00:06:35.929 user 0m15.955s 00:06:35.929 sys 0m1.853s 00:06:35.929 15:48:04 json_config -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:35.929 15:48:04 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:35.929 ************************************ 00:06:35.929 END TEST json_config 00:06:35.929 ************************************ 00:06:35.929 15:48:04 -- common/autotest_common.sh@1142 -- # return 0 00:06:35.929 15:48:04 -- spdk/autotest.sh@173 -- # run_test json_config_extra_key /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:06:35.929 15:48:04 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:35.929 15:48:04 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:35.929 15:48:04 -- common/autotest_common.sh@10 -- # set +x 00:06:35.929 ************************************ 00:06:35.929 START TEST json_config_extra_key 00:06:35.929 ************************************ 00:06:35.929 15:48:04 json_config_extra_key -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:06:35.929 15:48:04 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:35.929 15:48:04 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:06:35.929 15:48:04 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:35.929 15:48:04 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:35.929 15:48:04 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:35.929 15:48:04 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:35.929 15:48:04 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:35.929 15:48:04 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:35.929 15:48:04 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:35.929 15:48:04 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:35.929 15:48:04 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:35.929 15:48:04 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:35.929 15:48:04 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:06:35.929 15:48:04 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:06:35.929 15:48:04 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:35.929 15:48:04 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:35.929 15:48:04 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:35.929 15:48:04 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:35.929 15:48:04 json_config_extra_key -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:35.929 15:48:04 json_config_extra_key -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:35.929 15:48:04 json_config_extra_key -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:35.929 15:48:04 json_config_extra_key -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:35.930 15:48:04 json_config_extra_key -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:35.930 15:48:04 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:35.930 15:48:04 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:35.930 15:48:04 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:06:35.930 15:48:04 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:35.930 15:48:04 json_config_extra_key -- nvmf/common.sh@47 -- # : 0 00:06:35.930 15:48:04 json_config_extra_key -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:35.930 15:48:04 json_config_extra_key -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:35.930 15:48:04 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:35.930 15:48:04 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:35.930 15:48:04 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:35.930 15:48:04 json_config_extra_key -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:35.930 15:48:04 json_config_extra_key -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:35.930 15:48:04 json_config_extra_key -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:35.930 15:48:04 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:06:35.930 15:48:04 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:06:35.930 15:48:04 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:06:35.930 15:48:04 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:06:35.930 15:48:04 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:06:35.930 15:48:04 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:06:35.930 15:48:04 json_config_extra_key -- 
json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:06:35.930 15:48:04 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:06:35.930 15:48:04 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:06:35.930 15:48:04 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:06:35.930 15:48:04 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:06:35.930 INFO: launching applications... 00:06:35.930 15:48:04 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:06:35.930 15:48:04 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:06:35.930 15:48:04 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:06:35.930 15:48:04 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:35.930 15:48:04 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:35.930 15:48:04 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:06:35.930 15:48:04 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:35.930 15:48:04 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:35.930 15:48:04 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=3588001 00:06:35.930 15:48:04 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:35.930 Waiting for target to run... 00:06:35.930 15:48:04 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 3588001 /var/tmp/spdk_tgt.sock 00:06:35.930 15:48:04 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:06:35.930 15:48:04 json_config_extra_key -- common/autotest_common.sh@829 -- # '[' -z 3588001 ']' 00:06:35.930 15:48:04 json_config_extra_key -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:35.930 15:48:04 json_config_extra_key -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:35.930 15:48:04 json_config_extra_key -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:35.930 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:35.930 15:48:04 json_config_extra_key -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:35.930 15:48:04 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:06:35.930 [2024-07-15 15:48:04.598385] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 
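The "Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock..." message comes from a poll loop that only returns once the app's RPC socket actually answers. A minimal version of that wait, probing with the rpc_get_methods call (wait_for_rpc_sock is an illustrative name; the real waitforlisten also tracks PID liveness and a retry budget):

# Poll until an SPDK app answers RPC on $sock, or time out.
wait_for_rpc_sock() {
    local sock=${1:-/var/tmp/spdk_tgt.sock} tries=${2:-100} i
    for ((i = 0; i < tries; i++)); do
        if scripts/rpc.py -s "$sock" rpc_get_methods &> /dev/null; then
            return 0                      # server is up and responding
        fi
        sleep 0.1
    done
    echo "timed out waiting for $sock" >&2
    return 1
}

Probing with a real RPC rather than testing for the socket file avoids the race where the file exists but the server is not yet accepting requests.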
00:06:35.930 [2024-07-15 15:48:04.598435] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3588001 ]
00:06:35.930 EAL: No free 2048 kB hugepages reported on node 1
00:06:36.188 [2024-07-15 15:48:05.031026] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:36.188 [2024-07-15 15:48:05.120375] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:06:36.755 15:48:05 json_config_extra_key -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:06:36.755 15:48:05 json_config_extra_key -- common/autotest_common.sh@862 -- # return 0
00:06:36.755 15:48:05 json_config_extra_key -- json_config/common.sh@26 -- # echo ''
00:06:36.755
00:06:36.755 15:48:05 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...'
00:06:36.755 INFO: shutting down applications...
00:06:36.755 15:48:05 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target
00:06:36.755 15:48:05 json_config_extra_key -- json_config/common.sh@31 -- # local app=target
00:06:36.755 15:48:05 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]]
00:06:36.755 15:48:05 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 3588001 ]]
00:06:36.755 15:48:05 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 3588001
00:06:36.755 15:48:05 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 ))
00:06:36.755 15:48:05 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 ))
00:06:36.755 15:48:05 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 3588001
00:06:36.755 15:48:05 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5
00:06:37.014 15:48:05 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ ))
00:06:37.014 15:48:05 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 ))
00:06:37.014 15:48:05 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 3588001
00:06:37.014 15:48:05 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]=
00:06:37.014 15:48:05 json_config_extra_key -- json_config/common.sh@43 -- # break
00:06:37.014 15:48:05 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]]
00:06:37.014 15:48:05 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done'
00:06:37.014 SPDK target shutdown done
00:06:37.014 15:48:05 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success
00:06:37.014 Success
00:06:37.014
00:06:37.014 real 0m1.448s
00:06:37.014 user 0m1.078s
00:06:37.014 sys 0m0.524s
00:06:37.014 15:48:05 json_config_extra_key -- common/autotest_common.sh@1124 -- # xtrace_disable
00:06:37.014 15:48:05 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x
00:06:37.014 ************************************
00:06:37.014 END TEST json_config_extra_key
00:06:37.014 ************************************
00:06:37.014 15:48:05 -- common/autotest_common.sh@1142 -- # return 0
00:06:37.014 15:48:05 -- spdk/autotest.sh@174 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh
00:06:37.014 15:48:05 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:06:37.014 15:48:05 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:06:37.014 15:48:05 -- common/autotest_common.sh@10 -- # set +x
00:06:37.272 ************************************
00:06:37.272 START TEST alias_rpc
00:06:37.272 ************************************
00:06:37.272 15:48:05 alias_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh
00:06:37.272 * Looking for test storage...
00:06:37.272 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc
00:06:37.272 15:48:06 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR
00:06:37.272 15:48:06 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=3588337
00:06:37.272 15:48:06 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 3588337
00:06:37.272 15:48:06 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt
00:06:37.272 15:48:06 alias_rpc -- common/autotest_common.sh@829 -- # '[' -z 3588337 ']'
00:06:37.272 15:48:06 alias_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:37.272 15:48:06 alias_rpc -- common/autotest_common.sh@834 -- # local max_retries=100
00:06:37.272 15:48:06 alias_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:06:37.272 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:06:37.272 15:48:06 alias_rpc -- common/autotest_common.sh@838 -- # xtrace_disable
00:06:37.272 15:48:06 alias_rpc -- common/autotest_common.sh@10 -- # set +x
00:06:37.272 [2024-07-15 15:48:06.112957] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization...
00:06:37.272 [2024-07-15 15:48:06.113011] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3588337 ]
00:06:37.272 EAL: No free 2048 kB hugepages reported on node 1
00:06:37.272 [2024-07-15 15:48:06.167875] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:37.530 [2024-07-15 15:48:06.249051] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:06:38.095 15:48:06 alias_rpc -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:06:38.095 15:48:06 alias_rpc -- common/autotest_common.sh@862 -- # return 0
00:06:38.095 15:48:06 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i
00:06:38.353 15:48:07 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 3588337
00:06:38.353 15:48:07 alias_rpc -- common/autotest_common.sh@948 -- # '[' -z 3588337 ']'
00:06:38.353 15:48:07 alias_rpc -- common/autotest_common.sh@952 -- # kill -0 3588337
00:06:38.353 15:48:07 alias_rpc -- common/autotest_common.sh@953 -- # uname
00:06:38.353 15:48:07 alias_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:06:38.353 15:48:07 alias_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3588337
00:06:38.353 15:48:07 alias_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:06:38.353 15:48:07 alias_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:06:38.353 15:48:07 alias_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3588337'
00:06:38.353 killing process with pid 3588337
00:06:38.353 15:48:07 alias_rpc -- common/autotest_common.sh@967 -- # kill 3588337
00:06:38.353 15:48:07 alias_rpc -- common/autotest_common.sh@972 -- # wait 3588337
00:06:38.611
00:06:38.611 real 0m1.474s
00:06:38.611 user 0m1.618s
00:06:38.611 sys 0m0.383s
00:06:38.611 15:48:07 alias_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable
00:06:38.611 15:48:07 alias_rpc -- common/autotest_common.sh@10 -- # set +x
00:06:38.611 ************************************
00:06:38.611 END TEST alias_rpc
00:06:38.611 ************************************
00:06:38.611 15:48:07 -- common/autotest_common.sh@1142 -- # return 0
00:06:38.611 15:48:07 -- spdk/autotest.sh@176 -- # [[ 0 -eq 0 ]]
00:06:38.611 15:48:07 -- spdk/autotest.sh@177 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh
00:06:38.611 15:48:07 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:06:38.611 15:48:07 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:06:38.611 15:48:07 -- common/autotest_common.sh@10 -- # set +x
00:06:38.611 ************************************
00:06:38.611 START TEST spdkcli_tcp
00:06:38.611 ************************************
00:06:38.611 15:48:07 spdkcli_tcp -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh
00:06:38.870 * Looking for test storage...
00:06:38.870 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli
00:06:38.870 15:48:07 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh
00:06:38.870 15:48:07 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py
00:06:38.870 15:48:07 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py
00:06:38.870 15:48:07 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1
00:06:38.870 15:48:07 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998
00:06:38.870 15:48:07 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT
00:06:38.870 15:48:07 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp
00:06:38.870 15:48:07 spdkcli_tcp -- common/autotest_common.sh@722 -- # xtrace_disable
00:06:38.870 15:48:07 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x
00:06:38.870 15:48:07 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=3588625
00:06:38.870 15:48:07 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 3588625
00:06:38.870 15:48:07 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0
00:06:38.870 15:48:07 spdkcli_tcp -- common/autotest_common.sh@829 -- # '[' -z 3588625 ']'
00:06:38.870 15:48:07 spdkcli_tcp -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:38.870 15:48:07 spdkcli_tcp -- common/autotest_common.sh@834 -- # local max_retries=100
00:06:38.870 15:48:07 spdkcli_tcp -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:06:38.870 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:06:38.870 15:48:07 spdkcli_tcp -- common/autotest_common.sh@838 -- # xtrace_disable
00:06:38.870 15:48:07 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x
00:06:38.870 [2024-07-15 15:48:07.651890] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization...
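The spdkcli_tcp suite that starts here checks the same JSON-RPC service over TCP: a socat process bridges TCP port 9998 to the target's UNIX socket, and rpc.py connects to the TCP side. A condensed sketch of that bridge, using the exact values from the trace below (-r sets connection retries, -t the per-call timeout in seconds):

  socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock &
  socat_pid=$!
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
      -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods
  kill "$socat_pid"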
00:06:38.870 [2024-07-15 15:48:07.651941] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3588625 ]
00:06:38.870 EAL: No free 2048 kB hugepages reported on node 1
00:06:38.870 [2024-07-15 15:48:07.706787] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2
00:06:38.870 [2024-07-15 15:48:07.782544] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:06:38.870 [2024-07-15 15:48:07.782547] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:06:39.805 15:48:08 spdkcli_tcp -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:06:39.805 15:48:08 spdkcli_tcp -- common/autotest_common.sh@862 -- # return 0
00:06:39.805 15:48:08 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=3588833
00:06:39.805 15:48:08 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods
00:06:39.805 15:48:08 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock
00:06:39.805 [
00:06:39.805 "bdev_malloc_delete",
00:06:39.805 "bdev_malloc_create",
00:06:39.805 "bdev_null_resize",
00:06:39.805 "bdev_null_delete",
00:06:39.805 "bdev_null_create",
00:06:39.805 "bdev_nvme_cuse_unregister",
00:06:39.805 "bdev_nvme_cuse_register",
00:06:39.805 "bdev_opal_new_user",
00:06:39.805 "bdev_opal_set_lock_state",
00:06:39.805 "bdev_opal_delete",
00:06:39.805 "bdev_opal_get_info",
00:06:39.805 "bdev_opal_create",
00:06:39.805 "bdev_nvme_opal_revert",
00:06:39.805 "bdev_nvme_opal_init",
00:06:39.805 "bdev_nvme_send_cmd",
00:06:39.805 "bdev_nvme_get_path_iostat",
00:06:39.805 "bdev_nvme_get_mdns_discovery_info",
00:06:39.805 "bdev_nvme_stop_mdns_discovery",
00:06:39.805 "bdev_nvme_start_mdns_discovery",
00:06:39.805 "bdev_nvme_set_multipath_policy",
00:06:39.805 "bdev_nvme_set_preferred_path",
00:06:39.805 "bdev_nvme_get_io_paths",
00:06:39.805 "bdev_nvme_remove_error_injection",
00:06:39.805 "bdev_nvme_add_error_injection",
00:06:39.805 "bdev_nvme_get_discovery_info",
00:06:39.805 "bdev_nvme_stop_discovery",
00:06:39.805 "bdev_nvme_start_discovery",
00:06:39.805 "bdev_nvme_get_controller_health_info",
00:06:39.805 "bdev_nvme_disable_controller",
00:06:39.805 "bdev_nvme_enable_controller",
00:06:39.805 "bdev_nvme_reset_controller",
00:06:39.805 "bdev_nvme_get_transport_statistics",
00:06:39.805 "bdev_nvme_apply_firmware",
00:06:39.805 "bdev_nvme_detach_controller",
00:06:39.805 "bdev_nvme_get_controllers",
00:06:39.805 "bdev_nvme_attach_controller",
00:06:39.805 "bdev_nvme_set_hotplug",
00:06:39.805 "bdev_nvme_set_options",
00:06:39.805 "bdev_passthru_delete",
00:06:39.805 "bdev_passthru_create",
00:06:39.805 "bdev_lvol_set_parent_bdev",
00:06:39.805 "bdev_lvol_set_parent",
00:06:39.805 "bdev_lvol_check_shallow_copy",
00:06:39.805 "bdev_lvol_start_shallow_copy",
00:06:39.805 "bdev_lvol_grow_lvstore",
00:06:39.805 "bdev_lvol_get_lvols",
00:06:39.805 "bdev_lvol_get_lvstores",
00:06:39.805 "bdev_lvol_delete",
00:06:39.805 "bdev_lvol_set_read_only",
00:06:39.805 "bdev_lvol_resize",
00:06:39.805 "bdev_lvol_decouple_parent",
00:06:39.805 "bdev_lvol_inflate",
00:06:39.805 "bdev_lvol_rename",
00:06:39.805 "bdev_lvol_clone_bdev",
00:06:39.805 "bdev_lvol_clone",
00:06:39.805 "bdev_lvol_snapshot",
00:06:39.805 "bdev_lvol_create",
00:06:39.805 "bdev_lvol_delete_lvstore",
00:06:39.805 "bdev_lvol_rename_lvstore",
00:06:39.805 "bdev_lvol_create_lvstore",
00:06:39.805 "bdev_raid_set_options",
00:06:39.805 "bdev_raid_remove_base_bdev",
00:06:39.805 "bdev_raid_add_base_bdev",
00:06:39.805 "bdev_raid_delete",
00:06:39.805 "bdev_raid_create",
00:06:39.805 "bdev_raid_get_bdevs",
00:06:39.805 "bdev_error_inject_error",
00:06:39.805 "bdev_error_delete",
00:06:39.805 "bdev_error_create",
00:06:39.805 "bdev_split_delete",
00:06:39.805 "bdev_split_create",
00:06:39.805 "bdev_delay_delete",
00:06:39.805 "bdev_delay_create",
00:06:39.805 "bdev_delay_update_latency",
00:06:39.805 "bdev_zone_block_delete",
00:06:39.805 "bdev_zone_block_create",
00:06:39.805 "blobfs_create",
00:06:39.805 "blobfs_detect",
00:06:39.805 "blobfs_set_cache_size",
00:06:39.805 "bdev_aio_delete",
00:06:39.805 "bdev_aio_rescan",
00:06:39.806 "bdev_aio_create",
00:06:39.806 "bdev_ftl_set_property",
00:06:39.806 "bdev_ftl_get_properties",
00:06:39.806 "bdev_ftl_get_stats",
00:06:39.806 "bdev_ftl_unmap",
00:06:39.806 "bdev_ftl_unload",
00:06:39.806 "bdev_ftl_delete",
00:06:39.806 "bdev_ftl_load",
00:06:39.806 "bdev_ftl_create",
00:06:39.806 "bdev_virtio_attach_controller",
00:06:39.806 "bdev_virtio_scsi_get_devices",
00:06:39.806 "bdev_virtio_detach_controller",
00:06:39.806 "bdev_virtio_blk_set_hotplug",
00:06:39.806 "bdev_iscsi_delete",
00:06:39.806 "bdev_iscsi_create",
00:06:39.806 "bdev_iscsi_set_options",
00:06:39.806 "accel_error_inject_error",
00:06:39.806 "ioat_scan_accel_module",
00:06:39.806 "dsa_scan_accel_module",
00:06:39.806 "iaa_scan_accel_module",
00:06:39.806 "vfu_virtio_create_scsi_endpoint",
00:06:39.806 "vfu_virtio_scsi_remove_target",
00:06:39.806 "vfu_virtio_scsi_add_target",
00:06:39.806 "vfu_virtio_create_blk_endpoint",
00:06:39.806 "vfu_virtio_delete_endpoint",
00:06:39.806 "keyring_file_remove_key",
00:06:39.806 "keyring_file_add_key",
00:06:39.806 "keyring_linux_set_options",
00:06:39.806 "iscsi_get_histogram",
00:06:39.806 "iscsi_enable_histogram",
00:06:39.806 "iscsi_set_options",
00:06:39.806 "iscsi_get_auth_groups",
00:06:39.806 "iscsi_auth_group_remove_secret",
00:06:39.806 "iscsi_auth_group_add_secret",
00:06:39.806 "iscsi_delete_auth_group",
00:06:39.806 "iscsi_create_auth_group",
00:06:39.806 "iscsi_set_discovery_auth",
00:06:39.806 "iscsi_get_options",
00:06:39.806 "iscsi_target_node_request_logout",
00:06:39.806 "iscsi_target_node_set_redirect",
00:06:39.806 "iscsi_target_node_set_auth",
00:06:39.806 "iscsi_target_node_add_lun",
00:06:39.806 "iscsi_get_stats",
00:06:39.806 "iscsi_get_connections",
00:06:39.806 "iscsi_portal_group_set_auth",
00:06:39.806 "iscsi_start_portal_group",
00:06:39.806 "iscsi_delete_portal_group",
00:06:39.806 "iscsi_create_portal_group",
00:06:39.806 "iscsi_get_portal_groups",
00:06:39.806 "iscsi_delete_target_node",
00:06:39.806 "iscsi_target_node_remove_pg_ig_maps",
00:06:39.806 "iscsi_target_node_add_pg_ig_maps",
00:06:39.806 "iscsi_create_target_node",
00:06:39.806 "iscsi_get_target_nodes",
00:06:39.806 "iscsi_delete_initiator_group",
00:06:39.806 "iscsi_initiator_group_remove_initiators",
00:06:39.806 "iscsi_initiator_group_add_initiators",
00:06:39.806 "iscsi_create_initiator_group",
00:06:39.806 "iscsi_get_initiator_groups",
00:06:39.806 "nvmf_set_crdt",
00:06:39.806 "nvmf_set_config",
00:06:39.806 "nvmf_set_max_subsystems",
00:06:39.806 "nvmf_stop_mdns_prr",
00:06:39.806 "nvmf_publish_mdns_prr",
00:06:39.806 "nvmf_subsystem_get_listeners",
00:06:39.806 "nvmf_subsystem_get_qpairs",
00:06:39.806 "nvmf_subsystem_get_controllers",
00:06:39.806 "nvmf_get_stats",
00:06:39.806 "nvmf_get_transports",
00:06:39.806 "nvmf_create_transport",
00:06:39.806 "nvmf_get_targets",
00:06:39.806 "nvmf_delete_target",
00:06:39.806 "nvmf_create_target",
00:06:39.806 "nvmf_subsystem_allow_any_host",
00:06:39.806 "nvmf_subsystem_remove_host",
00:06:39.806 "nvmf_subsystem_add_host",
00:06:39.806 "nvmf_ns_remove_host",
00:06:39.806 "nvmf_ns_add_host",
00:06:39.806 "nvmf_subsystem_remove_ns",
00:06:39.806 "nvmf_subsystem_add_ns",
00:06:39.806 "nvmf_subsystem_listener_set_ana_state",
00:06:39.806 "nvmf_discovery_get_referrals",
00:06:39.806 "nvmf_discovery_remove_referral",
00:06:39.806 "nvmf_discovery_add_referral",
00:06:39.806 "nvmf_subsystem_remove_listener",
00:06:39.806 "nvmf_subsystem_add_listener",
00:06:39.806 "nvmf_delete_subsystem",
00:06:39.806 "nvmf_create_subsystem",
00:06:39.806 "nvmf_get_subsystems",
00:06:39.806 "env_dpdk_get_mem_stats",
00:06:39.806 "nbd_get_disks",
00:06:39.806 "nbd_stop_disk",
00:06:39.806 "nbd_start_disk",
00:06:39.806 "ublk_recover_disk",
00:06:39.806 "ublk_get_disks",
00:06:39.806 "ublk_stop_disk",
00:06:39.806 "ublk_start_disk",
00:06:39.806 "ublk_destroy_target",
00:06:39.806 "ublk_create_target",
00:06:39.806 "virtio_blk_create_transport",
00:06:39.806 "virtio_blk_get_transports",
00:06:39.806 "vhost_controller_set_coalescing",
00:06:39.806 "vhost_get_controllers",
00:06:39.806 "vhost_delete_controller",
00:06:39.806 "vhost_create_blk_controller",
00:06:39.806 "vhost_scsi_controller_remove_target",
00:06:39.806 "vhost_scsi_controller_add_target",
00:06:39.806 "vhost_start_scsi_controller",
00:06:39.806 "vhost_create_scsi_controller",
00:06:39.806 "thread_set_cpumask",
00:06:39.806 "framework_get_governor",
00:06:39.806 "framework_get_scheduler",
00:06:39.806 "framework_set_scheduler",
00:06:39.806 "framework_get_reactors",
00:06:39.806 "thread_get_io_channels",
00:06:39.806 "thread_get_pollers",
00:06:39.806 "thread_get_stats",
00:06:39.806 "framework_monitor_context_switch",
00:06:39.806 "spdk_kill_instance",
00:06:39.806 "log_enable_timestamps",
00:06:39.806 "log_get_flags",
00:06:39.806 "log_clear_flag",
00:06:39.806 "log_set_flag",
00:06:39.806 "log_get_level",
00:06:39.806 "log_set_level",
00:06:39.806 "log_get_print_level",
00:06:39.806 "log_set_print_level",
00:06:39.806 "framework_enable_cpumask_locks",
00:06:39.806 "framework_disable_cpumask_locks",
00:06:39.806 "framework_wait_init",
00:06:39.806 "framework_start_init",
00:06:39.806 "scsi_get_devices",
00:06:39.806 "bdev_get_histogram",
00:06:39.806 "bdev_enable_histogram",
00:06:39.806 "bdev_set_qos_limit",
00:06:39.806 "bdev_set_qd_sampling_period",
00:06:39.806 "bdev_get_bdevs",
00:06:39.806 "bdev_reset_iostat",
00:06:39.806 "bdev_get_iostat",
00:06:39.806 "bdev_examine",
00:06:39.806 "bdev_wait_for_examine",
00:06:39.806 "bdev_set_options",
00:06:39.806 "notify_get_notifications",
00:06:39.806 "notify_get_types",
00:06:39.806 "accel_get_stats",
00:06:39.806 "accel_set_options",
00:06:39.806 "accel_set_driver",
00:06:39.806 "accel_crypto_key_destroy",
00:06:39.806 "accel_crypto_keys_get",
00:06:39.806 "accel_crypto_key_create",
00:06:39.806 "accel_assign_opc",
00:06:39.806 "accel_get_module_info",
00:06:39.806 "accel_get_opc_assignments",
00:06:39.806 "vmd_rescan",
00:06:39.806 "vmd_remove_device",
00:06:39.806 "vmd_enable",
00:06:39.806 "sock_get_default_impl",
00:06:39.806 "sock_set_default_impl",
00:06:39.806 "sock_impl_set_options",
00:06:39.806 "sock_impl_get_options",
00:06:39.806 "iobuf_get_stats",
00:06:39.806 "iobuf_set_options",
00:06:39.806 "keyring_get_keys",
00:06:39.806 "framework_get_pci_devices",
00:06:39.806 "framework_get_config",
00:06:39.806 "framework_get_subsystems",
00:06:39.806 "vfu_tgt_set_base_path",
00:06:39.806 "trace_get_info",
00:06:39.806 "trace_get_tpoint_group_mask",
00:06:39.806 "trace_disable_tpoint_group",
00:06:39.806 "trace_enable_tpoint_group",
00:06:39.806 "trace_clear_tpoint_mask",
00:06:39.806 "trace_set_tpoint_mask",
00:06:39.806 "spdk_get_version",
00:06:39.806 "rpc_get_methods"
00:06:39.806 ]
00:06:39.806 15:48:08 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp
00:06:39.806 15:48:08 spdkcli_tcp -- common/autotest_common.sh@728 -- # xtrace_disable
00:06:39.806 15:48:08 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x
00:06:39.806 15:48:08 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT
00:06:39.806 15:48:08 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 3588625
00:06:39.806 15:48:08 spdkcli_tcp -- common/autotest_common.sh@948 -- # '[' -z 3588625 ']'
00:06:39.806 15:48:08 spdkcli_tcp -- common/autotest_common.sh@952 -- # kill -0 3588625
00:06:39.806 15:48:08 spdkcli_tcp -- common/autotest_common.sh@953 -- # uname
00:06:39.806 15:48:08 spdkcli_tcp -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:06:39.806 15:48:08 spdkcli_tcp -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3588625
00:06:39.806 15:48:08 spdkcli_tcp -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:06:39.806 15:48:08 spdkcli_tcp -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:06:39.806 15:48:08 spdkcli_tcp -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3588625'
00:06:39.806 killing process with pid 3588625
00:06:39.806 15:48:08 spdkcli_tcp -- common/autotest_common.sh@967 -- # kill 3588625
00:06:39.806 15:48:08 spdkcli_tcp -- common/autotest_common.sh@972 -- # wait 3588625
00:06:40.372
00:06:40.372 real 0m1.516s
00:06:40.372 user 0m2.810s
00:06:40.372 sys 0m0.443s
00:06:40.372 15:48:09 spdkcli_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable
00:06:40.372 15:48:09 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x
00:06:40.372 ************************************
00:06:40.372 END TEST spdkcli_tcp
00:06:40.372 ************************************
00:06:40.372 15:48:09 -- common/autotest_common.sh@1142 -- # return 0
00:06:40.372 15:48:09 -- spdk/autotest.sh@180 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh
00:06:40.372 15:48:09 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:06:40.372 15:48:09 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:06:40.372 15:48:09 -- common/autotest_common.sh@10 -- # set +x
00:06:40.372 ************************************
00:06:40.372 START TEST dpdk_mem_utility
00:06:40.372 ************************************
00:06:40.372 15:48:09 dpdk_mem_utility -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh
00:06:40.372 * Looking for test storage...
00:06:40.372 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility
00:06:40.372 15:48:09 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py
00:06:40.372 15:48:09 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=3589226
00:06:40.372 15:48:09 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 3589226
00:06:40.372 15:48:09 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt
00:06:40.372 15:48:09 dpdk_mem_utility -- common/autotest_common.sh@829 -- # '[' -z 3589226 ']'
00:06:40.372 15:48:09 dpdk_mem_utility -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:40.372 15:48:09 dpdk_mem_utility -- common/autotest_common.sh@834 -- # local max_retries=100
00:06:40.372 15:48:09 dpdk_mem_utility -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:06:40.372 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:06:40.372 15:48:09 dpdk_mem_utility -- common/autotest_common.sh@838 -- # xtrace_disable
00:06:40.372 15:48:09 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x
00:06:40.372 [2024-07-15 15:48:09.224335] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization...
00:06:40.372 [2024-07-15 15:48:09.224385] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3589226 ]
00:06:40.372 EAL: No free 2048 kB hugepages reported on node 1
00:06:40.372 [2024-07-15 15:48:09.277072] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:40.641 [2024-07-15 15:48:09.353733] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:06:41.250 15:48:10 dpdk_mem_utility -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:06:41.250 15:48:10 dpdk_mem_utility -- common/autotest_common.sh@862 -- # return 0
00:06:41.250 15:48:10 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT
00:06:41.250 15:48:10 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats
00:06:41.250 15:48:10 dpdk_mem_utility -- common/autotest_common.sh@559 -- # xtrace_disable
00:06:41.250 15:48:10 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x
00:06:41.250 {
00:06:41.250 "filename": "/tmp/spdk_mem_dump.txt"
00:06:41.250 }
00:06:41.250 15:48:10 dpdk_mem_utility -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:06:41.250 15:48:10 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py
00:06:41.250 DPDK memory size 814.000000 MiB in 1 heap(s)
00:06:41.250 1 heaps totaling size 814.000000 MiB
00:06:41.250 size: 814.000000 MiB heap id: 0
00:06:41.250 end heaps----------
00:06:41.250 8 mempools totaling size 598.116089 MiB
00:06:41.250 size: 212.674988 MiB name: PDU_immediate_data_Pool
00:06:41.250 size: 158.602051 MiB name: PDU_data_out_Pool
00:06:41.250 size: 84.521057 MiB name: bdev_io_3589226
00:06:41.250 size: 51.011292 MiB name: evtpool_3589226
00:06:41.250 size: 50.003479 MiB name: msgpool_3589226
00:06:41.250 size: 21.763794 MiB name: PDU_Pool
00:06:41.250 size: 19.513306 MiB name: SCSI_TASK_Pool
00:06:41.250 size: 0.026123 MiB name: Session_Pool
00:06:41.250 end mempools-------
00:06:41.250 6 memzones totaling size 4.142822 MiB
00:06:41.250 size: 1.000366 MiB name: RG_ring_0_3589226
00:06:41.250 size: 1.000366 MiB name: RG_ring_1_3589226
00:06:41.250 size: 1.000366 MiB name: RG_ring_4_3589226
00:06:41.250 size: 1.000366 MiB name: RG_ring_5_3589226
00:06:41.250 size: 0.125366 MiB name: RG_ring_2_3589226
00:06:41.250 size: 0.015991 MiB name: RG_ring_3_3589226
00:06:41.250 end memzones-------
00:06:41.250 15:48:10 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0
00:06:41.250 heap id: 0 total size: 814.000000 MiB number of busy elements: 41 number of free elements: 15
00:06:41.250 list of free elements. size: 12.519348 MiB
00:06:41.250 element at address: 0x200000400000 with size: 1.999512 MiB
00:06:41.250 element at address: 0x200018e00000 with size: 0.999878 MiB
00:06:41.250 element at address: 0x200019000000 with size: 0.999878 MiB
00:06:41.250 element at address: 0x200003e00000 with size: 0.996277 MiB
00:06:41.250 element at address: 0x200031c00000 with size: 0.994446 MiB
00:06:41.250 element at address: 0x200013800000 with size: 0.978699 MiB
00:06:41.250 element at address: 0x200007000000 with size: 0.959839 MiB
00:06:41.250 element at address: 0x200019200000 with size: 0.936584 MiB
00:06:41.250 element at address: 0x200000200000 with size: 0.841614 MiB
00:06:41.250 element at address: 0x20001aa00000 with size: 0.582886 MiB
00:06:41.250 element at address: 0x20000b200000 with size: 0.490723 MiB
00:06:41.250 element at address: 0x200000800000 with size: 0.487793 MiB
00:06:41.250 element at address: 0x200019400000 with size: 0.485657 MiB
00:06:41.250 element at address: 0x200027e00000 with size: 0.410034 MiB
00:06:41.250 element at address: 0x200003a00000 with size: 0.355530 MiB
00:06:41.250 list of standard malloc elements. size: 199.218079 MiB
00:06:41.250 element at address: 0x20000b3fff80 with size: 132.000122 MiB
00:06:41.250 element at address: 0x2000071fff80 with size: 64.000122 MiB
00:06:41.250 element at address: 0x200018efff80 with size: 1.000122 MiB
00:06:41.250 element at address: 0x2000190fff80 with size: 1.000122 MiB
00:06:41.250 element at address: 0x2000192fff80 with size: 1.000122 MiB
00:06:41.250 element at address: 0x2000003d9f00 with size: 0.140747 MiB
00:06:41.250 element at address: 0x2000192eff00 with size: 0.062622 MiB
00:06:41.250 element at address: 0x2000003fdf80 with size: 0.007935 MiB
00:06:41.250 element at address: 0x2000192efdc0 with size: 0.000305 MiB
00:06:41.250 element at address: 0x2000002d7740 with size: 0.000183 MiB
00:06:41.250 element at address: 0x2000002d7800 with size: 0.000183 MiB
00:06:41.250 element at address: 0x2000002d78c0 with size: 0.000183 MiB
00:06:41.250 element at address: 0x2000002d7ac0 with size: 0.000183 MiB
00:06:41.250 element at address: 0x2000002d7b80 with size: 0.000183 MiB
00:06:41.250 element at address: 0x2000002d7c40 with size: 0.000183 MiB
00:06:41.250 element at address: 0x2000003d9e40 with size: 0.000183 MiB
00:06:41.250 element at address: 0x20000087ce00 with size: 0.000183 MiB
00:06:41.250 element at address: 0x20000087cec0 with size: 0.000183 MiB
00:06:41.250 element at address: 0x2000008fd180 with size: 0.000183 MiB
00:06:41.250 element at address: 0x200003a5b040 with size: 0.000183 MiB
00:06:41.250 element at address: 0x200003adb300 with size: 0.000183 MiB
00:06:41.250 element at address: 0x200003adb500 with size: 0.000183 MiB
00:06:41.250 element at address: 0x200003adf7c0 with size: 0.000183 MiB
00:06:41.250 element at address: 0x200003affa80 with size: 0.000183 MiB
00:06:41.250 element at address: 0x200003affb40 with size: 0.000183 MiB
00:06:41.250 element at address: 0x200003eff0c0 with size: 0.000183 MiB
00:06:41.250 element at address: 0x2000070fdd80 with size: 0.000183 MiB
00:06:41.250 element at address: 0x20000b27da00 with size: 0.000183 MiB
00:06:41.250 element at address: 0x20000b27dac0 with size: 0.000183 MiB
00:06:41.250 element at address: 0x20000b2fdd80 with size: 0.000183 MiB
00:06:41.250 element at address: 0x2000138fa8c0 with size: 0.000183 MiB
00:06:41.250 element at address: 0x2000192efc40 with size: 0.000183 MiB
00:06:41.250 element at address: 0x2000192efd00 with size: 0.000183 MiB
00:06:41.250 element at address: 0x2000194bc740 with size: 0.000183 MiB
00:06:41.250 element at address: 0x20001aa95380 with size: 0.000183 MiB
00:06:41.250 element at address: 0x20001aa95440 with size: 0.000183 MiB
00:06:41.250 element at address: 0x200027e68f80 with size: 0.000183 MiB
00:06:41.250 element at address: 0x200027e69040 with size: 0.000183 MiB
00:06:41.250 element at address: 0x200027e6fc40 with size: 0.000183 MiB
00:06:41.250 element at address: 0x200027e6fe40 with size: 0.000183 MiB
00:06:41.250 element at address: 0x200027e6ff00 with size: 0.000183 MiB
00:06:41.250 list of memzone associated elements. size: 602.262573 MiB
00:06:41.250 element at address: 0x20001aa95500 with size: 211.416748 MiB
00:06:41.250 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0
00:06:41.250 element at address: 0x200027e6ffc0 with size: 157.562561 MiB
00:06:41.250 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0
00:06:41.250 element at address: 0x2000139fab80 with size: 84.020630 MiB
00:06:41.250 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_3589226_0
00:06:41.250 element at address: 0x2000009ff380 with size: 48.003052 MiB
00:06:41.250 associated memzone info: size: 48.002930 MiB name: MP_evtpool_3589226_0
00:06:41.250 element at address: 0x200003fff380 with size: 48.003052 MiB
00:06:41.250 associated memzone info: size: 48.002930 MiB name: MP_msgpool_3589226_0
00:06:41.250 element at address: 0x2000195be940 with size: 20.255554 MiB
00:06:41.250 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0
00:06:41.250 element at address: 0x200031dfeb40 with size: 18.005066 MiB
00:06:41.250 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0
00:06:41.250 element at address: 0x2000005ffe00 with size: 2.000488 MiB
00:06:41.250 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_3589226
00:06:41.250 element at address: 0x200003bffe00 with size: 2.000488 MiB
00:06:41.250 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_3589226
00:06:41.250 element at address: 0x2000002d7d00 with size: 1.008118 MiB
00:06:41.250 associated memzone info: size: 1.007996 MiB name: MP_evtpool_3589226
00:06:41.250 element at address: 0x20000b2fde40 with size: 1.008118 MiB
00:06:41.250 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool
00:06:41.250 element at address: 0x2000194bc800 with size: 1.008118 MiB
00:06:41.250 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool
00:06:41.250 element at address: 0x2000070fde40 with size: 1.008118 MiB
00:06:41.250 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool
00:06:41.250 element at address: 0x2000008fd240 with size: 1.008118 MiB
00:06:41.250 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool
00:06:41.250 element at address: 0x200003eff180 with size: 1.000488 MiB
00:06:41.250 associated memzone info: size: 1.000366 MiB name: RG_ring_0_3589226
00:06:41.250 element at address: 0x200003affc00 with size: 1.000488 MiB
00:06:41.250 associated memzone info: size: 1.000366 MiB name: RG_ring_1_3589226
00:06:41.250 element at address: 0x2000138fa980 with size: 1.000488 MiB
00:06:41.250 associated memzone info: size: 1.000366 MiB name: RG_ring_4_3589226
00:06:41.250 element at address: 0x200031cfe940 with size: 1.000488 MiB
00:06:41.250 associated memzone info: size: 1.000366 MiB name: RG_ring_5_3589226
00:06:41.250 element at address: 0x200003a5b100 with size: 0.500488 MiB
00:06:41.250 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_3589226
00:06:41.250 element at address: 0x20000b27db80 with size: 0.500488 MiB
00:06:41.250 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool
00:06:41.250 element at address: 0x20000087cf80 with size: 0.500488 MiB
00:06:41.250 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool
00:06:41.250 element at address: 0x20001947c540 with size: 0.250488 MiB
00:06:41.250 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool
00:06:41.250 element at address: 0x200003adf880 with size: 0.125488 MiB
00:06:41.250 associated memzone info: size: 0.125366 MiB name: RG_ring_2_3589226
00:06:41.250 element at address: 0x2000070f5b80 with size: 0.031738 MiB
00:06:41.250 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool
00:06:41.250 element at address: 0x200027e69100 with size: 0.023743 MiB
00:06:41.250 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0
00:06:41.250 element at address: 0x200003adb5c0 with size: 0.016113 MiB
00:06:41.250 associated memzone info: size: 0.015991 MiB name: RG_ring_3_3589226
00:06:41.250 element at address: 0x200027e6f240 with size: 0.002441 MiB
00:06:41.250 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool
00:06:41.250 element at address: 0x2000002d7980 with size: 0.000305 MiB
00:06:41.250 associated memzone info: size: 0.000183 MiB name: MP_msgpool_3589226
00:06:41.250 element at address: 0x200003adb3c0 with size: 0.000305 MiB
00:06:41.250 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_3589226
00:06:41.250 element at address: 0x200027e6fd00 with size: 0.000305 MiB
00:06:41.250 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool
00:06:41.251 15:48:10 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT
00:06:41.251 15:48:10 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 3589226
00:06:41.251 15:48:10 dpdk_mem_utility -- common/autotest_common.sh@948 -- # '[' -z 3589226 ']'
00:06:41.251 15:48:10 dpdk_mem_utility -- common/autotest_common.sh@952 -- # kill -0 3589226
00:06:41.251 15:48:10 dpdk_mem_utility -- common/autotest_common.sh@953 -- # uname
00:06:41.251 15:48:10 dpdk_mem_utility -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:06:41.251 15:48:10 dpdk_mem_utility -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3589226
00:06:41.251 15:48:10 dpdk_mem_utility -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:06:41.251 15:48:10 dpdk_mem_utility -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:06:41.251 15:48:10 dpdk_mem_utility -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3589226'
00:06:41.251 killing process with pid 3589226
00:06:41.251 15:48:10 dpdk_mem_utility -- common/autotest_common.sh@967 -- # kill 3589226
00:06:41.251 15:48:10 dpdk_mem_utility -- common/autotest_common.sh@972 -- # wait 3589226
00:06:41.818
00:06:41.818 real 0m1.401s
00:06:41.818 user 0m1.482s
00:06:41.818 sys 0m0.396s
00:06:41.818 15:48:10 dpdk_mem_utility -- common/autotest_common.sh@1124 -- # xtrace_disable
00:06:41.818 15:48:10 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x
00:06:41.818 ************************************
00:06:41.818 END TEST dpdk_mem_utility
00:06:41.818 ************************************
00:06:41.818 15:48:10 -- common/autotest_common.sh@1142 -- # return 0
00:06:41.818 15:48:10 -- spdk/autotest.sh@181 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh
00:06:41.818 15:48:10 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:06:41.818 15:48:10 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:06:41.818 15:48:10 -- common/autotest_common.sh@10 -- # set +x
00:06:41.818 ************************************
00:06:41.818 START TEST event
00:06:41.818 ************************************
00:06:41.818 15:48:10 event -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh
00:06:41.818 * Looking for test storage...
00:06:41.818 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event
00:06:41.818 15:48:10 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh
00:06:41.818 15:48:10 event -- bdev/nbd_common.sh@6 -- # set -e
00:06:41.818 15:48:10 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1
00:06:41.818 15:48:10 event -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']'
00:06:41.818 15:48:10 event -- common/autotest_common.sh@1105 -- # xtrace_disable
00:06:41.818 15:48:10 event -- common/autotest_common.sh@10 -- # set +x
00:06:41.819 ************************************
00:06:41.819 START TEST event_perf
00:06:41.819 ************************************
00:06:41.819 15:48:10 event.event_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1
00:06:41.819 Running I/O for 1 seconds...[2024-07-15 15:48:10.672136] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization...
00:06:41.819 [2024-07-15 15:48:10.672203] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3589611 ]
00:06:41.819 EAL: No free 2048 kB hugepages reported on node 1
00:06:42.074 [2024-07-15 15:48:10.730630] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4
00:06:42.074 [2024-07-15 15:48:10.808545] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:06:42.074 [2024-07-15 15:48:10.808643] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2
00:06:42.074 [2024-07-15 15:48:10.808716] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3
00:06:42.074 [2024-07-15 15:48:10.808718] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:06:43.006 Running I/O for 1 seconds...
00:06:43.006 lcore 0: 210188
00:06:43.006 lcore 1: 210187
00:06:43.006 lcore 2: 210187
00:06:43.006 lcore 3: 210187
00:06:43.006 done.
00:06:43.006
00:06:43.006 real 0m1.227s
00:06:43.006 user 0m4.148s
00:06:43.006 sys 0m0.076s
00:06:43.006 15:48:11 event.event_perf -- common/autotest_common.sh@1124 -- # xtrace_disable
00:06:43.006 15:48:11 event.event_perf -- common/autotest_common.sh@10 -- # set +x
00:06:43.006 ************************************
00:06:43.006 END TEST event_perf
00:06:43.006 ************************************
00:06:43.006 15:48:11 event -- common/autotest_common.sh@1142 -- # return 0
00:06:43.006 15:48:11 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1
00:06:43.006 15:48:11 event -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']'
00:06:43.006 15:48:11 event -- common/autotest_common.sh@1105 -- # xtrace_disable
00:06:43.006 15:48:11 event -- common/autotest_common.sh@10 -- # set +x
00:06:43.006 ************************************
00:06:43.006 START TEST event_reactor
00:06:43.006 ************************************
00:06:43.006 15:48:11 event.event_reactor -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1
00:06:43.264 [2024-07-15 15:48:11.955700] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization...
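event_perf above is a standalone binary, not an RPC-driven app: it spins a reactor on every core in the mask for the given duration and prints how many events each lcore processed. Rerunning it by hand with this run's arguments would look like the sketch below (path and flags taken from the trace):

  # -m 0xF: run a reactor on cores 0-3, -t 1: submit events for one second
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1
  # output: one 'lcore N: <event count>' line per reactor, then 'done.'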
00:06:43.264 [2024-07-15 15:48:11.955767] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3589866 ]
00:06:43.264 EAL: No free 2048 kB hugepages reported on node 1
00:06:43.264 [2024-07-15 15:48:12.011446] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:43.264 [2024-07-15 15:48:12.082568] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:06:44.637 test_start
00:06:44.637 oneshot
00:06:44.637 tick 100
00:06:44.637 tick 100
00:06:44.637 tick 250
00:06:44.637 tick 100
00:06:44.637 tick 100
00:06:44.637 tick 100
00:06:44.637 tick 250
00:06:44.637 tick 500
00:06:44.637 tick 100
00:06:44.637 tick 100
00:06:44.637 tick 250
00:06:44.637 tick 100
00:06:44.637 tick 100
00:06:44.637 test_end
00:06:44.637
00:06:44.637 real 0m1.213s
00:06:44.637 user 0m1.143s
00:06:44.637 sys 0m0.066s
00:06:44.637 15:48:13 event.event_reactor -- common/autotest_common.sh@1124 -- # xtrace_disable
00:06:44.637 15:48:13 event.event_reactor -- common/autotest_common.sh@10 -- # set +x
00:06:44.637 ************************************
00:06:44.637 END TEST event_reactor
00:06:44.637 ************************************
00:06:44.638 15:48:13 event -- common/autotest_common.sh@1142 -- # return 0
00:06:44.638 15:48:13 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1
00:06:44.638 15:48:13 event -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']'
00:06:44.638 15:48:13 event -- common/autotest_common.sh@1105 -- # xtrace_disable
00:06:44.638 15:48:13 event -- common/autotest_common.sh@10 -- # set +x
00:06:44.638 ************************************
00:06:44.638 START TEST event_reactor_perf
00:06:44.638 ************************************
00:06:44.638 15:48:13 event.event_reactor_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1
00:06:44.638 [2024-07-15 15:48:13.235743] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization...
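The event_reactor trace above exercises a one-shot event plus repeating timers on a single reactor; the 'tick 100/250/500' lines appear to correspond to differently-timed pollers the test registers, each logging when it fires. Invoked standalone it is simply:

  # -t 1: run the reactor test for one second; prints test_start, the ticks, then test_end
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1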
00:06:44.638 [2024-07-15 15:48:13.235809] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3590114 ]
00:06:44.638 EAL: No free 2048 kB hugepages reported on node 1
00:06:44.638 [2024-07-15 15:48:13.293573] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:44.638 [2024-07-15 15:48:13.362551] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:06:45.568 test_start
00:06:45.568 test_end
00:06:45.568 Performance: 510277 events per second
00:06:45.568
00:06:45.568 real 0m1.215s
00:06:45.568 user 0m1.140s
00:06:45.568 sys 0m0.071s
00:06:45.568 15:48:14 event.event_reactor_perf -- common/autotest_common.sh@1124 -- # xtrace_disable
00:06:45.568 15:48:14 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x
00:06:45.568 ************************************
00:06:45.568 END TEST event_reactor_perf
00:06:45.568 ************************************
00:06:45.568 15:48:14 event -- common/autotest_common.sh@1142 -- # return 0
00:06:45.568 15:48:14 event -- event/event.sh@49 -- # uname -s
00:06:45.568 15:48:14 event -- event/event.sh@49 -- # '[' Linux = Linux ']'
00:06:45.568 15:48:14 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh
00:06:45.568 15:48:14 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:06:45.568 15:48:14 event -- common/autotest_common.sh@1105 -- # xtrace_disable
00:06:45.568 15:48:14 event -- common/autotest_common.sh@10 -- # set +x
00:06:45.827 ************************************
00:06:45.827 START TEST event_scheduler
00:06:45.827 ************************************
00:06:45.827 15:48:14 event.event_scheduler -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh
00:06:45.827 * Looking for test storage...
00:06:45.827 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler
00:06:45.827 15:48:14 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd
00:06:45.827 15:48:14 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=3590393
00:06:45.827 15:48:14 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT
00:06:45.827 15:48:14 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 3590393
00:06:45.827 15:48:14 event.event_scheduler -- common/autotest_common.sh@829 -- # '[' -z 3590393 ']'
00:06:45.827 15:48:14 event.event_scheduler -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:45.827 15:48:14 event.event_scheduler -- common/autotest_common.sh@834 -- # local max_retries=100
00:06:45.827 15:48:14 event.event_scheduler -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:06:45.827 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
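The scheduler app about to start below is launched with --wait-for-rpc, so the framework pauses before subsystem init and the script can select the dynamic scheduler first. A sketch of that startup sequence with the flags from this run (rpc_cmd in scheduler.sh resolves to rpc.py against /var/tmp/spdk.sock; the EAL trace below shows -p 0x2 arriving as --main-lcore=2):

  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler \
      -m 0xF -p 0x2 --wait-for-rpc -f &
  # pick the scheduler while the app is paused, then let init proceed
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_set_scheduler dynamic
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init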
00:06:45.827 15:48:14 event.event_scheduler -- common/autotest_common.sh@838 -- # xtrace_disable
00:06:45.827 15:48:14 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:06:45.827 15:48:14 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f
00:06:45.827 [2024-07-15 15:48:14.635049] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization...
00:06:45.827 [2024-07-15 15:48:14.635103] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3590393 ]
00:06:45.827 EAL: No free 2048 kB hugepages reported on node 1
00:06:45.827 [2024-07-15 15:48:14.689687] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4
00:06:46.085 [2024-07-15 15:48:14.772544] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:06:46.085 [2024-07-15 15:48:14.772628] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:06:46.085 [2024-07-15 15:48:14.772644] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3
00:06:46.085 [2024-07-15 15:48:14.772646] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2
00:06:46.651 15:48:15 event.event_scheduler -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:06:46.651 15:48:15 event.event_scheduler -- common/autotest_common.sh@862 -- # return 0
00:06:46.651 15:48:15 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic
00:06:46.651 15:48:15 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable
00:06:46.651 15:48:15 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:06:46.651 [2024-07-15 15:48:15.463078] dpdk_governor.c: 173:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings
00:06:46.651 [2024-07-15 15:48:15.463095] scheduler_dynamic.c: 270:init: *NOTICE*: Unable to initialize dpdk governor
00:06:46.651 [2024-07-15 15:48:15.463105] scheduler_dynamic.c: 416:set_opts: *NOTICE*: Setting scheduler load limit to 20
00:06:46.651 [2024-07-15 15:48:15.463111] scheduler_dynamic.c: 418:set_opts: *NOTICE*: Setting scheduler core limit to 80
00:06:46.651 [2024-07-15 15:48:15.463116] scheduler_dynamic.c: 420:set_opts: *NOTICE*: Setting scheduler core busy to 95
00:06:46.651 15:48:15 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:06:46.651 15:48:15 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init
00:06:46.651 15:48:15 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable
00:06:46.651 15:48:15 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:06:46.651 [2024-07-15 15:48:15.535424] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started.
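scheduler_create_thread, which runs next, drives everything through the test's rpc.py plugin: scheduler_thread_create spawns a thread with a cpumask and a simulated busy percentage, scheduler_thread_set_active changes that percentage, and scheduler_thread_delete removes it. A sketch of the calls as the trace below issues them (the --plugin only resolves because scheduler.sh makes the plugin importable for rpc.py; ids 11 and 12 are the values from this run):

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  # thread pinned to core 0 reporting ~100% load
  $rpc --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100
  # idle thread on the same core
  $rpc --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0
  # drop thread 11 to 50% load, then delete thread 12 (ids as returned by create)
  $rpc --plugin scheduler_plugin scheduler_thread_set_active 11 50
  $rpc --plugin scheduler_plugin scheduler_thread_delete 12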
00:06:46.651 15:48:15 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:06:46.651 15:48:15 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread
00:06:46.651 15:48:15 event.event_scheduler -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:06:46.651 15:48:15 event.event_scheduler -- common/autotest_common.sh@1105 -- # xtrace_disable
00:06:46.651 15:48:15 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:06:46.651 ************************************
00:06:46.651 START TEST scheduler_create_thread
00:06:46.651 ************************************
00:06:46.651 15:48:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1123 -- # scheduler_create_thread
00:06:46.651 15:48:15 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100
00:06:46.651 15:48:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable
00:06:46.652 15:48:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:06:46.910 2
00:06:46.910 15:48:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:06:46.910 15:48:15 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100
00:06:46.910 15:48:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable
00:06:46.910 15:48:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:06:46.910 3
00:06:46.910 15:48:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:06:46.910 15:48:15 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100
00:06:46.910 15:48:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable
00:06:46.910 15:48:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:06:46.910 4
00:06:46.910 15:48:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:06:46.910 15:48:15 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100
00:06:46.910 15:48:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable
00:06:46.910 15:48:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:06:46.910 5
00:06:46.910 15:48:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:06:46.910 15:48:15 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0
00:06:46.910 15:48:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable
00:06:46.910 15:48:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:06:46.910 6
00:06:46.910 15:48:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:06:46.910 15:48:15 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0
00:06:46.910 15:48:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable
00:06:46.910 15:48:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:06:46.910 7
00:06:46.910 15:48:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:06:46.910 15:48:15 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0
00:06:46.910 15:48:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable
00:06:46.910 15:48:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:06:46.910 8
00:06:46.910 15:48:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:06:46.910 15:48:15 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0
00:06:46.910 15:48:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable
00:06:46.910 15:48:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:06:46.910 9
00:06:46.910 15:48:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:06:46.910 15:48:15 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30
00:06:46.910 15:48:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable
00:06:46.910 15:48:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:06:46.910 10
00:06:46.910 15:48:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:06:46.910 15:48:15 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0
00:06:46.910 15:48:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable
00:06:46.910 15:48:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:06:46.910 15:48:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:06:46.910 15:48:15 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11
00:06:46.910 15:48:15 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50
00:06:46.910 15:48:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable
00:06:46.910 15:48:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:06:47.522 15:48:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:06:47.522 15:48:16 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100
00:06:47.522 15:48:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable
00:06:47.522 15:48:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:06:48.896 15:48:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:06:48.896 15:48:17 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12
00:06:48.896 15:48:17 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12
00:06:48.896 15:48:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable
00:06:48.896 15:48:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:06:49.830 15:48:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:06:49.830
00:06:49.830 real 0m3.103s
00:06:49.830 user 0m0.024s
00:06:49.830 sys 0m0.004s
00:06:49.830 15:48:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1124 -- # xtrace_disable
00:06:49.830 15:48:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:06:49.830 ************************************
00:06:49.830 END TEST scheduler_create_thread
00:06:49.830 ************************************
00:06:49.830 15:48:18 event.event_scheduler -- common/autotest_common.sh@1142 -- # return 0
00:06:49.830 15:48:18 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT
00:06:49.830 15:48:18 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 3590393
00:06:49.830 15:48:18 event.event_scheduler -- common/autotest_common.sh@948 -- # '[' -z 3590393 ']'
00:06:49.830 15:48:18 event.event_scheduler -- common/autotest_common.sh@952 -- # kill -0 3590393
00:06:49.830 15:48:18 event.event_scheduler -- common/autotest_common.sh@953 -- # uname
00:06:49.830 15:48:18 event.event_scheduler -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:06:49.830 15:48:18 event.event_scheduler -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3590393
00:06:49.830 15:48:18 event.event_scheduler -- common/autotest_common.sh@954 -- # process_name=reactor_2
00:06:49.830 15:48:18 event.event_scheduler -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']'
00:06:49.830 15:48:18 event.event_scheduler -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3590393'
00:06:49.830 killing process with pid 3590393
00:06:49.830 15:48:18 event.event_scheduler -- common/autotest_common.sh@967 -- # kill 3590393
00:06:49.830 15:48:18 event.event_scheduler -- common/autotest_common.sh@972 -- # wait 3590393
00:06:50.396 [2024-07-15 15:48:19.054609] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped.
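Teardown throughout this log is the killprocess helper from autotest_common.sh: SIGINT first, then a bounded liveness poll, then wait to reap the exit status. A condensed sketch of the loop visible in the earlier json_config shutdown trace (the pid is this run's value, for illustration only):

  pid=3590393
  kill -SIGINT "$pid"
  # allow up to 30 x 0.5s for a clean shutdown
  for ((i = 0; i < 30; i++)); do
      kill -0 "$pid" 2>/dev/null || break
      sleep 0.5
  done
  wait "$pid"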
00:06:50.396
00:06:50.396 real 0m4.763s
00:06:50.396 user 0m9.308s
00:06:50.396 sys 0m0.367s
00:06:50.396 15:48:19 event.event_scheduler -- common/autotest_common.sh@1124 -- # xtrace_disable
00:06:50.396 15:48:19 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:06:50.396 ************************************
00:06:50.396 END TEST event_scheduler
00:06:50.396 ************************************
00:06:50.396 15:48:19 event -- common/autotest_common.sh@1142 -- # return 0
00:06:50.396 15:48:19 event -- event/event.sh@51 -- # modprobe -n nbd
00:06:50.396 15:48:19 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test
00:06:50.396 15:48:19 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:06:50.396 15:48:19 event -- common/autotest_common.sh@1105 -- # xtrace_disable
00:06:50.396 15:48:19 event -- common/autotest_common.sh@10 -- # set +x
00:06:50.654 ************************************
00:06:50.654 START TEST app_repeat
00:06:50.654 ************************************
00:06:50.654 15:48:19 event.app_repeat -- common/autotest_common.sh@1123 -- # app_repeat_test
00:06:50.654 15:48:19 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:06:50.654 15:48:19 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:06:50.654 15:48:19 event.app_repeat -- event/event.sh@13 -- # local nbd_list
00:06:50.654 15:48:19 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1')
00:06:50.654 15:48:19 event.app_repeat -- event/event.sh@14 -- # local bdev_list
00:06:50.654 15:48:19 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4
00:06:50.654 15:48:19 event.app_repeat -- event/event.sh@17 -- # modprobe nbd
00:06:50.654 15:48:19 event.app_repeat -- event/event.sh@19 -- # repeat_pid=3591155
00:06:50.654 15:48:19 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT
00:06:50.654 15:48:19 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4
00:06:50.654 15:48:19 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 3591155'
Process app_repeat pid: 3591155
15:48:19 event.app_repeat -- event/event.sh@23 -- # for i in {0..2}
00:06:50.654 15:48:19 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0'
spdk_app_start Round 0
15:48:19 event.app_repeat -- event/event.sh@25 -- # waitforlisten 3591155 /var/tmp/spdk-nbd.sock
00:06:50.654 15:48:19 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 3591155 ']'
00:06:50.654 15:48:19 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock
00:06:50.654 15:48:19 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100
00:06:50.654 15:48:19 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...
15:48:19 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable
00:06:50.654 15:48:19 event.app_repeat -- common/autotest_common.sh@10 -- # set +x
00:06:50.654 [2024-07-15 15:48:19.372355] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization...
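The waitforlisten helper traced above polls until the launched app both stays alive and exposes its RPC socket. A simplified sketch of that pattern, assuming a plain socket-existence test stands in for the fuller RPC probe that the real helper in common/autotest_common.sh performs (max_retries=100 comes from the trace):

    waitforlisten_sketch() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} max_retries=100
        echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
        for ((i = 1; i <= max_retries; i++)); do
            kill -0 "$pid" 2>/dev/null || return 1   # target died while we waited
            [[ -S $rpc_addr ]] && return 0           # socket is up; real helper issues an RPC here
            sleep 0.1
        done
        return 1
    }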
00:06:50.654 [2024-07-15 15:48:19.372405] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3591155 ]
00:06:50.654 EAL: No free 2048 kB hugepages reported on node 1
00:06:50.654 [2024-07-15 15:48:19.430438] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2
00:06:50.654 [2024-07-15 15:48:19.508776] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:06:50.654 [2024-07-15 15:48:19.508778] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:06:51.587 15:48:20 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:06:51.587 15:48:20 event.app_repeat -- common/autotest_common.sh@862 -- # return 0
00:06:51.587 15:48:20 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:06:51.587 Malloc0
00:06:51.587 15:48:20 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:06:51.845 Malloc1
00:06:51.845 15:48:20 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:06:51.845 15:48:20 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:06:51.845 15:48:20 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1')
00:06:51.845 15:48:20 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list
00:06:51.845 15:48:20 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:06:51.845 15:48:20 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list
00:06:51.845 15:48:20 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:06:51.845 15:48:20 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:06:51.845 15:48:20 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1')
00:06:51.845 15:48:20 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list
00:06:51.845 15:48:20 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:06:51.845 15:48:20 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list
00:06:51.845 15:48:20 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i
00:06:51.845 15:48:20 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:06:51.845 15:48:20 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:06:51.845 15:48:20 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0
00:06:51.846 /dev/nbd0
00:06:51.846 15:48:20 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:06:51.846 15:48:20 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:06:51.846 15:48:20 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0
00:06:51.846 15:48:20 event.app_repeat -- common/autotest_common.sh@867 -- # local i
00:06:51.846 15:48:20 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 ))
00:06:51.846 15:48:20 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 ))
00:06:51.846 15:48:20 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions
00:06:51.846 15:48:20 event.app_repeat -- common/autotest_common.sh@871 -- # break
00:06:51.846 15:48:20 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 ))
00:06:51.846 15:48:20 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 ))
00:06:51.846 15:48:20 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:06:51.846 1+0 records in
00:06:51.846 1+0 records out
00:06:51.846 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000185607 s, 22.1 MB/s
00:06:51.846 15:48:20 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest
00:06:51.846 15:48:20 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096
00:06:51.846 15:48:20 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest
00:06:51.846 15:48:20 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']'
00:06:51.846 15:48:20 event.app_repeat -- common/autotest_common.sh@887 -- # return 0
00:06:51.846 15:48:20 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:06:51.846 15:48:20 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:06:51.846 15:48:20 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1
00:06:52.104 /dev/nbd1
00:06:52.104 15:48:20 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1
00:06:52.104 15:48:20 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1
00:06:52.104 15:48:20 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1
00:06:52.104 15:48:20 event.app_repeat -- common/autotest_common.sh@867 -- # local i
00:06:52.104 15:48:20 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 ))
00:06:52.104 15:48:20 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 ))
00:06:52.104 15:48:20 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions
00:06:52.104 15:48:20 event.app_repeat -- common/autotest_common.sh@871 -- # break
00:06:52.104 15:48:20 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 ))
00:06:52.104 15:48:20 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 ))
00:06:52.104 15:48:20 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:06:52.104 1+0 records in
00:06:52.104 1+0 records out
00:06:52.104 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000216502 s, 18.9 MB/s
00:06:52.104 15:48:20 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest
00:06:52.104 15:48:20 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096
00:06:52.104 15:48:20 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest
00:06:52.104 15:48:20 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']'
00:06:52.104 15:48:20 event.app_repeat -- common/autotest_common.sh@887 -- # return 0
00:06:52.104 15:48:20 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:06:52.104 15:48:20 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:06:52.104 15:48:20 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:06:52.104 15:48:20 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:06:52.104 15:48:20 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:06:52.362 15:48:21 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[
00:06:52.362 {
00:06:52.362 "nbd_device": "/dev/nbd0",
00:06:52.362 "bdev_name": "Malloc0"
00:06:52.362 },
00:06:52.362 {
00:06:52.362 "nbd_device": "/dev/nbd1",
00:06:52.362 "bdev_name": "Malloc1"
00:06:52.362 }
00:06:52.362 ]'
00:06:52.362 15:48:21 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[
00:06:52.362 {
00:06:52.362 "nbd_device": "/dev/nbd0",
00:06:52.362 "bdev_name": "Malloc0"
00:06:52.362 },
00:06:52.362 {
00:06:52.362 "nbd_device": "/dev/nbd1",
00:06:52.362 "bdev_name": "Malloc1"
00:06:52.362 }
00:06:52.362 ]'
00:06:52.362 15:48:21 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:06:52.362 15:48:21 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0
00:06:52.362 /dev/nbd1'
00:06:52.362 15:48:21 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0
00:06:52.362 /dev/nbd1'
00:06:52.362 15:48:21 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:06:52.362 15:48:21 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2
00:06:52.362 15:48:21 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2
00:06:52.362 15:48:21 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2
00:06:52.362 15:48:21 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']'
00:06:52.362 15:48:21 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write
00:06:52.362 15:48:21 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:06:52.362 15:48:21 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list
00:06:52.362 15:48:21 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write
00:06:52.362 15:48:21 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest
00:06:52.362 15:48:21 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']'
00:06:52.362 15:48:21 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256
00:06:52.362 256+0 records in
00:06:52.362 256+0 records out
00:06:52.362 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0103318 s, 101 MB/s
00:06:52.362 15:48:21 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:06:52.362 15:48:21 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct
00:06:52.363 256+0 records in
00:06:52.363 256+0 records out
00:06:52.363 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0146065 s, 71.8 MB/s
00:06:52.363 15:48:21 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:06:52.363 15:48:21 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct
00:06:52.363 256+0 records in
00:06:52.363 256+0 records out
00:06:52.363 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0148156 s, 70.8 MB/s
00:06:52.363 15:48:21 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify
00:06:52.363 15:48:21 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:06:52.363 15:48:21 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list
00:06:52.363 15:48:21 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify
00:06:52.363 15:48:21 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest
00:06:52.363 15:48:21 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']'
00:06:52.363 15:48:21 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']'
00:06:52.363 15:48:21 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:06:52.363 15:48:21 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0
00:06:52.363 15:48:21 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:06:52.363 15:48:21 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1
00:06:52.363 15:48:21 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest
00:06:52.363 15:48:21 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1'
00:06:52.363 15:48:21 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:06:52.363 15:48:21 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:06:52.363 15:48:21 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list
00:06:52.363 15:48:21 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i
00:06:52.363 15:48:21 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:06:52.363 15:48:21 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
00:06:52.621 15:48:21 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:06:52.621 15:48:21 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:06:52.621 15:48:21 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:06:52.621 15:48:21 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:06:52.621 15:48:21 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:06:52.621 15:48:21 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:06:52.621 15:48:21 event.app_repeat -- bdev/nbd_common.sh@41 -- # break
00:06:52.621 15:48:21 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0
00:06:52.621 15:48:21 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:06:52.621 15:48:21 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1
00:06:52.879 15:48:21 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1
00:06:52.879 15:48:21 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1
00:06:52.879 15:48:21 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1
00:06:52.879 15:48:21 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:06:52.879 15:48:21 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:06:52.879 15:48:21 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions
00:06:52.879 15:48:21 event.app_repeat -- bdev/nbd_common.sh@41 -- # break
00:06:52.879 15:48:21 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0
00:06:52.879 15:48:21 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:06:52.879 15:48:21 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:06:52.879 15:48:21 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:06:52.879 15:48:21 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]'
00:06:52.879 15:48:21 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]'
00:06:52.879 15:48:21 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:06:53.137 15:48:21 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name=
00:06:53.137 15:48:21 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo ''
00:06:53.137 15:48:21 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:06:53.137 15:48:21 event.app_repeat -- bdev/nbd_common.sh@65 -- # true
00:06:53.137 15:48:21 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0
00:06:53.137 15:48:21 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0
00:06:53.137 15:48:21 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0
00:06:53.137 15:48:21 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']'
00:06:53.137 15:48:21 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0
00:06:53.137 15:48:21 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM
00:06:53.137 15:48:22 event.app_repeat -- event/event.sh@35 -- # sleep 3
00:06:53.395 [2024-07-15 15:48:22.234079] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2
00:06:53.395 [2024-07-15 15:48:22.301797] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:06:53.395 [2024-07-15 15:48:22.301798] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:06:53.654 [2024-07-15 15:48:22.343044] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered.
00:06:53.654 [2024-07-15 15:48:22.343082] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered.
00:06:56.178 15:48:25 event.app_repeat -- event/event.sh@23 -- # for i in {0..2}
00:06:56.178 15:48:25 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1'
spdk_app_start Round 1
15:48:25 event.app_repeat -- event/event.sh@25 -- # waitforlisten 3591155 /var/tmp/spdk-nbd.sock
00:06:56.178 15:48:25 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 3591155 ']'
00:06:56.178 15:48:25 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock
00:06:56.179 15:48:25 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100
00:06:56.179 15:48:25 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...
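Stripped of the xtrace noise, each app_repeat round above is the same write/verify round trip. A condensed sketch using only commands that appear in the trace ($spdk stands for the workspace checkout path shown above and is the only shorthand introduced here):

    rpc() { "$spdk"/scripts/rpc.py -s /var/tmp/spdk-nbd.sock "$@"; }
    rpc bdev_malloc_create 64 4096                         # prints Malloc0
    rpc nbd_start_disk Malloc0 /dev/nbd0                   # prints /dev/nbd0
    dd if=/dev/urandom of=nbdrandtest bs=4096 count=256    # 1 MiB of random data
    dd if=nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct
    cmp -b -n 1M nbdrandtest /dev/nbd0                     # readback must match byte for byte
    rpc nbd_stop_disk /dev/nbd0
    rpc spdk_kill_instance SIGTERM                         # app_repeat re-inits for the next round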
00:06:56.179 15:48:25 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable
00:06:56.179 15:48:25 event.app_repeat -- common/autotest_common.sh@10 -- # set +x
00:06:56.436 15:48:25 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:06:56.436 15:48:25 event.app_repeat -- common/autotest_common.sh@862 -- # return 0
00:06:56.436 15:48:25 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:06:56.694 Malloc0
00:06:56.694 15:48:25 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:06:56.694 Malloc1
00:06:56.694 15:48:25 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:06:56.694 15:48:25 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:06:56.694 15:48:25 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1')
00:06:56.694 15:48:25 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list
00:06:56.694 15:48:25 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:06:56.694 15:48:25 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list
00:06:56.694 15:48:25 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:06:56.694 15:48:25 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:06:56.694 15:48:25 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1')
00:06:56.694 15:48:25 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list
00:06:56.694 15:48:25 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:06:56.694 15:48:25 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list
00:06:56.694 15:48:25 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i
00:06:56.694 15:48:25 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:06:56.694 15:48:25 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:06:56.695 15:48:25 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0
00:06:56.952 /dev/nbd0
00:06:56.952 15:48:25 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:06:56.952 15:48:25 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:06:56.952 15:48:25 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0
00:06:56.952 15:48:25 event.app_repeat -- common/autotest_common.sh@867 -- # local i
00:06:56.952 15:48:25 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 ))
00:06:56.952 15:48:25 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 ))
00:06:56.952 15:48:25 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions
00:06:56.952 15:48:25 event.app_repeat -- common/autotest_common.sh@871 -- # break
00:06:56.952 15:48:25 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 ))
00:06:56.952 15:48:25 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 ))
00:06:56.952 15:48:25 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:06:56.952 1+0 records in
00:06:56.952 1+0 records out
00:06:56.952 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000198966 s, 20.6 MB/s
00:06:56.952 15:48:25 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest
00:06:56.952 15:48:25 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096
00:06:56.952 15:48:25 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest
00:06:56.952 15:48:25 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']'
00:06:56.952 15:48:25 event.app_repeat -- common/autotest_common.sh@887 -- # return 0
00:06:56.952 15:48:25 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:06:56.952 15:48:25 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:06:56.952 15:48:25 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1
00:06:57.249 /dev/nbd1
00:06:57.249 15:48:25 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1
00:06:57.249 15:48:25 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1
00:06:57.249 15:48:25 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1
00:06:57.249 15:48:25 event.app_repeat -- common/autotest_common.sh@867 -- # local i
00:06:57.249 15:48:25 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 ))
00:06:57.249 15:48:25 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 ))
00:06:57.249 15:48:25 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions
00:06:57.249 15:48:25 event.app_repeat -- common/autotest_common.sh@871 -- # break
00:06:57.249 15:48:25 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 ))
00:06:57.249 15:48:25 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 ))
00:06:57.249 15:48:25 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:06:57.249 1+0 records in
00:06:57.249 1+0 records out
00:06:57.249 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00022491 s, 18.2 MB/s
00:06:57.249 15:48:25 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest
00:06:57.249 15:48:25 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096
00:06:57.249 15:48:25 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest
00:06:57.249 15:48:25 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']'
00:06:57.249 15:48:25 event.app_repeat -- common/autotest_common.sh@887 -- # return 0
00:06:57.249 15:48:25 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:06:57.249 15:48:25 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:06:57.249 15:48:25 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:06:57.249 15:48:25 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:06:57.249 15:48:25 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:06:57.507 15:48:26 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[
00:06:57.507 {
00:06:57.507 "nbd_device": "/dev/nbd0",
00:06:57.507 "bdev_name": "Malloc0"
00:06:57.507 },
00:06:57.507 {
00:06:57.507 "nbd_device": "/dev/nbd1",
00:06:57.507 "bdev_name": "Malloc1"
00:06:57.507 }
00:06:57.507 ]'
00:06:57.507 15:48:26 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[
00:06:57.507 {
00:06:57.507 "nbd_device": "/dev/nbd0",
00:06:57.507 "bdev_name": "Malloc0"
00:06:57.507 },
00:06:57.507 {
00:06:57.507 "nbd_device": "/dev/nbd1",
00:06:57.507 "bdev_name": "Malloc1"
00:06:57.507 }
00:06:57.507 ]'
00:06:57.507 15:48:26 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:06:57.507 15:48:26 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0
00:06:57.507 /dev/nbd1'
00:06:57.507 15:48:26 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0
00:06:57.507 /dev/nbd1'
00:06:57.507 15:48:26 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:06:57.507 15:48:26 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2
00:06:57.507 15:48:26 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2
00:06:57.507 15:48:26 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2
00:06:57.507 15:48:26 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']'
00:06:57.507 15:48:26 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write
00:06:57.507 15:48:26 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:06:57.507 15:48:26 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list
00:06:57.507 15:48:26 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write
00:06:57.507 15:48:26 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest
00:06:57.507 15:48:26 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']'
00:06:57.507 15:48:26 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256
00:06:57.507 256+0 records in
00:06:57.507 256+0 records out
00:06:57.507 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0103297 s, 102 MB/s
00:06:57.507 15:48:26 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:06:57.507 15:48:26 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct
00:06:57.507 256+0 records in
00:06:57.507 256+0 records out
00:06:57.507 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0136451 s, 76.8 MB/s
00:06:57.507 15:48:26 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:06:57.507 15:48:26 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct
00:06:57.507 256+0 records in
00:06:57.507 256+0 records out
00:06:57.507 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0142208 s, 73.7 MB/s
00:06:57.507 15:48:26 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify
00:06:57.507 15:48:26 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:06:57.507 15:48:26 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list
00:06:57.507 15:48:26 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify
00:06:57.507 15:48:26 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest
00:06:57.507 15:48:26 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']'
00:06:57.507 15:48:26 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']'
00:06:57.507 15:48:26 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:06:57.507 15:48:26 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0
00:06:57.507 15:48:26 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:06:57.507 15:48:26 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1
00:06:57.507 15:48:26 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest
00:06:57.507 15:48:26 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1'
00:06:57.507 15:48:26 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:06:57.507 15:48:26 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:06:57.507 15:48:26 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list
00:06:57.507 15:48:26 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i
00:06:57.507 15:48:26 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:06:57.507 15:48:26 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
00:06:57.765 15:48:26 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:06:57.765 15:48:26 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:06:57.765 15:48:26 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:06:57.765 15:48:26 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:06:57.765 15:48:26 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:06:57.765 15:48:26 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:06:57.765 15:48:26 event.app_repeat -- bdev/nbd_common.sh@41 -- # break
00:06:57.765 15:48:26 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0
00:06:57.765 15:48:26 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:06:57.765 15:48:26 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1
00:06:57.765 15:48:26 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1
00:06:57.765 15:48:26 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1
00:06:57.765 15:48:26 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1
00:06:57.765 15:48:26 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:06:57.765 15:48:26 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:06:57.765 15:48:26 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions
00:06:57.765 15:48:26 event.app_repeat -- bdev/nbd_common.sh@41 -- # break
00:06:57.765 15:48:26 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0
00:06:57.765 15:48:26 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:06:57.765 15:48:26 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:06:57.765 15:48:26 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:06:58.023 15:48:26 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]'
00:06:58.023 15:48:26 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]'
00:06:58.023 15:48:26 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:06:58.023 15:48:26 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name=
00:06:58.023 15:48:26 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo ''
00:06:58.023 15:48:26 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:06:58.023 15:48:26 event.app_repeat -- bdev/nbd_common.sh@65 -- # true
00:06:58.023 15:48:26 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0
00:06:58.023 15:48:26 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0
00:06:58.023 15:48:26 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0
00:06:58.023 15:48:26 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']'
00:06:58.023 15:48:26 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0
00:06:58.023 15:48:26 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM
00:06:58.281 15:48:27 event.app_repeat -- event/event.sh@35 -- # sleep 3
00:06:58.539 [2024-07-15 15:48:27.270322] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2
00:06:58.539 [2024-07-15 15:48:27.337646] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:06:58.539 [2024-07-15 15:48:27.337648] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:06:58.539 [2024-07-15 15:48:27.379344] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered.
00:06:58.539 [2024-07-15 15:48:27.379393] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered.
00:07:01.815 15:48:30 event.app_repeat -- event/event.sh@23 -- # for i in {0..2}
00:07:01.815 15:48:30 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2'
spdk_app_start Round 2
15:48:30 event.app_repeat -- event/event.sh@25 -- # waitforlisten 3591155 /var/tmp/spdk-nbd.sock
00:07:01.815 15:48:30 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 3591155 ']'
00:07:01.815 15:48:30 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock
00:07:01.815 15:48:30 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100
00:07:01.815 15:48:30 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...
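The teardown check above counts leftover nbd devices by flattening the nbd_get_disks JSON with jq; the bare `true` in the trace guards the pipeline because grep -c exits non-zero when nothing matches, even though it still prints 0. The same check condensed to a sketch (rpc as defined in the earlier sketch):

    count=$(rpc nbd_get_disks | jq -r '.[] | .nbd_device' | grep -c /dev/nbd || true)
    [ "$count" -ne 0 ] && echo 'nbd devices leaked after stop' >&2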
00:07:01.815 15:48:30 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable
00:07:01.815 15:48:30 event.app_repeat -- common/autotest_common.sh@10 -- # set +x
00:07:01.815 15:48:30 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:07:01.815 15:48:30 event.app_repeat -- common/autotest_common.sh@862 -- # return 0
00:07:01.815 15:48:30 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:07:01.815 Malloc0
00:07:01.815 15:48:30 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:07:01.815 Malloc1
00:07:01.815 15:48:30 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:07:01.815 15:48:30 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:07:01.815 15:48:30 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1')
00:07:01.815 15:48:30 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list
00:07:01.815 15:48:30 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:07:01.815 15:48:30 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list
00:07:01.815 15:48:30 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:07:01.815 15:48:30 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:07:01.815 15:48:30 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1')
00:07:01.815 15:48:30 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list
00:07:01.815 15:48:30 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:07:01.815 15:48:30 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list
00:07:01.815 15:48:30 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i
00:07:01.815 15:48:30 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:07:01.815 15:48:30 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:07:01.815 15:48:30 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0
00:07:02.074 /dev/nbd0
00:07:02.074 15:48:30 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:07:02.074 15:48:30 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:07:02.074 15:48:30 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0
00:07:02.074 15:48:30 event.app_repeat -- common/autotest_common.sh@867 -- # local i
00:07:02.074 15:48:30 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 ))
00:07:02.074 15:48:30 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 ))
00:07:02.074 15:48:30 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions
00:07:02.074 15:48:30 event.app_repeat -- common/autotest_common.sh@871 -- # break
00:07:02.074 15:48:30 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 ))
00:07:02.074 15:48:30 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 ))
00:07:02.074 15:48:30 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:07:02.074 1+0 records in
00:07:02.074 1+0 records out
00:07:02.074 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000182531 s, 22.4 MB/s
00:07:02.074 15:48:30 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest
00:07:02.074 15:48:30 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096
00:07:02.074 15:48:30 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest
00:07:02.074 15:48:30 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']'
00:07:02.074 15:48:30 event.app_repeat -- common/autotest_common.sh@887 -- # return 0
00:07:02.074 15:48:30 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:07:02.074 15:48:30 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:07:02.074 15:48:30 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1
00:07:02.074 /dev/nbd1
00:07:02.074 15:48:31 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1
00:07:02.331 15:48:31 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1
00:07:02.331 15:48:31 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1
00:07:02.331 15:48:31 event.app_repeat -- common/autotest_common.sh@867 -- # local i
00:07:02.331 15:48:31 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 ))
00:07:02.331 15:48:31 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 ))
00:07:02.331 15:48:31 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions
00:07:02.331 15:48:31 event.app_repeat -- common/autotest_common.sh@871 -- # break
00:07:02.332 15:48:31 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 ))
00:07:02.332 15:48:31 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 ))
00:07:02.332 15:48:31 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:07:02.332 1+0 records in
00:07:02.332 1+0 records out
00:07:02.332 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000182221 s, 22.5 MB/s
00:07:02.332 15:48:31 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest
00:07:02.332 15:48:31 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096
00:07:02.332 15:48:31 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest
00:07:02.332 15:48:31 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']'
00:07:02.332 15:48:31 event.app_repeat -- common/autotest_common.sh@887 -- # return 0
00:07:02.332 15:48:31 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:07:02.332 15:48:31 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:07:02.332 15:48:31 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:07:02.332 15:48:31 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:07:02.332 15:48:31 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:07:02.332 15:48:31 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[
00:07:02.332 {
00:07:02.332 "nbd_device": "/dev/nbd0",
00:07:02.332 "bdev_name": "Malloc0"
00:07:02.332 },
00:07:02.332 {
00:07:02.332 "nbd_device": "/dev/nbd1",
00:07:02.332 "bdev_name": "Malloc1"
00:07:02.332 }
00:07:02.332 ]'
00:07:02.332 15:48:31 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[
00:07:02.332 {
00:07:02.332 "nbd_device": "/dev/nbd0",
00:07:02.332 "bdev_name": "Malloc0"
00:07:02.332 },
00:07:02.332 {
00:07:02.332 "nbd_device": "/dev/nbd1",
00:07:02.332 "bdev_name": "Malloc1"
00:07:02.332 }
00:07:02.332 ]'
00:07:02.332 15:48:31 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:07:02.332 15:48:31 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0
00:07:02.332 /dev/nbd1'
00:07:02.332 15:48:31 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0
00:07:02.332 /dev/nbd1'
00:07:02.332 15:48:31 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:07:02.332 15:48:31 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2
00:07:02.332 15:48:31 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2
00:07:02.332 15:48:31 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2
00:07:02.332 15:48:31 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']'
00:07:02.332 15:48:31 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write
00:07:02.332 15:48:31 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:07:02.332 15:48:31 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list
00:07:02.332 15:48:31 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write
00:07:02.332 15:48:31 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest
00:07:02.332 15:48:31 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']'
00:07:02.332 15:48:31 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256
00:07:02.332 256+0 records in
00:07:02.332 256+0 records out
00:07:02.332 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00996957 s, 105 MB/s
00:07:02.332 15:48:31 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:07:02.332 15:48:31 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct
00:07:02.589 256+0 records in
00:07:02.589 256+0 records out
00:07:02.589 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.014161 s, 74.0 MB/s
00:07:02.589 15:48:31 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:07:02.589 15:48:31 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct
00:07:02.589 256+0 records in
00:07:02.589 256+0 records out
00:07:02.589 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0156349 s, 67.1 MB/s
00:07:02.589 15:48:31 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify
00:07:02.589 15:48:31 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:07:02.589 15:48:31 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list
00:07:02.589 15:48:31 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify
00:07:02.589 15:48:31 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest
00:07:02.589 15:48:31 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']'
00:07:02.589 15:48:31 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']'
00:07:02.589 15:48:31 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:07:02.589 15:48:31 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0
00:07:02.589 15:48:31 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:07:02.589 15:48:31 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1
00:07:02.589 15:48:31 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest
00:07:02.589 15:48:31 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1'
00:07:02.589 15:48:31 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:07:02.589 15:48:31 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:07:02.589 15:48:31 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list
00:07:02.589 15:48:31 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i
00:07:02.589 15:48:31 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:07:02.589 15:48:31 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
00:07:02.589 15:48:31 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:07:02.589 15:48:31 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:07:02.589 15:48:31 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:07:02.589 15:48:31 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:07:02.589 15:48:31 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:07:02.589 15:48:31 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:07:02.589 15:48:31 event.app_repeat -- bdev/nbd_common.sh@41 -- # break
00:07:02.589 15:48:31 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0
00:07:02.589 15:48:31 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:07:02.589 15:48:31 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1
00:07:02.846 15:48:31 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1
00:07:02.846 15:48:31 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1
00:07:02.846 15:48:31 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1
00:07:02.846 15:48:31 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:07:02.846 15:48:31 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:07:02.846 15:48:31 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions
00:07:02.846 15:48:31 event.app_repeat -- bdev/nbd_common.sh@41 -- # break
00:07:02.846 15:48:31 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0
00:07:02.846 15:48:31 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:07:02.846 15:48:31 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:07:02.846 15:48:31 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:07:03.103 15:48:31 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]'
00:07:03.103 15:48:31 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]'
00:07:03.103 15:48:31 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:07:03.103 15:48:31 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name=
00:07:03.103 15:48:31 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo ''
00:07:03.103 15:48:31 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:07:03.103 15:48:31 event.app_repeat -- bdev/nbd_common.sh@65 -- # true
00:07:03.103 15:48:31 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0
00:07:03.103 15:48:31 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0
00:07:03.103 15:48:31 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0
00:07:03.103 15:48:31 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']'
00:07:03.103 15:48:31 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0
00:07:03.103 15:48:31 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM
00:07:03.360 15:48:32 event.app_repeat -- event/event.sh@35 -- # sleep 3
00:07:03.618 [2024-07-15 15:48:32.300543] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2
00:07:03.618 [2024-07-15 15:48:32.367570] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:07:03.618 [2024-07-15 15:48:32.367581] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:07:03.618 [2024-07-15 15:48:32.408889] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered.
00:07:03.618 [2024-07-15 15:48:32.408931] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered.
00:07:06.897 15:48:35 event.app_repeat -- event/event.sh@38 -- # waitforlisten 3591155 /var/tmp/spdk-nbd.sock
00:07:06.897 15:48:35 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 3591155 ']'
00:07:06.897 15:48:35 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock
00:07:06.897 15:48:35 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100
00:07:06.897 15:48:35 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...
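Taken together, the three rounds above follow one driver loop. A sketch assembled from the event.sh trace lines (app_repeat re-initializes itself after each spdk_kill_instance, which is why waitforlisten runs once per round and once more before the final teardown):

    for i in {0..2}; do
        echo "spdk_app_start Round $i"
        waitforlisten "$repeat_pid" /var/tmp/spdk-nbd.sock
        # ...create Malloc0/Malloc1, attach /dev/nbd0 and /dev/nbd1, write/verify, detach...
        rpc spdk_kill_instance SIGTERM
        sleep 3
    done
    waitforlisten "$repeat_pid" /var/tmp/spdk-nbd.sock   # Round 3 startup, then killprocess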
00:07:06.897 15:48:35 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable
00:07:06.897 15:48:35 event.app_repeat -- common/autotest_common.sh@10 -- # set +x
00:07:06.897 15:48:35 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:07:06.897 15:48:35 event.app_repeat -- common/autotest_common.sh@862 -- # return 0
00:07:06.897 15:48:35 event.app_repeat -- event/event.sh@39 -- # killprocess 3591155
00:07:06.897 15:48:35 event.app_repeat -- common/autotest_common.sh@948 -- # '[' -z 3591155 ']'
00:07:06.897 15:48:35 event.app_repeat -- common/autotest_common.sh@952 -- # kill -0 3591155
00:07:06.897 15:48:35 event.app_repeat -- common/autotest_common.sh@953 -- # uname
00:07:06.897 15:48:35 event.app_repeat -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:07:06.897 15:48:35 event.app_repeat -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3591155
00:07:06.897 15:48:35 event.app_repeat -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:07:06.897 15:48:35 event.app_repeat -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:07:06.897 15:48:35 event.app_repeat -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3591155'
killing process with pid 3591155
15:48:35 event.app_repeat -- common/autotest_common.sh@967 -- # kill 3591155
00:07:06.897 15:48:35 event.app_repeat -- common/autotest_common.sh@972 -- # wait 3591155
00:07:06.897 spdk_app_start is called in Round 0.
00:07:06.897 Shutdown signal received, stop current app iteration
00:07:06.897 Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 reinitialization...
00:07:06.897 spdk_app_start is called in Round 1.
00:07:06.897 Shutdown signal received, stop current app iteration
00:07:06.897 Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 reinitialization...
00:07:06.897 spdk_app_start is called in Round 2.
00:07:06.897 Shutdown signal received, stop current app iteration
00:07:06.897 Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 reinitialization...
00:07:06.897 spdk_app_start is called in Round 3.
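killprocess, traced here and in the scheduler teardown earlier, only signals processes that still exist and refuses to kill sudo itself. A minimal sketch of that logic (the real helper in common/autotest_common.sh handles the sudo case by signalling the child process instead; that branch is omitted as an assumption here):

    killprocess() {
        local pid=$1 process_name=
        [ -z "$pid" ] && return 1
        kill -0 "$pid" 2>/dev/null || return 0      # already gone, nothing to do
        [ "$(uname)" = Linux ] && process_name=$(ps --no-headers -o comm= "$pid")
        [ "$process_name" = sudo ] && return 1      # never kill sudo directly
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"                                 # reap it so the test sees the exit
    }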
00:07:06.897 Shutdown signal received, stop current app iteration 00:07:06.897 15:48:35 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:07:06.897 15:48:35 event.app_repeat -- event/event.sh@42 -- # return 0 00:07:06.897 00:07:06.897 real 0m16.156s 00:07:06.897 user 0m34.997s 00:07:06.897 sys 0m2.302s 00:07:06.897 15:48:35 event.app_repeat -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:06.897 15:48:35 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:06.897 ************************************ 00:07:06.897 END TEST app_repeat 00:07:06.897 ************************************ 00:07:06.897 15:48:35 event -- common/autotest_common.sh@1142 -- # return 0 00:07:06.897 15:48:35 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:07:06.897 15:48:35 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:07:06.897 15:48:35 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:06.897 15:48:35 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:06.897 15:48:35 event -- common/autotest_common.sh@10 -- # set +x 00:07:06.897 ************************************ 00:07:06.897 START TEST cpu_locks 00:07:06.897 ************************************ 00:07:06.897 15:48:35 event.cpu_locks -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:07:06.897 * Looking for test storage... 00:07:06.897 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:07:06.897 15:48:35 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:07:06.897 15:48:35 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:07:06.897 15:48:35 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:07:06.897 15:48:35 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:07:06.897 15:48:35 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:06.897 15:48:35 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:06.897 15:48:35 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:06.897 ************************************ 00:07:06.897 START TEST default_locks 00:07:06.897 ************************************ 00:07:06.897 15:48:35 event.cpu_locks.default_locks -- common/autotest_common.sh@1123 -- # default_locks 00:07:06.897 15:48:35 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=3594129 00:07:06.897 15:48:35 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 3594129 00:07:06.897 15:48:35 event.cpu_locks.default_locks -- common/autotest_common.sh@829 -- # '[' -z 3594129 ']' 00:07:06.897 15:48:35 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:06.897 15:48:35 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:06.897 15:48:35 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:06.897 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
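cpu_locks exercises SPDK's per-core lock files: a target started with -m 0x1 claims core 0 and holds a lock on /var/tmp/spdk_cpu_lock_000, which the tests observe with lslocks. The locks_exist check traced below boils down to the following sketch (spdk_tgt path shortened for readability):

    ./build/bin/spdk_tgt -m 0x1 &                  # claims core 0 and its lock file
    pid=$!
    # once the RPC socket is up:
    lslocks -p "$pid" | grep -q spdk_cpu_lock      # succeeds while the core lock is held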
00:07:06.897 15:48:35 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:06.897 15:48:35 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:07:06.897 15:48:35 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:07:06.897 [2024-07-15 15:48:35.723657] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 00:07:06.897 [2024-07-15 15:48:35.723703] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3594129 ] 00:07:06.897 EAL: No free 2048 kB hugepages reported on node 1 00:07:06.897 [2024-07-15 15:48:35.778164] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:07.155 [2024-07-15 15:48:35.859513] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:07.720 15:48:36 event.cpu_locks.default_locks -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:07.721 15:48:36 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # return 0 00:07:07.721 15:48:36 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 3594129 00:07:07.721 15:48:36 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 3594129 00:07:07.721 15:48:36 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:07.978 lslocks: write error 00:07:07.978 15:48:36 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 3594129 00:07:07.978 15:48:36 event.cpu_locks.default_locks -- common/autotest_common.sh@948 -- # '[' -z 3594129 ']' 00:07:07.978 15:48:36 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # kill -0 3594129 00:07:07.978 15:48:36 event.cpu_locks.default_locks -- common/autotest_common.sh@953 -- # uname 00:07:07.978 15:48:36 event.cpu_locks.default_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:07.978 15:48:36 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3594129 00:07:07.978 15:48:36 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:07.978 15:48:36 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:07.979 15:48:36 event.cpu_locks.default_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3594129' 00:07:07.979 killing process with pid 3594129 00:07:07.979 15:48:36 event.cpu_locks.default_locks -- common/autotest_common.sh@967 -- # kill 3594129 00:07:07.979 15:48:36 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # wait 3594129 00:07:08.235 15:48:37 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 3594129 00:07:08.235 15:48:37 event.cpu_locks.default_locks -- common/autotest_common.sh@648 -- # local es=0 00:07:08.235 15:48:37 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 3594129 00:07:08.235 15:48:37 event.cpu_locks.default_locks -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:07:08.235 15:48:37 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:08.235 15:48:37 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:07:08.235 15:48:37 
event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:08.235 15:48:37 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # waitforlisten 3594129 00:07:08.235 15:48:37 event.cpu_locks.default_locks -- common/autotest_common.sh@829 -- # '[' -z 3594129 ']' 00:07:08.235 15:48:37 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:08.235 15:48:37 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:08.235 15:48:37 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:08.235 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:08.235 15:48:37 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:08.235 15:48:37 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:07:08.235 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 844: kill: (3594129) - No such process 00:07:08.235 ERROR: process (pid: 3594129) is no longer running 00:07:08.235 15:48:37 event.cpu_locks.default_locks -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:08.235 15:48:37 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # return 1 00:07:08.235 15:48:37 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # es=1 00:07:08.235 15:48:37 event.cpu_locks.default_locks -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:08.235 15:48:37 event.cpu_locks.default_locks -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:08.235 15:48:37 event.cpu_locks.default_locks -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:08.235 15:48:37 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:07:08.235 15:48:37 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:07:08.236 15:48:37 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:07:08.236 15:48:37 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:07:08.236 00:07:08.236 real 0m1.378s 00:07:08.236 user 0m1.450s 00:07:08.236 sys 0m0.406s 00:07:08.236 15:48:37 event.cpu_locks.default_locks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:08.236 15:48:37 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:07:08.236 ************************************ 00:07:08.236 END TEST default_locks 00:07:08.236 ************************************ 00:07:08.236 15:48:37 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:07:08.236 15:48:37 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:07:08.236 15:48:37 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:08.236 15:48:37 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:08.236 15:48:37 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:08.236 ************************************ 00:07:08.236 START TEST default_locks_via_rpc 00:07:08.236 ************************************ 00:07:08.236 15:48:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1123 -- # default_locks_via_rpc 00:07:08.236 15:48:37 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=3594406 00:07:08.236 15:48:37 
event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 3594406 00:07:08.236 15:48:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 3594406 ']' 00:07:08.236 15:48:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:08.236 15:48:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:08.236 15:48:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:08.236 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:08.236 15:48:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:08.236 15:48:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:08.236 15:48:37 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:07:08.236 [2024-07-15 15:48:37.159363] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 00:07:08.236 [2024-07-15 15:48:37.159409] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3594406 ] 00:07:08.493 EAL: No free 2048 kB hugepages reported on node 1 00:07:08.493 [2024-07-15 15:48:37.212778] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:08.493 [2024-07-15 15:48:37.292651] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:09.058 15:48:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:09.058 15:48:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:07:09.058 15:48:37 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:07:09.059 15:48:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:09.059 15:48:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:09.059 15:48:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:09.059 15:48:37 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:07:09.059 15:48:37 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:07:09.059 15:48:37 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:07:09.059 15:48:37 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:07:09.059 15:48:37 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:07:09.059 15:48:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:09.059 15:48:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:09.059 15:48:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:09.059 15:48:37 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 3594406 00:07:09.059 15:48:37 event.cpu_locks.default_locks_via_rpc -- 
event/cpu_locks.sh@22 -- # lslocks -p 3594406 00:07:09.059 15:48:37 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:09.316 15:48:38 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 3594406 00:07:09.316 15:48:38 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@948 -- # '[' -z 3594406 ']' 00:07:09.316 15:48:38 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # kill -0 3594406 00:07:09.574 15:48:38 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@953 -- # uname 00:07:09.574 15:48:38 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:09.574 15:48:38 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3594406 00:07:09.574 15:48:38 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:09.574 15:48:38 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:09.574 15:48:38 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3594406' 00:07:09.574 killing process with pid 3594406 00:07:09.574 15:48:38 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@967 -- # kill 3594406 00:07:09.574 15:48:38 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # wait 3594406 00:07:09.832 00:07:09.832 real 0m1.487s 00:07:09.832 user 0m1.571s 00:07:09.832 sys 0m0.455s 00:07:09.832 15:48:38 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:09.832 15:48:38 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:09.832 ************************************ 00:07:09.832 END TEST default_locks_via_rpc 00:07:09.832 ************************************ 00:07:09.832 15:48:38 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:07:09.832 15:48:38 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:07:09.832 15:48:38 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:09.832 15:48:38 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:09.832 15:48:38 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:09.832 ************************************ 00:07:09.832 START TEST non_locking_app_on_locked_coremask 00:07:09.832 ************************************ 00:07:09.832 15:48:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1123 -- # non_locking_app_on_locked_coremask 00:07:09.832 15:48:38 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=3594775 00:07:09.832 15:48:38 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 3594775 /var/tmp/spdk.sock 00:07:09.832 15:48:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 3594775 ']' 00:07:09.832 15:48:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:09.832 15:48:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:09.832 15:48:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # 
echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:09.832 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:09.832 15:48:38 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:07:09.832 15:48:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:09.832 15:48:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:09.832 [2024-07-15 15:48:38.715123] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 00:07:09.832 [2024-07-15 15:48:38.715168] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3594775 ] 00:07:09.832 EAL: No free 2048 kB hugepages reported on node 1 00:07:10.090 [2024-07-15 15:48:38.767875] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:10.090 [2024-07-15 15:48:38.847587] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:10.656 15:48:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:10.656 15:48:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:07:10.656 15:48:39 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=3594884 00:07:10.656 15:48:39 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:07:10.656 15:48:39 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 3594884 /var/tmp/spdk2.sock 00:07:10.656 15:48:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 3594884 ']' 00:07:10.656 15:48:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:10.656 15:48:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:10.656 15:48:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:10.656 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:10.656 15:48:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:10.656 15:48:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:10.656 [2024-07-15 15:48:39.543032] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 
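The second target above reuses core mask 0x1 but passes --disable-cpumask-locks and its own RPC socket, so it can come up while the first instance still holds the core-0 lock. Reduced to the two launches (paths shortened):

    ./build/bin/spdk_tgt -m 0x1 &                                                    # holds /var/tmp/spdk_cpu_lock_000
    ./build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &     # same mask, lock checking off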
00:07:10.656 [2024-07-15 15:48:39.543079] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3594884 ] 00:07:10.656 EAL: No free 2048 kB hugepages reported on node 1 00:07:10.914 [2024-07-15 15:48:39.617818] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:07:10.914 [2024-07-15 15:48:39.617841] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:10.914 [2024-07-15 15:48:39.763675] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:11.480 15:48:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:11.480 15:48:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:07:11.480 15:48:40 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 3594775 00:07:11.480 15:48:40 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 3594775 00:07:11.480 15:48:40 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:12.046 lslocks: write error 00:07:12.046 15:48:40 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 3594775 00:07:12.046 15:48:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 3594775 ']' 00:07:12.046 15:48:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 3594775 00:07:12.046 15:48:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:07:12.046 15:48:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:12.046 15:48:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3594775 00:07:12.046 15:48:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:12.046 15:48:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:12.046 15:48:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3594775' 00:07:12.046 killing process with pid 3594775 00:07:12.046 15:48:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 3594775 00:07:12.046 15:48:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 3594775 00:07:12.613 15:48:41 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 3594884 00:07:12.613 15:48:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 3594884 ']' 00:07:12.613 15:48:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 3594884 00:07:12.613 15:48:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:07:12.613 15:48:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:12.613 15:48:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- 
# ps --no-headers -o comm= 3594884 00:07:12.613 15:48:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:12.613 15:48:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:12.613 15:48:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3594884' 00:07:12.613 killing process with pid 3594884 00:07:12.613 15:48:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 3594884 00:07:12.613 15:48:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 3594884 00:07:13.180 00:07:13.180 real 0m3.151s 00:07:13.180 user 0m3.369s 00:07:13.180 sys 0m0.876s 00:07:13.180 15:48:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:13.180 15:48:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:13.180 ************************************ 00:07:13.180 END TEST non_locking_app_on_locked_coremask 00:07:13.180 ************************************ 00:07:13.180 15:48:41 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:07:13.180 15:48:41 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:07:13.180 15:48:41 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:13.180 15:48:41 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:13.180 15:48:41 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:13.180 ************************************ 00:07:13.180 START TEST locking_app_on_unlocked_coremask 00:07:13.180 ************************************ 00:07:13.180 15:48:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1123 -- # locking_app_on_unlocked_coremask 00:07:13.180 15:48:41 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=3595372 00:07:13.180 15:48:41 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 3595372 /var/tmp/spdk.sock 00:07:13.180 15:48:41 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:07:13.180 15:48:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@829 -- # '[' -z 3595372 ']' 00:07:13.180 15:48:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:13.180 15:48:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:13.180 15:48:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:13.180 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
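locking_app_on_unlocked_coremask inverts the previous case: the first target starts with --disable-cpumask-locks, leaving core 0 unlocked, and the second target is then launched without the flag and is expected to take the lock itself. In sketch form:

    ./build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks &       # first app: takes no core lock
    ./build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock &        # second app: claims core 0 normally
    pid2=$!
    lslocks -p "$pid2" | grep -q spdk_cpu_lock                  # the lock now belongs to the second pid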
00:07:13.180 15:48:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:13.180 15:48:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:13.180 [2024-07-15 15:48:41.933240] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 00:07:13.180 [2024-07-15 15:48:41.933282] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3595372 ] 00:07:13.180 EAL: No free 2048 kB hugepages reported on node 1 00:07:13.180 [2024-07-15 15:48:41.987207] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:07:13.180 [2024-07-15 15:48:41.987242] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:13.180 [2024-07-15 15:48:42.055157] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:14.115 15:48:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:14.115 15:48:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # return 0 00:07:14.115 15:48:42 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=3595485 00:07:14.115 15:48:42 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 3595485 /var/tmp/spdk2.sock 00:07:14.115 15:48:42 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:07:14.115 15:48:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@829 -- # '[' -z 3595485 ']' 00:07:14.115 15:48:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:14.115 15:48:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:14.115 15:48:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:14.115 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:14.115 15:48:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:14.115 15:48:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:14.115 [2024-07-15 15:48:42.774451] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 
00:07:14.115 [2024-07-15 15:48:42.774505] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3595485 ] 00:07:14.115 EAL: No free 2048 kB hugepages reported on node 1 00:07:14.115 [2024-07-15 15:48:42.866250] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:14.115 [2024-07-15 15:48:43.012230] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:14.746 15:48:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:14.746 15:48:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # return 0 00:07:14.746 15:48:43 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 3595485 00:07:14.746 15:48:43 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 3595485 00:07:14.746 15:48:43 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:15.311 lslocks: write error 00:07:15.311 15:48:43 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 3595372 00:07:15.311 15:48:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@948 -- # '[' -z 3595372 ']' 00:07:15.311 15:48:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # kill -0 3595372 00:07:15.311 15:48:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # uname 00:07:15.311 15:48:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:15.311 15:48:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3595372 00:07:15.311 15:48:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:15.311 15:48:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:15.311 15:48:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3595372' 00:07:15.311 killing process with pid 3595372 00:07:15.311 15:48:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@967 -- # kill 3595372 00:07:15.311 15:48:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # wait 3595372 00:07:15.879 15:48:44 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 3595485 00:07:15.879 15:48:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@948 -- # '[' -z 3595485 ']' 00:07:15.879 15:48:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # kill -0 3595485 00:07:15.879 15:48:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # uname 00:07:15.879 15:48:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:15.879 15:48:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3595485 00:07:15.879 15:48:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # 
process_name=reactor_0 00:07:15.879 15:48:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:15.879 15:48:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3595485' 00:07:15.879 killing process with pid 3595485 00:07:15.879 15:48:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@967 -- # kill 3595485 00:07:15.879 15:48:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # wait 3595485 00:07:16.139 00:07:16.139 real 0m3.118s 00:07:16.139 user 0m3.355s 00:07:16.139 sys 0m0.875s 00:07:16.139 15:48:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:16.139 15:48:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:16.139 ************************************ 00:07:16.139 END TEST locking_app_on_unlocked_coremask 00:07:16.139 ************************************ 00:07:16.139 15:48:45 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:07:16.139 15:48:45 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:07:16.139 15:48:45 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:16.139 15:48:45 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:16.139 15:48:45 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:16.139 ************************************ 00:07:16.139 START TEST locking_app_on_locked_coremask 00:07:16.139 ************************************ 00:07:16.139 15:48:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1123 -- # locking_app_on_locked_coremask 00:07:16.139 15:48:45 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=3595880 00:07:16.139 15:48:45 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 3595880 /var/tmp/spdk.sock 00:07:16.139 15:48:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 3595880 ']' 00:07:16.139 15:48:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:16.139 15:48:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:16.139 15:48:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:16.139 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:16.139 15:48:45 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:07:16.139 15:48:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:16.139 15:48:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:16.397 [2024-07-15 15:48:45.113399] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 
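locking_app_on_locked_coremask keeps lock checking enabled on both sides: the first target claims core 0, and a second launch on the same mask must be refused. The test wraps the harness helper waitforlisten (from autotest_common.sh) in NOT so that the failed startup counts as a pass; roughly:

    ./build/bin/spdk_tgt -m 0x1 &                            # takes the core-0 lock
    ./build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock &     # same mask, lock checking still on
    pid2=$!
    if ! waitforlisten "$pid2" /var/tmp/spdk2.sock; then     # helper never sees the socket appear
        echo "second instance was correctly refused"
    fi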
00:07:16.397 [2024-07-15 15:48:45.113439] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3595880 ] 00:07:16.397 EAL: No free 2048 kB hugepages reported on node 1 00:07:16.397 [2024-07-15 15:48:45.165067] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:16.397 [2024-07-15 15:48:45.244948] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:17.332 15:48:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:17.332 15:48:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:07:17.332 15:48:45 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=3596106 00:07:17.332 15:48:45 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 3596106 /var/tmp/spdk2.sock 00:07:17.332 15:48:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@648 -- # local es=0 00:07:17.332 15:48:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 3596106 /var/tmp/spdk2.sock 00:07:17.332 15:48:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:07:17.332 15:48:45 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:07:17.332 15:48:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:17.332 15:48:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:07:17.332 15:48:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:17.332 15:48:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # waitforlisten 3596106 /var/tmp/spdk2.sock 00:07:17.332 15:48:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 3596106 ']' 00:07:17.332 15:48:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:17.332 15:48:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:17.332 15:48:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:17.332 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:17.332 15:48:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:17.332 15:48:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:17.332 [2024-07-15 15:48:45.952633] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 
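When the second instance dies during startup, waitforlisten reports the pid as gone (the "No such process" lines below) and returns nonzero; the NOT wrapper turns that failure into a pass. A simplified shape of NOT, paraphrasing the trace (the real helper additionally treats exit codes above 128 as signals and can match an expected error string):

    NOT() {
        local es=0
        "$@" || es=$?        # run the wrapped command, remember its failure
        (( es != 0 ))        # invert: NOT succeeds only if the command failed
    }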
00:07:17.332 [2024-07-15 15:48:45.952682] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3596106 ] 00:07:17.332 EAL: No free 2048 kB hugepages reported on node 1 00:07:17.332 [2024-07-15 15:48:46.026719] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 3595880 has claimed it. 00:07:17.332 [2024-07-15 15:48:46.026750] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:07:17.898 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 844: kill: (3596106) - No such process 00:07:17.898 ERROR: process (pid: 3596106) is no longer running 00:07:17.898 15:48:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:17.898 15:48:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 1 00:07:17.898 15:48:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # es=1 00:07:17.898 15:48:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:17.898 15:48:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:17.898 15:48:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:17.898 15:48:46 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 3595880 00:07:17.898 15:48:46 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 3595880 00:07:17.898 15:48:46 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:18.157 lslocks: write error 00:07:18.157 15:48:46 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 3595880 00:07:18.157 15:48:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 3595880 ']' 00:07:18.157 15:48:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 3595880 00:07:18.157 15:48:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:07:18.157 15:48:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:18.157 15:48:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3595880 00:07:18.157 15:48:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:18.157 15:48:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:18.157 15:48:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3595880' 00:07:18.157 killing process with pid 3595880 00:07:18.157 15:48:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 3595880 00:07:18.157 15:48:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 3595880 00:07:18.416 00:07:18.416 real 0m2.266s 00:07:18.416 user 0m2.512s 00:07:18.416 sys 0m0.599s 00:07:18.416 15:48:47 
event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:18.416 15:48:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:18.416 ************************************ 00:07:18.416 END TEST locking_app_on_locked_coremask 00:07:18.416 ************************************ 00:07:18.674 15:48:47 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:07:18.674 15:48:47 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:07:18.674 15:48:47 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:18.674 15:48:47 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:18.674 15:48:47 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:18.674 ************************************ 00:07:18.674 START TEST locking_overlapped_coremask 00:07:18.674 ************************************ 00:07:18.674 15:48:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1123 -- # locking_overlapped_coremask 00:07:18.674 15:48:47 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=3596374 00:07:18.674 15:48:47 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 3596374 /var/tmp/spdk.sock 00:07:18.674 15:48:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@829 -- # '[' -z 3596374 ']' 00:07:18.674 15:48:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:18.674 15:48:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:18.674 15:48:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:18.674 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:18.674 15:48:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:18.674 15:48:47 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:07:18.674 15:48:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:18.674 [2024-07-15 15:48:47.440754] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 
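locking_overlapped_coremask moves from identical masks to partially overlapping ones: the first target runs with -m 0x7 and the second (below) with -m 0x1c, so only core 2 is contested:

    # 0x07 = 0b00111 -> cores 0,1,2  (first target)
    # 0x1c = 0b11100 -> cores 2,3,4  (second target)
    printf '%#x\n' $((0x07 & 0x1c))   # prints 0x4: core 2 is the only shared core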
00:07:18.674 [2024-07-15 15:48:47.440795] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3596374 ] 00:07:18.674 EAL: No free 2048 kB hugepages reported on node 1 00:07:18.674 [2024-07-15 15:48:47.492477] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:18.675 [2024-07-15 15:48:47.573282] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:18.675 [2024-07-15 15:48:47.573377] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:18.675 [2024-07-15 15:48:47.573376] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:19.610 15:48:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:19.610 15:48:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # return 0 00:07:19.610 15:48:48 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=3596489 00:07:19.610 15:48:48 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 3596489 /var/tmp/spdk2.sock 00:07:19.610 15:48:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@648 -- # local es=0 00:07:19.610 15:48:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 3596489 /var/tmp/spdk2.sock 00:07:19.610 15:48:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:07:19.610 15:48:48 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:07:19.610 15:48:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:19.610 15:48:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:07:19.610 15:48:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:19.610 15:48:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # waitforlisten 3596489 /var/tmp/spdk2.sock 00:07:19.610 15:48:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@829 -- # '[' -z 3596489 ']' 00:07:19.610 15:48:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:19.610 15:48:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:19.610 15:48:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:19.610 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:19.610 15:48:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:19.610 15:48:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:19.610 [2024-07-15 15:48:48.290266] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 
00:07:19.610 [2024-07-15 15:48:48.290318] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3596489 ] 00:07:19.610 EAL: No free 2048 kB hugepages reported on node 1 00:07:19.610 [2024-07-15 15:48:48.366427] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 3596374 has claimed it. 00:07:19.610 [2024-07-15 15:48:48.366468] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:07:20.176 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 844: kill: (3596489) - No such process 00:07:20.176 ERROR: process (pid: 3596489) is no longer running 00:07:20.176 15:48:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:20.176 15:48:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # return 1 00:07:20.176 15:48:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # es=1 00:07:20.176 15:48:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:20.176 15:48:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:20.176 15:48:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:20.176 15:48:48 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:07:20.176 15:48:48 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:07:20.176 15:48:48 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:07:20.177 15:48:48 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:07:20.177 15:48:48 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 3596374 00:07:20.177 15:48:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@948 -- # '[' -z 3596374 ']' 00:07:20.177 15:48:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # kill -0 3596374 00:07:20.177 15:48:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@953 -- # uname 00:07:20.177 15:48:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:20.177 15:48:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3596374 00:07:20.177 15:48:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:20.177 15:48:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:20.177 15:48:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3596374' 00:07:20.177 killing process with pid 3596374 00:07:20.177 15:48:48 event.cpu_locks.locking_overlapped_coremask -- 
common/autotest_common.sh@967 -- # kill 3596374 00:07:20.177 15:48:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # wait 3596374 00:07:20.435 00:07:20.435 real 0m1.887s 00:07:20.435 user 0m5.349s 00:07:20.435 sys 0m0.394s 00:07:20.435 15:48:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:20.435 15:48:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:20.435 ************************************ 00:07:20.435 END TEST locking_overlapped_coremask 00:07:20.435 ************************************ 00:07:20.435 15:48:49 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:07:20.435 15:48:49 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:07:20.435 15:48:49 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:20.435 15:48:49 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:20.435 15:48:49 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:20.435 ************************************ 00:07:20.435 START TEST locking_overlapped_coremask_via_rpc 00:07:20.435 ************************************ 00:07:20.435 15:48:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1123 -- # locking_overlapped_coremask_via_rpc 00:07:20.435 15:48:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=3596643 00:07:20.435 15:48:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 3596643 /var/tmp/spdk.sock 00:07:20.435 15:48:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:07:20.435 15:48:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 3596643 ']' 00:07:20.435 15:48:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:20.435 15:48:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:20.435 15:48:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:20.435 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:20.435 15:48:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:20.435 15:48:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:20.694 [2024-07-15 15:48:49.399358] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 00:07:20.694 [2024-07-15 15:48:49.399399] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3596643 ] 00:07:20.694 EAL: No free 2048 kB hugepages reported on node 1 00:07:20.694 [2024-07-15 15:48:49.453288] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
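locking_overlapped_coremask_via_rpc takes the same overlapping masks but defers locking: both targets boot with --disable-cpumask-locks, and the locks are only taken afterwards through the framework_enable_cpumask_locks RPC, so the collision surfaces as an RPC error rather than a failed launch. The sequence, reduced (rpc.py and spdk_tgt paths shortened):

    ./build/bin/spdk_tgt -m 0x7  --disable-cpumask-locks &                            # cores 0-2, unlocked
    ./build/bin/spdk_tgt -m 0x1c --disable-cpumask-locks -r /var/tmp/spdk2.sock &     # cores 2-4, unlocked
    rpc.py framework_enable_cpumask_locks                            # first target claims cores 0-2
    rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks     # must fail: core 2 already claimed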
00:07:20.694 [2024-07-15 15:48:49.453316] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:20.694 [2024-07-15 15:48:49.526214] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:20.694 [2024-07-15 15:48:49.526312] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:20.694 [2024-07-15 15:48:49.526312] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:21.259 15:48:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:21.259 15:48:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:07:21.259 15:48:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:07:21.259 15:48:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=3596875 00:07:21.259 15:48:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 3596875 /var/tmp/spdk2.sock 00:07:21.259 15:48:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 3596875 ']' 00:07:21.259 15:48:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:21.259 15:48:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:21.259 15:48:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:21.259 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:21.259 15:48:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:21.259 15:48:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:21.517 [2024-07-15 15:48:50.231525] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 00:07:21.517 [2024-07-15 15:48:50.231573] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3596875 ] 00:07:21.517 EAL: No free 2048 kB hugepages reported on node 1 00:07:21.517 [2024-07-15 15:48:50.307194] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
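The two targets are started with overlapping coremasks: 0x7 covers cores 0-2 and 0x1c covers cores 2-4, so they share core 2. Both come up only because --disable-cpumask-locks defers the lock claim; the second target's startup and the RPC exchange below then trip over the shared core. A quick check of the overlap (hypothetical one-liner, not part of the test):

    printf 'shared cores: 0x%x\n' $(( 0x7 & 0x1c ))   # -> 0x4, i.e. core 2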
00:07:21.517 [2024-07-15 15:48:50.307229] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:21.775 [2024-07-15 15:48:50.458433] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:21.775 [2024-07-15 15:48:50.458562] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:21.775 [2024-07-15 15:48:50.458562] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:07:22.339 15:48:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:22.339 15:48:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:07:22.339 15:48:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:07:22.339 15:48:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:22.339 15:48:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:22.339 15:48:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:22.339 15:48:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:22.339 15:48:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@648 -- # local es=0 00:07:22.339 15:48:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:22.339 15:48:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:07:22.340 15:48:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:22.340 15:48:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:07:22.340 15:48:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:22.340 15:48:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:22.340 15:48:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:22.340 15:48:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:22.340 [2024-07-15 15:48:51.058304] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 3596643 has claimed it. 
00:07:22.340 request: 00:07:22.340 { 00:07:22.340 "method": "framework_enable_cpumask_locks", 00:07:22.340 "req_id": 1 00:07:22.340 } 00:07:22.340 Got JSON-RPC error response 00:07:22.340 response: 00:07:22.340 { 00:07:22.340 "code": -32603, 00:07:22.340 "message": "Failed to claim CPU core: 2" 00:07:22.340 } 00:07:22.340 15:48:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:07:22.340 15:48:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # es=1 00:07:22.340 15:48:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:22.340 15:48:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:22.340 15:48:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:22.340 15:48:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 3596643 /var/tmp/spdk.sock 00:07:22.340 15:48:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 3596643 ']' 00:07:22.340 15:48:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:22.340 15:48:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:22.340 15:48:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:22.340 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:22.340 15:48:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:22.340 15:48:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:22.340 15:48:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:22.340 15:48:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:07:22.340 15:48:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 3596875 /var/tmp/spdk2.sock 00:07:22.340 15:48:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 3596875 ']' 00:07:22.340 15:48:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:22.340 15:48:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:22.340 15:48:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:22.340 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
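That exchange is the crux of the test: framework_enable_cpumask_locks succeeds on the first target but returns JSON-RPC error -32603 ('Failed to claim CPU core: 2') on the second, whose mask overlaps on core 2. Outside the harness, a sketch of the same two calls via SPDK's rpc.py (socket paths as used in this run):

    scripts/rpc.py -s /var/tmp/spdk.sock framework_enable_cpumask_locks    # first target: ok
    scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks   # second target: -32603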
00:07:22.340 15:48:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:22.340 15:48:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:22.597 15:48:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:22.597 15:48:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:07:22.597 15:48:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:07:22.597 15:48:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:07:22.597 15:48:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:07:22.597 15:48:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:07:22.597 00:07:22.597 real 0m2.104s 00:07:22.597 user 0m0.866s 00:07:22.597 sys 0m0.170s 00:07:22.597 15:48:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:22.597 15:48:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:22.597 ************************************ 00:07:22.597 END TEST locking_overlapped_coremask_via_rpc 00:07:22.597 ************************************ 00:07:22.597 15:48:51 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:07:22.597 15:48:51 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:07:22.597 15:48:51 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 3596643 ]] 00:07:22.597 15:48:51 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 3596643 00:07:22.597 15:48:51 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 3596643 ']' 00:07:22.597 15:48:51 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 3596643 00:07:22.597 15:48:51 event.cpu_locks -- common/autotest_common.sh@953 -- # uname 00:07:22.597 15:48:51 event.cpu_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:22.597 15:48:51 event.cpu_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3596643 00:07:22.855 15:48:51 event.cpu_locks -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:22.855 15:48:51 event.cpu_locks -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:22.855 15:48:51 event.cpu_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3596643' 00:07:22.855 killing process with pid 3596643 00:07:22.855 15:48:51 event.cpu_locks -- common/autotest_common.sh@967 -- # kill 3596643 00:07:22.855 15:48:51 event.cpu_locks -- common/autotest_common.sh@972 -- # wait 3596643 00:07:23.113 15:48:51 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 3596875 ]] 00:07:23.113 15:48:51 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 3596875 00:07:23.113 15:48:51 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 3596875 ']' 00:07:23.113 15:48:51 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 3596875 00:07:23.113 15:48:51 event.cpu_locks -- common/autotest_common.sh@953 -- # 
uname 00:07:23.113 15:48:51 event.cpu_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:23.113 15:48:51 event.cpu_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3596875 00:07:23.113 15:48:51 event.cpu_locks -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:07:23.113 15:48:51 event.cpu_locks -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:07:23.113 15:48:51 event.cpu_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3596875' 00:07:23.113 killing process with pid 3596875 00:07:23.113 15:48:51 event.cpu_locks -- common/autotest_common.sh@967 -- # kill 3596875 00:07:23.113 15:48:51 event.cpu_locks -- common/autotest_common.sh@972 -- # wait 3596875 00:07:23.372 15:48:52 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:07:23.372 15:48:52 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:07:23.372 15:48:52 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 3596643 ]] 00:07:23.372 15:48:52 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 3596643 00:07:23.372 15:48:52 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 3596643 ']' 00:07:23.372 15:48:52 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 3596643 00:07:23.372 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (3596643) - No such process 00:07:23.372 15:48:52 event.cpu_locks -- common/autotest_common.sh@975 -- # echo 'Process with pid 3596643 is not found' 00:07:23.372 Process with pid 3596643 is not found 00:07:23.372 15:48:52 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 3596875 ]] 00:07:23.372 15:48:52 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 3596875 00:07:23.372 15:48:52 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 3596875 ']' 00:07:23.372 15:48:52 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 3596875 00:07:23.372 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (3596875) - No such process 00:07:23.372 15:48:52 event.cpu_locks -- common/autotest_common.sh@975 -- # echo 'Process with pid 3596875 is not found' 00:07:23.372 Process with pid 3596875 is not found 00:07:23.372 15:48:52 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:07:23.372 00:07:23.372 real 0m16.660s 00:07:23.372 user 0m29.042s 00:07:23.372 sys 0m4.656s 00:07:23.372 15:48:52 event.cpu_locks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:23.372 15:48:52 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:23.372 ************************************ 00:07:23.372 END TEST cpu_locks 00:07:23.372 ************************************ 00:07:23.372 15:48:52 event -- common/autotest_common.sh@1142 -- # return 0 00:07:23.372 00:07:23.372 real 0m41.697s 00:07:23.372 user 1m19.950s 00:07:23.372 sys 0m7.855s 00:07:23.372 15:48:52 event -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:23.372 15:48:52 event -- common/autotest_common.sh@10 -- # set +x 00:07:23.372 ************************************ 00:07:23.372 END TEST event 00:07:23.372 ************************************ 00:07:23.372 15:48:52 -- common/autotest_common.sh@1142 -- # return 0 00:07:23.372 15:48:52 -- spdk/autotest.sh@182 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:07:23.372 15:48:52 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:23.372 15:48:52 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:23.372 
15:48:52 -- common/autotest_common.sh@10 -- # set +x 00:07:23.630 ************************************ 00:07:23.630 START TEST thread 00:07:23.630 ************************************ 00:07:23.630 15:48:52 thread -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:07:23.630 * Looking for test storage... 00:07:23.630 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:07:23.630 15:48:52 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:07:23.630 15:48:52 thread -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:07:23.630 15:48:52 thread -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:23.630 15:48:52 thread -- common/autotest_common.sh@10 -- # set +x 00:07:23.630 ************************************ 00:07:23.630 START TEST thread_poller_perf 00:07:23.630 ************************************ 00:07:23.630 15:48:52 thread.thread_poller_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:07:23.630 [2024-07-15 15:48:52.436322] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 00:07:23.630 [2024-07-15 15:48:52.436389] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3597344 ] 00:07:23.630 EAL: No free 2048 kB hugepages reported on node 1 00:07:23.630 [2024-07-15 15:48:52.494801] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:23.887 [2024-07-15 15:48:52.570314] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:23.887 Running 1000 pollers for 1 seconds with 1 microseconds period. 
00:07:24.822 ====================================== 00:07:24.822 busy:2308925286 (cyc) 00:07:24.822 total_run_count: 412000 00:07:24.822 tsc_hz: 2300000000 (cyc) 00:07:24.822 ====================================== 00:07:24.822 poller_cost: 5604 (cyc), 2436 (nsec) 00:07:24.822 00:07:24.822 real 0m1.230s 00:07:24.822 user 0m1.159s 00:07:24.822 sys 0m0.067s 00:07:24.822 15:48:53 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:24.822 15:48:53 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:07:24.822 ************************************ 00:07:24.822 END TEST thread_poller_perf 00:07:24.822 ************************************ 00:07:24.822 15:48:53 thread -- common/autotest_common.sh@1142 -- # return 0 00:07:24.822 15:48:53 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:24.822 15:48:53 thread -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:07:24.822 15:48:53 thread -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:24.822 15:48:53 thread -- common/autotest_common.sh@10 -- # set +x 00:07:24.822 ************************************ 00:07:24.822 START TEST thread_poller_perf 00:07:24.822 ************************************ 00:07:24.822 15:48:53 thread.thread_poller_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:24.822 [2024-07-15 15:48:53.732079] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 00:07:24.822 [2024-07-15 15:48:53.732145] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3597563 ] 00:07:25.080 EAL: No free 2048 kB hugepages reported on node 1 00:07:25.080 [2024-07-15 15:48:53.790354] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:25.080 [2024-07-15 15:48:53.864389] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:25.080 Running 1000 pollers for 1 seconds with 0 microseconds period. 
00:07:26.016 ====================================== 00:07:26.016 busy:2301685732 (cyc) 00:07:26.016 total_run_count: 5463000 00:07:26.016 tsc_hz: 2300000000 (cyc) 00:07:26.016 ====================================== 00:07:26.016 poller_cost: 421 (cyc), 183 (nsec) 00:07:26.016 00:07:26.016 real 0m1.221s 00:07:26.016 user 0m1.149s 00:07:26.016 sys 0m0.068s 00:07:26.016 15:48:54 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:26.016 15:48:54 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:07:26.016 ************************************ 00:07:26.016 END TEST thread_poller_perf 00:07:26.016 ************************************ 00:07:26.275 15:48:54 thread -- common/autotest_common.sh@1142 -- # return 0 00:07:26.275 15:48:54 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:07:26.275 00:07:26.275 real 0m2.645s 00:07:26.275 user 0m2.391s 00:07:26.275 sys 0m0.261s 00:07:26.275 15:48:54 thread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:26.275 15:48:54 thread -- common/autotest_common.sh@10 -- # set +x 00:07:26.275 ************************************ 00:07:26.275 END TEST thread 00:07:26.275 ************************************ 00:07:26.275 15:48:54 -- common/autotest_common.sh@1142 -- # return 0 00:07:26.275 15:48:54 -- spdk/autotest.sh@183 -- # run_test accel /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:07:26.275 15:48:54 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:26.275 15:48:54 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:26.275 15:48:54 -- common/autotest_common.sh@10 -- # set +x 00:07:26.275 ************************************ 00:07:26.275 START TEST accel 00:07:26.275 ************************************ 00:07:26.275 15:48:55 accel -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:07:26.275 * Looking for test storage... 00:07:26.275 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel 00:07:26.275 15:48:55 accel -- accel/accel.sh@81 -- # declare -A expected_opcs 00:07:26.275 15:48:55 accel -- accel/accel.sh@82 -- # get_expected_opcs 00:07:26.275 15:48:55 accel -- accel/accel.sh@60 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:07:26.275 15:48:55 accel -- accel/accel.sh@62 -- # spdk_tgt_pid=3597873 00:07:26.275 15:48:55 accel -- accel/accel.sh@63 -- # waitforlisten 3597873 00:07:26.275 15:48:55 accel -- common/autotest_common.sh@829 -- # '[' -z 3597873 ']' 00:07:26.275 15:48:55 accel -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:26.275 15:48:55 accel -- accel/accel.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:07:26.275 15:48:55 accel -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:26.275 15:48:55 accel -- accel/accel.sh@61 -- # build_accel_config 00:07:26.275 15:48:55 accel -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:26.275 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
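In the two poller_perf result blocks above, poller_cost is busy cycles divided by total_run_count, converted to nanoseconds via tsc_hz. Re-deriving the first block with a hypothetical helper (values copied from the log):

    busy_cyc=2308925286 runs=412000 tsc_hz=2300000000
    echo "poller_cost: $((busy_cyc / runs)) (cyc)"                     # -> 5604
    awk -v c=$((busy_cyc / runs)) -v hz=$tsc_hz \
        'BEGIN { printf "poller_cost: %d (nsec)\n", c / (hz / 1e9) }'  # -> 2436

The zero-period run works out the same way: 2301685732 / 5463000 = 421 cyc, or about 183 nsec at 2.3 GHz.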
00:07:26.275 15:48:55 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:26.275 15:48:55 accel -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:26.275 15:48:55 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:26.275 15:48:55 accel -- common/autotest_common.sh@10 -- # set +x 00:07:26.275 15:48:55 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:26.275 15:48:55 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:26.275 15:48:55 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:26.275 15:48:55 accel -- accel/accel.sh@40 -- # local IFS=, 00:07:26.275 15:48:55 accel -- accel/accel.sh@41 -- # jq -r . 00:07:26.275 [2024-07-15 15:48:55.158839] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 00:07:26.275 [2024-07-15 15:48:55.158883] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3597873 ] 00:07:26.275 EAL: No free 2048 kB hugepages reported on node 1 00:07:26.534 [2024-07-15 15:48:55.214008] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:26.534 [2024-07-15 15:48:55.289354] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:27.101 15:48:55 accel -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:27.101 15:48:55 accel -- common/autotest_common.sh@862 -- # return 0 00:07:27.101 15:48:55 accel -- accel/accel.sh@65 -- # [[ 0 -gt 0 ]] 00:07:27.101 15:48:55 accel -- accel/accel.sh@66 -- # [[ 0 -gt 0 ]] 00:07:27.101 15:48:55 accel -- accel/accel.sh@67 -- # [[ 0 -gt 0 ]] 00:07:27.101 15:48:55 accel -- accel/accel.sh@68 -- # [[ -n '' ]] 00:07:27.101 15:48:55 accel -- accel/accel.sh@70 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:07:27.101 15:48:55 accel -- accel/accel.sh@70 -- # rpc_cmd accel_get_opc_assignments 00:07:27.101 15:48:55 accel -- accel/accel.sh@70 -- # jq -r '. 
| to_entries | map("\(.key)=\(.value)") | .[]' 00:07:27.101 15:48:55 accel -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:27.101 15:48:55 accel -- common/autotest_common.sh@10 -- # set +x 00:07:27.101 15:48:55 accel -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:27.101 15:48:55 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:27.101 15:48:55 accel -- accel/accel.sh@72 -- # IFS== 00:07:27.101 15:48:55 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:27.101 15:48:55 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:27.101 15:48:55 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:27.101 15:48:55 accel -- accel/accel.sh@72 -- # IFS== 00:07:27.101 15:48:55 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:27.101 15:48:55 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:27.101 15:48:55 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:27.101 15:48:55 accel -- accel/accel.sh@72 -- # IFS== 00:07:27.101 15:48:55 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:27.101 15:48:55 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:27.101 15:48:55 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:27.101 15:48:55 accel -- accel/accel.sh@72 -- # IFS== 00:07:27.101 15:48:56 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:27.101 15:48:56 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:27.101 15:48:56 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:27.101 15:48:56 accel -- accel/accel.sh@72 -- # IFS== 00:07:27.101 15:48:56 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:27.101 15:48:56 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:27.101 15:48:56 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:27.101 15:48:56 accel -- accel/accel.sh@72 -- # IFS== 00:07:27.101 15:48:56 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:27.101 15:48:56 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:27.101 15:48:56 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:27.101 15:48:56 accel -- accel/accel.sh@72 -- # IFS== 00:07:27.102 15:48:56 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:27.102 15:48:56 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:27.102 15:48:56 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:27.102 15:48:56 accel -- accel/accel.sh@72 -- # IFS== 00:07:27.102 15:48:56 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:27.102 15:48:56 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:27.102 15:48:56 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:27.102 15:48:56 accel -- accel/accel.sh@72 -- # IFS== 00:07:27.102 15:48:56 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:27.102 15:48:56 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:27.102 15:48:56 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:27.102 15:48:56 accel -- accel/accel.sh@72 -- # IFS== 00:07:27.102 15:48:56 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:27.102 15:48:56 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:27.102 15:48:56 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:27.102 15:48:56 accel -- accel/accel.sh@72 -- # IFS== 00:07:27.102 15:48:56 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:27.102 
15:48:56 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:27.102 15:48:56 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:27.102 15:48:56 accel -- accel/accel.sh@72 -- # IFS== 00:07:27.102 15:48:56 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:27.102 15:48:56 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:27.102 15:48:56 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:27.102 15:48:56 accel -- accel/accel.sh@72 -- # IFS== 00:07:27.102 15:48:56 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:27.102 15:48:56 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:27.102 15:48:56 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:27.102 15:48:56 accel -- accel/accel.sh@72 -- # IFS== 00:07:27.102 15:48:56 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:27.102 15:48:56 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:27.102 15:48:56 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:27.102 15:48:56 accel -- accel/accel.sh@72 -- # IFS== 00:07:27.102 15:48:56 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:27.102 15:48:56 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:27.102 15:48:56 accel -- accel/accel.sh@75 -- # killprocess 3597873 00:07:27.102 15:48:56 accel -- common/autotest_common.sh@948 -- # '[' -z 3597873 ']' 00:07:27.102 15:48:56 accel -- common/autotest_common.sh@952 -- # kill -0 3597873 00:07:27.102 15:48:56 accel -- common/autotest_common.sh@953 -- # uname 00:07:27.102 15:48:56 accel -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:27.102 15:48:56 accel -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3597873 00:07:27.361 15:48:56 accel -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:27.361 15:48:56 accel -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:27.361 15:48:56 accel -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3597873' 00:07:27.361 killing process with pid 3597873 00:07:27.361 15:48:56 accel -- common/autotest_common.sh@967 -- # kill 3597873 00:07:27.361 15:48:56 accel -- common/autotest_common.sh@972 -- # wait 3597873 00:07:27.619 15:48:56 accel -- accel/accel.sh@76 -- # trap - ERR 00:07:27.619 15:48:56 accel -- accel/accel.sh@89 -- # run_test accel_help accel_perf -h 00:07:27.619 15:48:56 accel -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:27.619 15:48:56 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:27.619 15:48:56 accel -- common/autotest_common.sh@10 -- # set +x 00:07:27.619 15:48:56 accel.accel_help -- common/autotest_common.sh@1123 -- # accel_perf -h 00:07:27.619 15:48:56 accel.accel_help -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:07:27.619 15:48:56 accel.accel_help -- accel/accel.sh@12 -- # build_accel_config 00:07:27.619 15:48:56 accel.accel_help -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:27.619 15:48:56 accel.accel_help -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:27.619 15:48:56 accel.accel_help -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:27.620 15:48:56 accel.accel_help -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:27.620 15:48:56 accel.accel_help -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:27.620 15:48:56 accel.accel_help -- accel/accel.sh@40 -- # local IFS=, 00:07:27.620 15:48:56 accel.accel_help -- accel/accel.sh@41 -- # jq -r . 
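The long run of IFS== / read -r opc module / expected_opcs assignments earlier in this stretch is a single loop unrolled by xtrace: it splits each opcode=module pair returned by accel_get_opc_assignments into an associative array, and every opcode maps to the software module because no hardware accel driver is loaded. Roughly (reconstructed shape, names as in accel.sh):

    declare -A expected_opcs
    for opc_opt in "${exp_opcs[@]}"; do            # entries look like "copy=software"
        IFS== read -r opc module <<< "$opc_opt"
        expected_opcs["$opc"]=$module              # always "software" in this run
    done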
00:07:27.620 15:48:56 accel.accel_help -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:27.620 15:48:56 accel.accel_help -- common/autotest_common.sh@10 -- # set +x 00:07:27.620 15:48:56 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:27.620 15:48:56 accel -- accel/accel.sh@91 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:07:27.620 15:48:56 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:07:27.620 15:48:56 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:27.620 15:48:56 accel -- common/autotest_common.sh@10 -- # set +x 00:07:27.620 ************************************ 00:07:27.620 START TEST accel_missing_filename 00:07:27.620 ************************************ 00:07:27.620 15:48:56 accel.accel_missing_filename -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w compress 00:07:27.620 15:48:56 accel.accel_missing_filename -- common/autotest_common.sh@648 -- # local es=0 00:07:27.620 15:48:56 accel.accel_missing_filename -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress 00:07:27.620 15:48:56 accel.accel_missing_filename -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:07:27.620 15:48:56 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:27.620 15:48:56 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # type -t accel_perf 00:07:27.620 15:48:56 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:27.620 15:48:56 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress 00:07:27.620 15:48:56 accel.accel_missing_filename -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:07:27.620 15:48:56 accel.accel_missing_filename -- accel/accel.sh@12 -- # build_accel_config 00:07:27.620 15:48:56 accel.accel_missing_filename -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:27.620 15:48:56 accel.accel_missing_filename -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:27.620 15:48:56 accel.accel_missing_filename -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:27.620 15:48:56 accel.accel_missing_filename -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:27.620 15:48:56 accel.accel_missing_filename -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:27.620 15:48:56 accel.accel_missing_filename -- accel/accel.sh@40 -- # local IFS=, 00:07:27.620 15:48:56 accel.accel_missing_filename -- accel/accel.sh@41 -- # jq -r . 00:07:27.620 [2024-07-15 15:48:56.503663] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 00:07:27.620 [2024-07-15 15:48:56.503723] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3598173 ] 00:07:27.620 EAL: No free 2048 kB hugepages reported on node 1 00:07:27.879 [2024-07-15 15:48:56.560234] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:27.879 [2024-07-15 15:48:56.634713] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:27.879 [2024-07-15 15:48:56.675632] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:27.879 [2024-07-15 15:48:56.735246] accel_perf.c:1463:main: *ERROR*: ERROR starting application 00:07:27.879 A filename is required. 
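accel_perf aborts with 'A filename is required.' because the compress workload reads its input from a file named with -l (see the option help printed further down); this negative test expects exactly that exit. A working invocation would look like the following (hypothetical; the path is the one the next test uses):

    # -l names the uncompressed input; the compress_verify test below reuses this
    # path but deliberately adds -y, which compress rejects.
    accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib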
00:07:27.879 15:48:56 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # es=234 00:07:27.879 15:48:56 accel.accel_missing_filename -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:27.879 15:48:56 accel.accel_missing_filename -- common/autotest_common.sh@660 -- # es=106 00:07:27.879 15:48:56 accel.accel_missing_filename -- common/autotest_common.sh@661 -- # case "$es" in 00:07:27.879 15:48:56 accel.accel_missing_filename -- common/autotest_common.sh@668 -- # es=1 00:07:27.879 15:48:56 accel.accel_missing_filename -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:27.879 00:07:27.879 real 0m0.323s 00:07:27.879 user 0m0.247s 00:07:27.879 sys 0m0.114s 00:07:27.879 15:48:56 accel.accel_missing_filename -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:27.879 15:48:56 accel.accel_missing_filename -- common/autotest_common.sh@10 -- # set +x 00:07:27.879 ************************************ 00:07:27.879 END TEST accel_missing_filename 00:07:27.879 ************************************ 00:07:28.138 15:48:56 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:28.138 15:48:56 accel -- accel/accel.sh@93 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:28.138 15:48:56 accel -- common/autotest_common.sh@1099 -- # '[' 10 -le 1 ']' 00:07:28.138 15:48:56 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:28.138 15:48:56 accel -- common/autotest_common.sh@10 -- # set +x 00:07:28.138 ************************************ 00:07:28.138 START TEST accel_compress_verify 00:07:28.138 ************************************ 00:07:28.138 15:48:56 accel.accel_compress_verify -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:28.138 15:48:56 accel.accel_compress_verify -- common/autotest_common.sh@648 -- # local es=0 00:07:28.138 15:48:56 accel.accel_compress_verify -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:28.138 15:48:56 accel.accel_compress_verify -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:07:28.138 15:48:56 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:28.138 15:48:56 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # type -t accel_perf 00:07:28.138 15:48:56 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:28.138 15:48:56 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:28.138 15:48:56 accel.accel_compress_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:28.138 15:48:56 accel.accel_compress_verify -- accel/accel.sh@12 -- # build_accel_config 00:07:28.138 15:48:56 accel.accel_compress_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:28.138 15:48:56 accel.accel_compress_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:28.138 15:48:56 accel.accel_compress_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:28.138 15:48:56 accel.accel_compress_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:28.138 15:48:56 
accel.accel_compress_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:28.138 15:48:56 accel.accel_compress_verify -- accel/accel.sh@40 -- # local IFS=, 00:07:28.138 15:48:56 accel.accel_compress_verify -- accel/accel.sh@41 -- # jq -r . 00:07:28.138 [2024-07-15 15:48:56.901575] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 00:07:28.138 [2024-07-15 15:48:56.901643] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3598262 ] 00:07:28.138 EAL: No free 2048 kB hugepages reported on node 1 00:07:28.138 [2024-07-15 15:48:56.958845] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:28.138 [2024-07-15 15:48:57.029737] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:28.138 [2024-07-15 15:48:57.070913] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:28.398 [2024-07-15 15:48:57.130942] accel_perf.c:1463:main: *ERROR*: ERROR starting application 00:07:28.398 00:07:28.398 Compression does not support the verify option, aborting. 00:07:28.398 15:48:57 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # es=161 00:07:28.398 15:48:57 accel.accel_compress_verify -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:28.398 15:48:57 accel.accel_compress_verify -- common/autotest_common.sh@660 -- # es=33 00:07:28.398 15:48:57 accel.accel_compress_verify -- common/autotest_common.sh@661 -- # case "$es" in 00:07:28.398 15:48:57 accel.accel_compress_verify -- common/autotest_common.sh@668 -- # es=1 00:07:28.398 15:48:57 accel.accel_compress_verify -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:28.398 00:07:28.398 real 0m0.332s 00:07:28.398 user 0m0.256s 00:07:28.398 sys 0m0.117s 00:07:28.398 15:48:57 accel.accel_compress_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:28.398 15:48:57 accel.accel_compress_verify -- common/autotest_common.sh@10 -- # set +x 00:07:28.398 ************************************ 00:07:28.398 END TEST accel_compress_verify 00:07:28.398 ************************************ 00:07:28.398 15:48:57 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:28.398 15:48:57 accel -- accel/accel.sh@95 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:07:28.398 15:48:57 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:07:28.398 15:48:57 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:28.398 15:48:57 accel -- common/autotest_common.sh@10 -- # set +x 00:07:28.398 ************************************ 00:07:28.398 START TEST accel_wrong_workload 00:07:28.398 ************************************ 00:07:28.398 15:48:57 accel.accel_wrong_workload -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w foobar 00:07:28.398 15:48:57 accel.accel_wrong_workload -- common/autotest_common.sh@648 -- # local es=0 00:07:28.398 15:48:57 accel.accel_wrong_workload -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:07:28.398 15:48:57 accel.accel_wrong_workload -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:07:28.398 15:48:57 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:28.398 15:48:57 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # type -t accel_perf 00:07:28.398 15:48:57 accel.accel_wrong_workload -- 
common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:28.398 15:48:57 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w foobar 00:07:28.398 15:48:57 accel.accel_wrong_workload -- accel/accel.sh@12 -- # build_accel_config 00:07:28.398 15:48:57 accel.accel_wrong_workload -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:07:28.398 15:48:57 accel.accel_wrong_workload -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:28.398 15:48:57 accel.accel_wrong_workload -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:28.398 15:48:57 accel.accel_wrong_workload -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:28.398 15:48:57 accel.accel_wrong_workload -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:28.398 15:48:57 accel.accel_wrong_workload -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:28.398 15:48:57 accel.accel_wrong_workload -- accel/accel.sh@40 -- # local IFS=, 00:07:28.398 15:48:57 accel.accel_wrong_workload -- accel/accel.sh@41 -- # jq -r . 00:07:28.398 Unsupported workload type: foobar 00:07:28.398 [2024-07-15 15:48:57.286047] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:07:28.398 accel_perf options: 00:07:28.398 [-h help message] 00:07:28.398 [-q queue depth per core] 00:07:28.398 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:07:28.398 [-T number of threads per core 00:07:28.398 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:07:28.398 [-t time in seconds] 00:07:28.398 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:07:28.398 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:07:28.398 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:07:28.398 [-l for compress/decompress workloads, name of uncompressed input file 00:07:28.398 [-S for crc32c workload, use this seed value (default 0) 00:07:28.398 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:07:28.398 [-f for fill workload, use this BYTE value (default 255) 00:07:28.398 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:07:28.398 [-y verify result if this switch is on] 00:07:28.398 [-a tasks to allocate per core (default: same value as -q)] 00:07:28.398 Can be used to spread operations across a wider range of memory. 
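The usage text above is emitted because 'foobar' is not one of the listed -w workloads; the test passes precisely when accel_perf refuses it. For contrast, a valid invocation taken from the crc32c test later in this run:

    # -w picks a supported workload, -S sets the crc32c seed, -y verifies results
    accel_perf -t 1 -w crc32c -S 32 -y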
00:07:28.398 15:48:57 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # es=1 00:07:28.398 15:48:57 accel.accel_wrong_workload -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:28.398 15:48:57 accel.accel_wrong_workload -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:28.398 15:48:57 accel.accel_wrong_workload -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:28.398 00:07:28.398 real 0m0.018s 00:07:28.398 user 0m0.013s 00:07:28.398 sys 0m0.006s 00:07:28.398 15:48:57 accel.accel_wrong_workload -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:28.398 15:48:57 accel.accel_wrong_workload -- common/autotest_common.sh@10 -- # set +x 00:07:28.398 ************************************ 00:07:28.398 END TEST accel_wrong_workload 00:07:28.398 ************************************ 00:07:28.398 Error: writing output failed: Broken pipe 00:07:28.398 15:48:57 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:28.398 15:48:57 accel -- accel/accel.sh@97 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:07:28.398 15:48:57 accel -- common/autotest_common.sh@1099 -- # '[' 10 -le 1 ']' 00:07:28.398 15:48:57 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:28.398 15:48:57 accel -- common/autotest_common.sh@10 -- # set +x 00:07:28.715 ************************************ 00:07:28.715 START TEST accel_negative_buffers 00:07:28.715 ************************************ 00:07:28.715 15:48:57 accel.accel_negative_buffers -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:07:28.715 15:48:57 accel.accel_negative_buffers -- common/autotest_common.sh@648 -- # local es=0 00:07:28.715 15:48:57 accel.accel_negative_buffers -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:07:28.715 15:48:57 accel.accel_negative_buffers -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:07:28.715 15:48:57 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:28.715 15:48:57 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # type -t accel_perf 00:07:28.715 15:48:57 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:28.715 15:48:57 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w xor -y -x -1 00:07:28.715 15:48:57 accel.accel_negative_buffers -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:07:28.715 15:48:57 accel.accel_negative_buffers -- accel/accel.sh@12 -- # build_accel_config 00:07:28.715 15:48:57 accel.accel_negative_buffers -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:28.715 15:48:57 accel.accel_negative_buffers -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:28.715 15:48:57 accel.accel_negative_buffers -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:28.715 15:48:57 accel.accel_negative_buffers -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:28.715 15:48:57 accel.accel_negative_buffers -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:28.715 15:48:57 accel.accel_negative_buffers -- accel/accel.sh@40 -- # local IFS=, 00:07:28.715 15:48:57 accel.accel_negative_buffers -- accel/accel.sh@41 -- # jq -r . 00:07:28.715 -x option must be non-negative. 
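The buffer-count test follows the same pattern: -x -1 is rejected with '-x option must be non-negative.' before the parser error and usage text repeat below. Per that help text, xor wants at least two source buffers, so the minimal accepted form would presumably be (hypothetical, not run in this log):

    accel_perf -t 1 -w xor -y -x 2   # minimum per the usage text: two source buffers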
00:07:28.715 [2024-07-15 15:48:57.377892] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:07:28.715 accel_perf options: 00:07:28.715 [-h help message] 00:07:28.715 [-q queue depth per core] 00:07:28.715 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:07:28.715 [-T number of threads per core 00:07:28.716 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:07:28.716 [-t time in seconds] 00:07:28.716 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:07:28.716 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:07:28.716 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:07:28.716 [-l for compress/decompress workloads, name of uncompressed input file 00:07:28.716 [-S for crc32c workload, use this seed value (default 0) 00:07:28.716 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:07:28.716 [-f for fill workload, use this BYTE value (default 255) 00:07:28.716 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:07:28.716 [-y verify result if this switch is on] 00:07:28.716 [-a tasks to allocate per core (default: same value as -q)] 00:07:28.716 Can be used to spread operations across a wider range of memory. 00:07:28.716 15:48:57 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # es=1 00:07:28.716 15:48:57 accel.accel_negative_buffers -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:28.716 15:48:57 accel.accel_negative_buffers -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:28.716 15:48:57 accel.accel_negative_buffers -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:28.716 00:07:28.716 real 0m0.031s 00:07:28.716 user 0m0.019s 00:07:28.716 sys 0m0.012s 00:07:28.716 15:48:57 accel.accel_negative_buffers -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:28.716 15:48:57 accel.accel_negative_buffers -- common/autotest_common.sh@10 -- # set +x 00:07:28.716 ************************************ 00:07:28.716 END TEST accel_negative_buffers 00:07:28.716 ************************************ 00:07:28.716 Error: writing output failed: Broken pipe 00:07:28.716 15:48:57 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:28.716 15:48:57 accel -- accel/accel.sh@101 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:07:28.716 15:48:57 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:07:28.716 15:48:57 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:28.716 15:48:57 accel -- common/autotest_common.sh@10 -- # set +x 00:07:28.716 ************************************ 00:07:28.716 START TEST accel_crc32c 00:07:28.716 ************************************ 00:07:28.716 15:48:57 accel.accel_crc32c -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w crc32c -S 32 -y 00:07:28.716 15:48:57 accel.accel_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:07:28.716 15:48:57 accel.accel_crc32c -- accel/accel.sh@17 -- # local accel_module 00:07:28.716 15:48:57 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:28.716 15:48:57 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:28.716 15:48:57 accel.accel_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:07:28.716 15:48:57 accel.accel_crc32c -- accel/accel.sh@12 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:07:28.716 15:48:57 accel.accel_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:07:28.716 15:48:57 accel.accel_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:28.716 15:48:57 accel.accel_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:28.716 15:48:57 accel.accel_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:28.716 15:48:57 accel.accel_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:28.716 15:48:57 accel.accel_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:28.716 15:48:57 accel.accel_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:07:28.716 15:48:57 accel.accel_crc32c -- accel/accel.sh@41 -- # jq -r . 00:07:28.716 [2024-07-15 15:48:57.470975] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 00:07:28.716 [2024-07-15 15:48:57.471031] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3598325 ] 00:07:28.716 EAL: No free 2048 kB hugepages reported on node 1 00:07:28.716 [2024-07-15 15:48:57.526637] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:28.716 [2024-07-15 15:48:57.601104] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:28.716 15:48:57 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:28.996 15:48:57 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:28.996 15:48:57 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:28.996 15:48:57 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:28.996 15:48:57 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:28.996 15:48:57 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:28.996 15:48:57 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:28.996 15:48:57 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:28.996 15:48:57 accel.accel_crc32c -- accel/accel.sh@20 -- # val=0x1 00:07:28.996 15:48:57 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:28.996 15:48:57 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:28.996 15:48:57 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:28.996 15:48:57 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:28.996 15:48:57 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:28.996 15:48:57 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:28.996 15:48:57 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:28.996 15:48:57 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:28.996 15:48:57 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:28.996 15:48:57 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:28.996 15:48:57 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:28.996 15:48:57 accel.accel_crc32c -- accel/accel.sh@20 -- # val=crc32c 00:07:28.996 15:48:57 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:28.996 15:48:57 accel.accel_crc32c -- accel/accel.sh@23 -- # accel_opc=crc32c 00:07:28.996 15:48:57 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:28.996 15:48:57 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:28.996 15:48:57 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:07:28.996 15:48:57 accel.accel_crc32c -- accel/accel.sh@21 -- # case 
"$var" in 00:07:28.996 15:48:57 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:28.996 15:48:57 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:28.996 15:48:57 accel.accel_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:28.996 15:48:57 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:28.996 15:48:57 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:28.996 15:48:57 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:28.996 15:48:57 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:28.996 15:48:57 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:28.996 15:48:57 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:28.996 15:48:57 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:28.996 15:48:57 accel.accel_crc32c -- accel/accel.sh@20 -- # val=software 00:07:28.996 15:48:57 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:28.996 15:48:57 accel.accel_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:07:28.996 15:48:57 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:28.996 15:48:57 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:28.996 15:48:57 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:07:28.996 15:48:57 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:28.996 15:48:57 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:28.996 15:48:57 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:28.996 15:48:57 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:07:28.996 15:48:57 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:28.996 15:48:57 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:28.996 15:48:57 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:28.996 15:48:57 accel.accel_crc32c -- accel/accel.sh@20 -- # val=1 00:07:28.996 15:48:57 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:28.996 15:48:57 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:28.996 15:48:57 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:28.996 15:48:57 accel.accel_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:07:28.996 15:48:57 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:28.996 15:48:57 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:28.996 15:48:57 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:28.996 15:48:57 accel.accel_crc32c -- accel/accel.sh@20 -- # val=Yes 00:07:28.996 15:48:57 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:28.996 15:48:57 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:28.996 15:48:57 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:28.996 15:48:57 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:28.996 15:48:57 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:28.996 15:48:57 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:28.996 15:48:57 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:28.996 15:48:57 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:28.996 15:48:57 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:28.996 15:48:57 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:28.996 15:48:57 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:29.933 15:48:58 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:29.933 15:48:58 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 
00:07:29.933 15:48:58 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:29.933 15:48:58 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:29.933 15:48:58 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:29.933 15:48:58 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:29.933 15:48:58 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:29.933 15:48:58 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:29.933 15:48:58 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:29.933 15:48:58 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:29.933 15:48:58 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:29.933 15:48:58 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:29.933 15:48:58 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:29.933 15:48:58 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:29.933 15:48:58 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:29.933 15:48:58 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:29.933 15:48:58 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:29.933 15:48:58 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:29.933 15:48:58 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:29.933 15:48:58 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:29.933 15:48:58 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:29.933 15:48:58 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:29.933 15:48:58 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:29.933 15:48:58 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:29.933 15:48:58 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:29.933 15:48:58 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:07:29.933 15:48:58 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:29.933 00:07:29.933 real 0m1.336s 00:07:29.933 user 0m1.239s 00:07:29.933 sys 0m0.111s 00:07:29.933 15:48:58 accel.accel_crc32c -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:29.933 15:48:58 accel.accel_crc32c -- common/autotest_common.sh@10 -- # set +x 00:07:29.933 ************************************ 00:07:29.933 END TEST accel_crc32c 00:07:29.933 ************************************ 00:07:29.933 15:48:58 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:29.933 15:48:58 accel -- accel/accel.sh@102 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:07:29.933 15:48:58 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:07:29.933 15:48:58 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:29.933 15:48:58 accel -- common/autotest_common.sh@10 -- # set +x 00:07:29.933 ************************************ 00:07:29.933 START TEST accel_crc32c_C2 00:07:29.933 ************************************ 00:07:29.933 15:48:58 accel.accel_crc32c_C2 -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w crc32c -y -C 2 00:07:29.933 15:48:58 accel.accel_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:07:29.933 15:48:58 accel.accel_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:07:29.933 15:48:58 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:29.933 15:48:58 accel.accel_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:07:29.933 15:48:58 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:29.933 15:48:58 accel.accel_crc32c_C2 
-- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:07:29.933 15:48:58 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:07:29.933 15:48:58 accel.accel_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:29.933 15:48:58 accel.accel_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:29.933 15:48:58 accel.accel_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:29.933 15:48:58 accel.accel_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:29.933 15:48:58 accel.accel_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:29.933 15:48:58 accel.accel_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:07:29.933 15:48:58 accel.accel_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:07:30.193 [2024-07-15 15:48:58.869099] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 00:07:30.193 [2024-07-15 15:48:58.869162] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3598584 ] 00:07:30.193 EAL: No free 2048 kB hugepages reported on node 1 00:07:30.193 [2024-07-15 15:48:58.924542] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:30.193 [2024-07-15 15:48:58.997187] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:30.193 15:48:59 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:30.193 15:48:59 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:30.193 15:48:59 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:30.193 15:48:59 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:30.193 15:48:59 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:30.193 15:48:59 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:30.193 15:48:59 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:30.193 15:48:59 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:30.193 15:48:59 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:07:30.193 15:48:59 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:30.193 15:48:59 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:30.193 15:48:59 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:30.193 15:48:59 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:30.193 15:48:59 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:30.193 15:48:59 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:30.193 15:48:59 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:30.193 15:48:59 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:30.193 15:48:59 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:30.193 15:48:59 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:30.193 15:48:59 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:30.193 15:48:59 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=crc32c 00:07:30.193 15:48:59 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:30.193 15:48:59 accel.accel_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=crc32c 00:07:30.193 15:48:59 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:30.193 15:48:59 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:30.193 15:48:59 
accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:07:30.193 15:48:59 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:30.193 15:48:59 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:30.193 15:48:59 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:30.193 15:48:59 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:30.193 15:48:59 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:30.193 15:48:59 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:30.193 15:48:59 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:30.193 15:48:59 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:30.193 15:48:59 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:30.193 15:48:59 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:30.193 15:48:59 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:30.193 15:48:59 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:07:30.193 15:48:59 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:30.193 15:48:59 accel.accel_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:07:30.193 15:48:59 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:30.193 15:48:59 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:30.193 15:48:59 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:07:30.193 15:48:59 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:30.193 15:48:59 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:30.193 15:48:59 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:30.193 15:48:59 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:07:30.193 15:48:59 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:30.193 15:48:59 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:30.193 15:48:59 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:30.193 15:48:59 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:07:30.193 15:48:59 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:30.193 15:48:59 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:30.193 15:48:59 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:30.193 15:48:59 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:07:30.193 15:48:59 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:30.193 15:48:59 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:30.193 15:48:59 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:30.193 15:48:59 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:07:30.193 15:48:59 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:30.193 15:48:59 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:30.193 15:48:59 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:30.194 15:48:59 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:30.194 15:48:59 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:30.194 15:48:59 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:30.194 15:48:59 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:30.194 15:48:59 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:30.194 15:48:59 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:30.194 15:48:59 accel.accel_crc32c_C2 -- accel/accel.sh@19 
-- # IFS=: 00:07:30.194 15:48:59 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:31.574 15:49:00 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:31.574 15:49:00 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:31.574 15:49:00 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:31.574 15:49:00 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:31.574 15:49:00 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:31.574 15:49:00 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:31.574 15:49:00 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:31.574 15:49:00 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:31.574 15:49:00 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:31.574 15:49:00 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:31.574 15:49:00 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:31.574 15:49:00 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:31.574 15:49:00 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:31.574 15:49:00 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:31.574 15:49:00 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:31.574 15:49:00 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:31.574 15:49:00 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:31.574 15:49:00 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:31.574 15:49:00 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:31.574 15:49:00 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:31.574 15:49:00 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:31.574 15:49:00 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:31.574 15:49:00 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:31.574 15:49:00 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:31.574 15:49:00 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:31.574 15:49:00 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:07:31.574 15:49:00 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:31.574 00:07:31.574 real 0m1.337s 00:07:31.574 user 0m1.232s 00:07:31.574 sys 0m0.118s 00:07:31.574 15:49:00 accel.accel_crc32c_C2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:31.574 15:49:00 accel.accel_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:07:31.574 ************************************ 00:07:31.574 END TEST accel_crc32c_C2 00:07:31.574 ************************************ 00:07:31.574 15:49:00 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:31.575 15:49:00 accel -- accel/accel.sh@103 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:07:31.575 15:49:00 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:07:31.575 15:49:00 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:31.575 15:49:00 accel -- common/autotest_common.sh@10 -- # set +x 00:07:31.575 ************************************ 00:07:31.575 START TEST accel_copy 00:07:31.575 ************************************ 00:07:31.575 15:49:00 accel.accel_copy -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy -y 00:07:31.575 15:49:00 accel.accel_copy -- accel/accel.sh@16 -- # local accel_opc 00:07:31.575 15:49:00 accel.accel_copy -- accel/accel.sh@17 -- # local accel_module 
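Every TEST block in this log follows the same recipe: run_test names the case, accel_test forwards its flags to the accel_perf example binary, and -y asks accel_perf to verify each completed operation. Rerunning the copy case that starts here by hand would look roughly like this (workspace path as in the trace; the -c /dev/fd/62 JSON config seen below is optional and omitted here):

    cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    # One-second copy workload on the software engine, with verification.
    ./build/examples/accel_perf -t 1 -w copy -y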
00:07:31.575 15:49:00 accel.accel_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:07:31.575 15:49:00 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:31.575 15:49:00 accel.accel_copy -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:07:31.575 15:49:00 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:31.575 15:49:00 accel.accel_copy -- accel/accel.sh@12 -- # build_accel_config 00:07:31.575 15:49:00 accel.accel_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:31.575 15:49:00 accel.accel_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:31.575 15:49:00 accel.accel_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:31.575 15:49:00 accel.accel_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:31.575 15:49:00 accel.accel_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:31.575 15:49:00 accel.accel_copy -- accel/accel.sh@40 -- # local IFS=, 00:07:31.575 15:49:00 accel.accel_copy -- accel/accel.sh@41 -- # jq -r . 00:07:31.575 [2024-07-15 15:49:00.259417] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 00:07:31.575 [2024-07-15 15:49:00.259455] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3598829 ] 00:07:31.575 EAL: No free 2048 kB hugepages reported on node 1 00:07:31.575 [2024-07-15 15:49:00.311680] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:31.575 [2024-07-15 15:49:00.384444] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:31.575 15:49:00 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:31.575 15:49:00 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:31.575 15:49:00 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:31.575 15:49:00 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:31.575 15:49:00 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:31.575 15:49:00 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:31.575 15:49:00 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:31.575 15:49:00 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:31.575 15:49:00 accel.accel_copy -- accel/accel.sh@20 -- # val=0x1 00:07:31.575 15:49:00 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:31.575 15:49:00 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:31.575 15:49:00 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:31.575 15:49:00 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:31.575 15:49:00 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:31.575 15:49:00 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:31.575 15:49:00 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:31.575 15:49:00 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:31.575 15:49:00 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:31.575 15:49:00 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:31.575 15:49:00 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:31.575 15:49:00 accel.accel_copy -- accel/accel.sh@20 -- # val=copy 00:07:31.575 15:49:00 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:31.575 15:49:00 accel.accel_copy -- accel/accel.sh@23 -- # accel_opc=copy 00:07:31.575 15:49:00 accel.accel_copy -- accel/accel.sh@19 -- # 
IFS=: 00:07:31.575 15:49:00 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:31.575 15:49:00 accel.accel_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:31.575 15:49:00 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:31.575 15:49:00 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:31.575 15:49:00 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:31.575 15:49:00 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:31.575 15:49:00 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:31.575 15:49:00 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:31.575 15:49:00 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:31.575 15:49:00 accel.accel_copy -- accel/accel.sh@20 -- # val=software 00:07:31.575 15:49:00 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:31.575 15:49:00 accel.accel_copy -- accel/accel.sh@22 -- # accel_module=software 00:07:31.575 15:49:00 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:31.575 15:49:00 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:31.575 15:49:00 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:07:31.575 15:49:00 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:31.575 15:49:00 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:31.575 15:49:00 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:31.575 15:49:00 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:07:31.575 15:49:00 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:31.575 15:49:00 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:31.575 15:49:00 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:31.575 15:49:00 accel.accel_copy -- accel/accel.sh@20 -- # val=1 00:07:31.575 15:49:00 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:31.575 15:49:00 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:31.575 15:49:00 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:31.575 15:49:00 accel.accel_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:07:31.575 15:49:00 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:31.575 15:49:00 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:31.575 15:49:00 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:31.575 15:49:00 accel.accel_copy -- accel/accel.sh@20 -- # val=Yes 00:07:31.575 15:49:00 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:31.575 15:49:00 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:31.575 15:49:00 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:31.575 15:49:00 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:31.575 15:49:00 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:31.575 15:49:00 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:31.575 15:49:00 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:31.575 15:49:00 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:31.575 15:49:00 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:31.575 15:49:00 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:31.575 15:49:00 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:32.954 15:49:01 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:32.954 15:49:01 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:32.954 15:49:01 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:32.954 15:49:01 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:32.954 
15:49:01 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:32.954 15:49:01 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:32.954 15:49:01 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:32.954 15:49:01 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:32.954 15:49:01 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:32.954 15:49:01 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:32.954 15:49:01 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:32.954 15:49:01 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:32.954 15:49:01 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:32.954 15:49:01 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:32.954 15:49:01 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:32.954 15:49:01 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:32.954 15:49:01 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:32.954 15:49:01 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:32.954 15:49:01 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:32.954 15:49:01 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:32.954 15:49:01 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:32.954 15:49:01 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:32.954 15:49:01 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:32.954 15:49:01 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:32.954 15:49:01 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:32.954 15:49:01 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n copy ]] 00:07:32.954 15:49:01 accel.accel_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:32.954 00:07:32.954 real 0m1.320s 00:07:32.954 user 0m1.230s 00:07:32.954 sys 0m0.102s 00:07:32.954 15:49:01 accel.accel_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:32.954 15:49:01 accel.accel_copy -- common/autotest_common.sh@10 -- # set +x 00:07:32.954 ************************************ 00:07:32.954 END TEST accel_copy 00:07:32.954 ************************************ 00:07:32.954 15:49:01 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:32.954 15:49:01 accel -- accel/accel.sh@104 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:32.954 15:49:01 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:07:32.954 15:49:01 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:32.954 15:49:01 accel -- common/autotest_common.sh@10 -- # set +x 00:07:32.954 ************************************ 00:07:32.954 START TEST accel_fill 00:07:32.954 ************************************ 00:07:32.954 15:49:01 accel.accel_fill -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:32.954 15:49:01 accel.accel_fill -- accel/accel.sh@16 -- # local accel_opc 00:07:32.954 15:49:01 accel.accel_fill -- accel/accel.sh@17 -- # local accel_module 00:07:32.954 15:49:01 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:32.954 15:49:01 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:32.954 15:49:01 accel.accel_fill -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:32.954 15:49:01 accel.accel_fill -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:32.954 15:49:01 accel.accel_fill -- accel/accel.sh@12 -- # 
build_accel_config 00:07:32.954 15:49:01 accel.accel_fill -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:32.954 15:49:01 accel.accel_fill -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:32.954 15:49:01 accel.accel_fill -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:32.954 15:49:01 accel.accel_fill -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:32.954 15:49:01 accel.accel_fill -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:32.954 15:49:01 accel.accel_fill -- accel/accel.sh@40 -- # local IFS=, 00:07:32.954 15:49:01 accel.accel_fill -- accel/accel.sh@41 -- # jq -r . 00:07:32.954 [2024-07-15 15:49:01.655617] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 00:07:32.954 [2024-07-15 15:49:01.655667] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3599082 ] 00:07:32.954 EAL: No free 2048 kB hugepages reported on node 1 00:07:32.954 [2024-07-15 15:49:01.709088] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:32.954 [2024-07-15 15:49:01.781970] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:32.954 15:49:01 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:32.954 15:49:01 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:32.954 15:49:01 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:32.954 15:49:01 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:32.954 15:49:01 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:32.954 15:49:01 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:32.954 15:49:01 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:32.954 15:49:01 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:32.954 15:49:01 accel.accel_fill -- accel/accel.sh@20 -- # val=0x1 00:07:32.954 15:49:01 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:32.954 15:49:01 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:32.954 15:49:01 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:32.954 15:49:01 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:32.954 15:49:01 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:32.954 15:49:01 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:32.954 15:49:01 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:32.954 15:49:01 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:32.954 15:49:01 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:32.954 15:49:01 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:32.954 15:49:01 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:32.954 15:49:01 accel.accel_fill -- accel/accel.sh@20 -- # val=fill 00:07:32.954 15:49:01 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:32.954 15:49:01 accel.accel_fill -- accel/accel.sh@23 -- # accel_opc=fill 00:07:32.954 15:49:01 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:32.954 15:49:01 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:32.954 15:49:01 accel.accel_fill -- accel/accel.sh@20 -- # val=0x80 00:07:32.954 15:49:01 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:32.954 15:49:01 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:32.954 15:49:01 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:32.954 15:49:01 accel.accel_fill -- accel/accel.sh@20 -- # val='4096 bytes' 
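fill is the simplest workload in this run: it paints each buffer with a single repeated byte. The val=0x80 above is evidently the test's -f 128 argument rendered in hex (128 == 0x80), and val='4096 bytes' is the buffer size being filled. What the -y verification conceptually checks can be sketched in shell; the dump file is hypothetical, since accel_perf verifies in memory and writes nothing to disk:

    # Build the expected 4096 bytes of 0x80 and compare against a
    # hypothetical dump of one completed fill buffer.
    printf '\x80%.0s' {1..4096} > /tmp/expected.bin
    cmp /tmp/expected.bin /tmp/fill_buffer_dump.bin && echo "fill verified"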
00:07:32.954 15:49:01 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:32.954 15:49:01 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:32.954 15:49:01 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:32.954 15:49:01 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:32.954 15:49:01 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:32.954 15:49:01 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:32.954 15:49:01 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:32.954 15:49:01 accel.accel_fill -- accel/accel.sh@20 -- # val=software 00:07:32.954 15:49:01 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:32.954 15:49:01 accel.accel_fill -- accel/accel.sh@22 -- # accel_module=software 00:07:32.954 15:49:01 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:32.954 15:49:01 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:32.954 15:49:01 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:07:32.954 15:49:01 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:32.954 15:49:01 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:32.954 15:49:01 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:32.954 15:49:01 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:07:32.954 15:49:01 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:32.954 15:49:01 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:32.954 15:49:01 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:32.954 15:49:01 accel.accel_fill -- accel/accel.sh@20 -- # val=1 00:07:32.954 15:49:01 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:32.954 15:49:01 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:32.954 15:49:01 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:32.954 15:49:01 accel.accel_fill -- accel/accel.sh@20 -- # val='1 seconds' 00:07:32.954 15:49:01 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:32.954 15:49:01 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:32.954 15:49:01 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:32.954 15:49:01 accel.accel_fill -- accel/accel.sh@20 -- # val=Yes 00:07:32.954 15:49:01 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:32.954 15:49:01 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:32.954 15:49:01 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:32.954 15:49:01 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:32.954 15:49:01 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:32.954 15:49:01 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:32.954 15:49:01 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:32.954 15:49:01 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:32.954 15:49:01 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:32.954 15:49:01 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:32.954 15:49:01 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:34.333 15:49:02 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:34.333 15:49:02 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:34.333 15:49:02 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:34.333 15:49:02 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:34.333 15:49:02 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:34.333 15:49:02 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:34.333 15:49:02 accel.accel_fill 
-- accel/accel.sh@19 -- # IFS=: 00:07:34.333 15:49:02 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:34.333 15:49:02 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:34.333 15:49:02 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:34.333 15:49:02 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:34.333 15:49:02 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:34.333 15:49:02 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:34.333 15:49:02 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:34.333 15:49:02 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:34.333 15:49:02 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:34.333 15:49:02 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:34.333 15:49:02 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:34.333 15:49:02 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:34.333 15:49:02 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:34.333 15:49:02 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:34.333 15:49:02 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:34.333 15:49:02 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:34.333 15:49:02 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:34.333 15:49:02 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:34.333 15:49:02 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n fill ]] 00:07:34.333 15:49:02 accel.accel_fill -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:34.333 00:07:34.333 real 0m1.333s 00:07:34.333 user 0m1.235s 00:07:34.333 sys 0m0.110s 00:07:34.333 15:49:02 accel.accel_fill -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:34.333 15:49:02 accel.accel_fill -- common/autotest_common.sh@10 -- # set +x 00:07:34.333 ************************************ 00:07:34.333 END TEST accel_fill 00:07:34.333 ************************************ 00:07:34.333 15:49:02 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:34.333 15:49:02 accel -- accel/accel.sh@105 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:07:34.333 15:49:02 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:07:34.333 15:49:02 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:34.333 15:49:02 accel -- common/autotest_common.sh@10 -- # set +x 00:07:34.333 ************************************ 00:07:34.333 START TEST accel_copy_crc32c 00:07:34.333 ************************************ 00:07:34.333 15:49:03 accel.accel_copy_crc32c -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy_crc32c -y 00:07:34.333 15:49:03 accel.accel_copy_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:07:34.333 15:49:03 accel.accel_copy_crc32c -- accel/accel.sh@17 -- # local accel_module 00:07:34.333 15:49:03 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:34.333 15:49:03 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:34.333 15:49:03 accel.accel_copy_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:07:34.333 15:49:03 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:07:34.333 15:49:03 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:07:34.333 15:49:03 accel.accel_copy_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:34.333 15:49:03 accel.accel_copy_crc32c -- 
accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:34.333 15:49:03 accel.accel_copy_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:34.333 15:49:03 accel.accel_copy_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:34.333 15:49:03 accel.accel_copy_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:34.333 15:49:03 accel.accel_copy_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:07:34.333 15:49:03 accel.accel_copy_crc32c -- accel/accel.sh@41 -- # jq -r . 00:07:34.333 [2024-07-15 15:49:03.041389] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 00:07:34.333 [2024-07-15 15:49:03.041439] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3599328 ] 00:07:34.333 EAL: No free 2048 kB hugepages reported on node 1 00:07:34.333 [2024-07-15 15:49:03.094853] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:34.333 [2024-07-15 15:49:03.168797] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:34.333 15:49:03 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:34.333 15:49:03 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:34.333 15:49:03 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:34.333 15:49:03 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:34.333 15:49:03 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:34.333 15:49:03 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:34.333 15:49:03 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:34.333 15:49:03 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:34.333 15:49:03 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0x1 00:07:34.333 15:49:03 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:34.333 15:49:03 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:34.333 15:49:03 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:34.333 15:49:03 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:34.333 15:49:03 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:34.333 15:49:03 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:34.333 15:49:03 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:34.333 15:49:03 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:34.333 15:49:03 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:34.333 15:49:03 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:34.333 15:49:03 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:34.333 15:49:03 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=copy_crc32c 00:07:34.333 15:49:03 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:34.333 15:49:03 accel.accel_copy_crc32c -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:07:34.333 15:49:03 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:34.333 15:49:03 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:34.333 15:49:03 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0 00:07:34.333 15:49:03 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:34.333 15:49:03 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:34.333 15:49:03 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # 
read -r var val 00:07:34.333 15:49:03 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:34.333 15:49:03 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:34.333 15:49:03 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:34.333 15:49:03 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:34.333 15:49:03 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:34.333 15:49:03 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:34.333 15:49:03 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:34.333 15:49:03 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:34.333 15:49:03 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:34.333 15:49:03 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:34.333 15:49:03 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:34.333 15:49:03 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:34.333 15:49:03 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=software 00:07:34.333 15:49:03 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:34.333 15:49:03 accel.accel_copy_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:07:34.333 15:49:03 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:34.334 15:49:03 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:34.334 15:49:03 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:07:34.334 15:49:03 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:34.334 15:49:03 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:34.334 15:49:03 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:34.334 15:49:03 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:07:34.334 15:49:03 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:34.334 15:49:03 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:34.334 15:49:03 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:34.334 15:49:03 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=1 00:07:34.334 15:49:03 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:34.334 15:49:03 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:34.334 15:49:03 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:34.334 15:49:03 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:07:34.334 15:49:03 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:34.334 15:49:03 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:34.334 15:49:03 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:34.334 15:49:03 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=Yes 00:07:34.334 15:49:03 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:34.334 15:49:03 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:34.334 15:49:03 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:34.334 15:49:03 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:34.334 15:49:03 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:34.334 15:49:03 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:34.334 15:49:03 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:34.334 15:49:03 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:34.334 
15:49:03 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:34.334 15:49:03 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:34.334 15:49:03 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:35.710 15:49:04 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:35.710 15:49:04 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:35.710 15:49:04 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:35.710 15:49:04 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:35.710 15:49:04 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:35.710 15:49:04 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:35.710 15:49:04 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:35.710 15:49:04 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:35.710 15:49:04 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:35.710 15:49:04 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:35.710 15:49:04 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:35.710 15:49:04 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:35.710 15:49:04 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:35.710 15:49:04 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:35.710 15:49:04 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:35.710 15:49:04 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:35.710 15:49:04 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:35.710 15:49:04 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:35.710 15:49:04 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:35.710 15:49:04 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:35.710 15:49:04 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:35.710 15:49:04 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:35.710 15:49:04 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:35.710 15:49:04 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:35.710 15:49:04 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:35.710 15:49:04 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:07:35.710 15:49:04 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:35.710 00:07:35.710 real 0m1.334s 00:07:35.710 user 0m1.239s 00:07:35.710 sys 0m0.107s 00:07:35.710 15:49:04 accel.accel_copy_crc32c -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:35.710 15:49:04 accel.accel_copy_crc32c -- common/autotest_common.sh@10 -- # set +x 00:07:35.710 ************************************ 00:07:35.710 END TEST accel_copy_crc32c 00:07:35.710 ************************************ 00:07:35.710 15:49:04 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:35.710 15:49:04 accel -- accel/accel.sh@106 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:07:35.710 15:49:04 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:07:35.710 15:49:04 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:35.710 15:49:04 accel -- common/autotest_common.sh@10 -- # set +x 00:07:35.710 ************************************ 00:07:35.710 START TEST accel_copy_crc32c_C2 00:07:35.710 ************************************ 00:07:35.710 15:49:04 
accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:07:35.710 15:49:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:07:35.710 15:49:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:07:35.710 15:49:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:07:35.710 15:49:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:35.710 15:49:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:35.710 15:49:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:07:35.710 15:49:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:07:35.710 15:49:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:35.710 15:49:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:35.710 15:49:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:35.710 15:49:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:35.710 15:49:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:35.710 15:49:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:07:35.710 15:49:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:07:35.710 [2024-07-15 15:49:04.417439] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 00:07:35.710 [2024-07-15 15:49:04.417486] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3599577 ] 00:07:35.710 EAL: No free 2048 kB hugepages reported on node 1 00:07:35.710 [2024-07-15 15:49:04.469244] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:35.710 [2024-07-15 15:49:04.543020] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:35.710 15:49:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:35.710 15:49:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:35.710 15:49:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:35.710 15:49:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:35.710 15:49:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:35.710 15:49:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:35.710 15:49:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:35.711 15:49:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:35.711 15:49:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:07:35.711 15:49:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:35.711 15:49:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:35.711 15:49:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:35.711 15:49:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:35.711 15:49:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:35.711 15:49:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:35.711 15:49:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 
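copy_crc32c fuses two steps into one submission: copy a source buffer and compute CRC-32C over the copied data. The _C2 variant starting here adds -C 2, which the test names suggest chains two buffers per operation instead of one (the plain copy and fill cases never pass -C). Reproducing this exact case outside the harness:

    cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    # Chained copy+crc32c for one second, verifying results (-y).
    ./build/examples/accel_perf -t 1 -w copy_crc32c -y -C 2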
00:07:35.711 15:49:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:35.711 15:49:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:35.711 15:49:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:35.711 15:49:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:35.711 15:49:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=copy_crc32c 00:07:35.711 15:49:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:35.711 15:49:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:07:35.711 15:49:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:35.711 15:49:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:35.711 15:49:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:07:35.711 15:49:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:35.711 15:49:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:35.711 15:49:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:35.711 15:49:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:35.711 15:49:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:35.711 15:49:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:35.711 15:49:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:35.711 15:49:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='8192 bytes' 00:07:35.711 15:49:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:35.711 15:49:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:35.711 15:49:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:35.711 15:49:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:35.711 15:49:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:35.711 15:49:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:35.711 15:49:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:35.711 15:49:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:07:35.711 15:49:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:35.711 15:49:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:07:35.711 15:49:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:35.711 15:49:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:35.711 15:49:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:07:35.711 15:49:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:35.711 15:49:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:35.711 15:49:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:35.711 15:49:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:07:35.711 15:49:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:35.711 15:49:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:35.711 15:49:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:35.711 15:49:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:07:35.711 15:49:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:35.711 15:49:04 accel.accel_copy_crc32c_C2 -- 
accel/accel.sh@19 -- # IFS=: 00:07:35.711 15:49:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:35.711 15:49:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:07:35.711 15:49:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:35.711 15:49:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:35.711 15:49:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:35.711 15:49:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:07:35.711 15:49:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:35.711 15:49:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:35.711 15:49:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:35.711 15:49:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:35.711 15:49:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:35.711 15:49:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:35.711 15:49:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:35.711 15:49:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:35.711 15:49:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:35.711 15:49:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:35.711 15:49:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:37.089 15:49:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:37.089 15:49:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:37.089 15:49:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:37.089 15:49:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:37.089 15:49:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:37.089 15:49:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:37.089 15:49:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:37.089 15:49:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:37.089 15:49:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:37.089 15:49:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:37.089 15:49:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:37.089 15:49:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:37.089 15:49:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:37.089 15:49:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:37.089 15:49:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:37.089 15:49:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:37.089 15:49:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:37.089 15:49:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:37.089 15:49:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:37.089 15:49:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:37.089 15:49:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:37.089 15:49:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:37.089 15:49:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:37.089 15:49:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 
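Once the one-second run ends, the harness asserts on what the parse loop recorded: the checks that follow ("[[ -n software ]]", "[[ -n copy_crc32c ]]", "software == software") confirm that a module and an opcode were reported at all, and that the software engine really served the request. Restated plainly (an illustrative paraphrase, not the autotest source):

    # Post-run guards equivalent to the [[ ... ]] checks in the trace.
    [[ -n $accel_module && -n $accel_opc ]] || exit 1   # both were captured
    [[ $accel_module == software ]] || exit 1           # software engine handled it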
00:07:37.089 15:49:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:37.089 15:49:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:07:37.089 15:49:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:37.089 00:07:37.089 real 0m1.323s 00:07:37.089 user 0m1.226s 00:07:37.089 sys 0m0.110s 00:07:37.089 15:49:05 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:37.089 15:49:05 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:07:37.089 ************************************ 00:07:37.089 END TEST accel_copy_crc32c_C2 00:07:37.089 ************************************ 00:07:37.089 15:49:05 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:37.089 15:49:05 accel -- accel/accel.sh@107 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:07:37.089 15:49:05 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:07:37.089 15:49:05 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:37.089 15:49:05 accel -- common/autotest_common.sh@10 -- # set +x 00:07:37.089 ************************************ 00:07:37.089 START TEST accel_dualcast 00:07:37.089 ************************************ 00:07:37.089 15:49:05 accel.accel_dualcast -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dualcast -y 00:07:37.089 15:49:05 accel.accel_dualcast -- accel/accel.sh@16 -- # local accel_opc 00:07:37.089 15:49:05 accel.accel_dualcast -- accel/accel.sh@17 -- # local accel_module 00:07:37.089 15:49:05 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:37.089 15:49:05 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:37.089 15:49:05 accel.accel_dualcast -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:07:37.089 15:49:05 accel.accel_dualcast -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:07:37.089 15:49:05 accel.accel_dualcast -- accel/accel.sh@12 -- # build_accel_config 00:07:37.089 15:49:05 accel.accel_dualcast -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:37.089 15:49:05 accel.accel_dualcast -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:37.089 15:49:05 accel.accel_dualcast -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:37.089 15:49:05 accel.accel_dualcast -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:37.089 15:49:05 accel.accel_dualcast -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:37.089 15:49:05 accel.accel_dualcast -- accel/accel.sh@40 -- # local IFS=, 00:07:37.089 15:49:05 accel.accel_dualcast -- accel/accel.sh@41 -- # jq -r . 00:07:37.089 [2024-07-15 15:49:05.807489] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 
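dualcast, the last workload started in this section, copies one source buffer to two destinations in a single operation. The EAL banner printed for it below is identical to every run above: -c 0x1 is the DPDK core mask, and with only bit 0 set each test reports "Total cores available: 1" and a single reactor on core 0. Decoding such a mask in shell:

    # Enumerate the CPU cores selected by a DPDK core mask.
    mask=0x1
    for ((i = 0; i < 8; i++)); do
        (( (mask >> i) & 1 )) && echo "core $i enabled"   # prints: core 0 enabled
    done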
00:07:37.089 [2024-07-15 15:49:05.807557] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3599828 ] 00:07:37.089 EAL: No free 2048 kB hugepages reported on node 1 00:07:37.089 [2024-07-15 15:49:05.863230] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:37.089 [2024-07-15 15:49:05.937834] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:37.089 15:49:05 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:37.089 15:49:05 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:37.089 15:49:05 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:37.089 15:49:05 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:37.089 15:49:05 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:37.089 15:49:05 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:37.089 15:49:05 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:37.089 15:49:05 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:37.089 15:49:05 accel.accel_dualcast -- accel/accel.sh@20 -- # val=0x1 00:07:37.089 15:49:05 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:37.089 15:49:05 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:37.089 15:49:05 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:37.089 15:49:05 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:37.089 15:49:05 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:37.089 15:49:05 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:37.089 15:49:05 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:37.089 15:49:05 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:37.089 15:49:05 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:37.089 15:49:05 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:37.089 15:49:05 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:37.089 15:49:05 accel.accel_dualcast -- accel/accel.sh@20 -- # val=dualcast 00:07:37.089 15:49:05 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:37.089 15:49:05 accel.accel_dualcast -- accel/accel.sh@23 -- # accel_opc=dualcast 00:07:37.089 15:49:05 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:37.089 15:49:05 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:37.089 15:49:05 accel.accel_dualcast -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:37.089 15:49:05 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:37.089 15:49:05 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:37.089 15:49:05 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:37.089 15:49:05 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:37.089 15:49:05 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:37.089 15:49:05 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:37.089 15:49:05 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:37.089 15:49:05 accel.accel_dualcast -- accel/accel.sh@20 -- # val=software 00:07:37.089 15:49:05 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:37.089 15:49:05 accel.accel_dualcast -- accel/accel.sh@22 -- # accel_module=software 00:07:37.089 15:49:05 accel.accel_dualcast -- accel/accel.sh@19 -- # 
IFS=: 00:07:37.089 15:49:05 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:37.089 15:49:05 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:07:37.089 15:49:05 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:37.089 15:49:05 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:37.089 15:49:05 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:37.089 15:49:05 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:07:37.089 15:49:05 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:37.089 15:49:05 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:37.089 15:49:05 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:37.089 15:49:05 accel.accel_dualcast -- accel/accel.sh@20 -- # val=1 00:07:37.089 15:49:05 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:37.089 15:49:05 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:37.089 15:49:05 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:37.089 15:49:05 accel.accel_dualcast -- accel/accel.sh@20 -- # val='1 seconds' 00:07:37.089 15:49:05 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:37.089 15:49:05 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:37.089 15:49:05 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:37.089 15:49:05 accel.accel_dualcast -- accel/accel.sh@20 -- # val=Yes 00:07:37.089 15:49:05 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:37.089 15:49:05 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:37.089 15:49:05 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:37.089 15:49:05 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:37.089 15:49:05 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:37.089 15:49:05 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:37.089 15:49:05 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:37.089 15:49:05 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:37.090 15:49:05 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:37.090 15:49:05 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:37.090 15:49:05 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:38.469 15:49:07 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:38.469 15:49:07 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:38.469 15:49:07 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:38.469 15:49:07 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:38.469 15:49:07 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:38.469 15:49:07 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:38.469 15:49:07 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:38.469 15:49:07 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:38.469 15:49:07 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:38.469 15:49:07 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:38.469 15:49:07 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:38.469 15:49:07 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:38.469 15:49:07 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:38.469 15:49:07 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:38.469 15:49:07 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:38.469 15:49:07 
accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:38.469 15:49:07 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:38.469 15:49:07 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:38.469 15:49:07 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:38.469 15:49:07 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:38.469 15:49:07 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:38.469 15:49:07 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:38.469 15:49:07 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:38.469 15:49:07 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:38.469 15:49:07 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:38.469 15:49:07 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n dualcast ]] 00:07:38.469 15:49:07 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:38.469 00:07:38.469 real 0m1.337s 00:07:38.469 user 0m1.236s 00:07:38.469 sys 0m0.114s 00:07:38.469 15:49:07 accel.accel_dualcast -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:38.469 15:49:07 accel.accel_dualcast -- common/autotest_common.sh@10 -- # set +x 00:07:38.469 ************************************ 00:07:38.469 END TEST accel_dualcast 00:07:38.469 ************************************ 00:07:38.469 15:49:07 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:38.469 15:49:07 accel -- accel/accel.sh@108 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:07:38.469 15:49:07 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:07:38.469 15:49:07 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:38.469 15:49:07 accel -- common/autotest_common.sh@10 -- # set +x 00:07:38.469 ************************************ 00:07:38.469 START TEST accel_compare 00:07:38.469 ************************************ 00:07:38.469 15:49:07 accel.accel_compare -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w compare -y 00:07:38.469 15:49:07 accel.accel_compare -- accel/accel.sh@16 -- # local accel_opc 00:07:38.469 15:49:07 accel.accel_compare -- accel/accel.sh@17 -- # local accel_module 00:07:38.469 15:49:07 accel.accel_compare -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:07:38.469 15:49:07 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:38.469 15:49:07 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:38.469 15:49:07 accel.accel_compare -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:07:38.469 15:49:07 accel.accel_compare -- accel/accel.sh@12 -- # build_accel_config 00:07:38.469 15:49:07 accel.accel_compare -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:38.469 15:49:07 accel.accel_compare -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:38.469 15:49:07 accel.accel_compare -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:38.469 15:49:07 accel.accel_compare -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:38.469 15:49:07 accel.accel_compare -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:38.469 15:49:07 accel.accel_compare -- accel/accel.sh@40 -- # local IFS=, 00:07:38.469 15:49:07 accel.accel_compare -- accel/accel.sh@41 -- # jq -r . 00:07:38.469 [2024-07-15 15:49:07.193872] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 
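[Note] Every test ends the same way: three '[[ ]]' checks at accel.sh line 27, then bash's 'time' summary (the real/user/sys lines). The '\s\o\f\t\w\a\r\e' spelling is not corruption — xtrace escapes each character of a quoted '[[ == ]]' right-hand side to show it is matched literally rather than as a glob. The checks amount to the following, where the software engine is the one expected on this runner:

    # A test passes when an engine and an opcode were parsed from the
    # accel_perf output and the engine is the software implementation.
    [[ -n $accel_module && -n $accel_opc && $accel_module == software ]] \
        && echo PASS || echo FAIL
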
00:07:38.469 [2024-07-15 15:49:07.193910] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3600077 ] 00:07:38.469 EAL: No free 2048 kB hugepages reported on node 1 00:07:38.469 [2024-07-15 15:49:07.246242] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:38.469 [2024-07-15 15:49:07.319737] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:38.469 15:49:07 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:38.469 15:49:07 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:38.469 15:49:07 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:38.469 15:49:07 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:38.469 15:49:07 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:38.469 15:49:07 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:38.469 15:49:07 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:38.469 15:49:07 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:38.469 15:49:07 accel.accel_compare -- accel/accel.sh@20 -- # val=0x1 00:07:38.469 15:49:07 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:38.469 15:49:07 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:38.469 15:49:07 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:38.469 15:49:07 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:38.469 15:49:07 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:38.469 15:49:07 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:38.469 15:49:07 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:38.469 15:49:07 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:38.469 15:49:07 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:38.469 15:49:07 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:38.469 15:49:07 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:38.469 15:49:07 accel.accel_compare -- accel/accel.sh@20 -- # val=compare 00:07:38.469 15:49:07 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:38.469 15:49:07 accel.accel_compare -- accel/accel.sh@23 -- # accel_opc=compare 00:07:38.469 15:49:07 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:38.469 15:49:07 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:38.469 15:49:07 accel.accel_compare -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:38.469 15:49:07 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:38.469 15:49:07 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:38.469 15:49:07 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:38.469 15:49:07 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:38.469 15:49:07 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:38.469 15:49:07 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:38.469 15:49:07 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:38.469 15:49:07 accel.accel_compare -- accel/accel.sh@20 -- # val=software 00:07:38.469 15:49:07 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:38.469 15:49:07 accel.accel_compare -- accel/accel.sh@22 -- # accel_module=software 00:07:38.469 15:49:07 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:38.469 15:49:07 
accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:38.469 15:49:07 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:07:38.469 15:49:07 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:38.469 15:49:07 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:38.469 15:49:07 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:38.469 15:49:07 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:07:38.469 15:49:07 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:38.469 15:49:07 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:38.469 15:49:07 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:38.469 15:49:07 accel.accel_compare -- accel/accel.sh@20 -- # val=1 00:07:38.469 15:49:07 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:38.469 15:49:07 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:38.469 15:49:07 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:38.469 15:49:07 accel.accel_compare -- accel/accel.sh@20 -- # val='1 seconds' 00:07:38.469 15:49:07 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:38.469 15:49:07 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:38.469 15:49:07 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:38.469 15:49:07 accel.accel_compare -- accel/accel.sh@20 -- # val=Yes 00:07:38.469 15:49:07 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:38.469 15:49:07 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:38.469 15:49:07 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:38.469 15:49:07 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:38.469 15:49:07 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:38.469 15:49:07 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:38.469 15:49:07 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:38.469 15:49:07 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:38.469 15:49:07 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:38.469 15:49:07 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:38.469 15:49:07 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:39.848 15:49:08 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:39.848 15:49:08 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:39.848 15:49:08 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:39.848 15:49:08 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:39.848 15:49:08 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:39.848 15:49:08 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:39.848 15:49:08 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:39.848 15:49:08 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:39.848 15:49:08 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:39.848 15:49:08 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:39.848 15:49:08 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:39.848 15:49:08 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:39.848 15:49:08 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:39.848 15:49:08 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:39.848 15:49:08 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:39.848 15:49:08 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:39.848 
15:49:08 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:39.848 15:49:08 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:39.848 15:49:08 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:39.848 15:49:08 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:39.848 15:49:08 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:39.848 15:49:08 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:39.848 15:49:08 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:39.848 15:49:08 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:39.848 15:49:08 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:39.848 15:49:08 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n compare ]] 00:07:39.848 15:49:08 accel.accel_compare -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:39.848 00:07:39.848 real 0m1.321s 00:07:39.848 user 0m1.228s 00:07:39.848 sys 0m0.106s 00:07:39.848 15:49:08 accel.accel_compare -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:39.848 15:49:08 accel.accel_compare -- common/autotest_common.sh@10 -- # set +x 00:07:39.848 ************************************ 00:07:39.848 END TEST accel_compare 00:07:39.848 ************************************ 00:07:39.848 15:49:08 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:39.848 15:49:08 accel -- accel/accel.sh@109 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:07:39.848 15:49:08 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:07:39.848 15:49:08 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:39.848 15:49:08 accel -- common/autotest_common.sh@10 -- # set +x 00:07:39.848 ************************************ 00:07:39.848 START TEST accel_xor 00:07:39.848 ************************************ 00:07:39.848 15:49:08 accel.accel_xor -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w xor -y 00:07:39.848 15:49:08 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:07:39.848 15:49:08 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:07:39.848 15:49:08 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:07:39.848 15:49:08 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:39.848 15:49:08 accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:07:39.848 15:49:08 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:39.848 15:49:08 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:07:39.848 15:49:08 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:39.848 15:49:08 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:39.848 15:49:08 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:39.848 15:49:08 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:39.848 15:49:08 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:39.848 15:49:08 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:07:39.848 15:49:08 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:07:39.848 [2024-07-15 15:49:08.572464] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 
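[Note] The 'run_test accel_xor accel_test -t 1 -w xor -y' line above shows the wrapper that produces the START/END banners and timing for each case. A simplified stand-in for that pattern — the real helper in autotest_common.sh also manages xtrace via xtrace_disable and performs argument checks such as '[' 7 -le 1 ']', which this sketch omits:

    # Minimal stand-in for the traced run_test helper.
    run_test() {
        local name=$1; shift
        echo "START TEST $name"
        time "$@"                 # emits the real/user/sys lines seen in the log
        local rc=$?
        echo "END TEST $name"
        return $rc
    }
    run_test demo sleep 1         # usage example with a placeholder command
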
00:07:39.848 [2024-07-15 15:49:08.572502] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3600327 ] 00:07:39.848 EAL: No free 2048 kB hugepages reported on node 1 00:07:39.848 [2024-07-15 15:49:08.625073] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:39.848 [2024-07-15 15:49:08.698529] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:39.849 15:49:08 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:39.849 15:49:08 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:39.849 15:49:08 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:39.849 15:49:08 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:39.849 15:49:08 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:39.849 15:49:08 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:39.849 15:49:08 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:39.849 15:49:08 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:39.849 15:49:08 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:07:39.849 15:49:08 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:39.849 15:49:08 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:39.849 15:49:08 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:39.849 15:49:08 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:39.849 15:49:08 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:39.849 15:49:08 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:39.849 15:49:08 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:39.849 15:49:08 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:39.849 15:49:08 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:39.849 15:49:08 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:39.849 15:49:08 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:39.849 15:49:08 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:07:39.849 15:49:08 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:39.849 15:49:08 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:07:39.849 15:49:08 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:39.849 15:49:08 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:39.849 15:49:08 accel.accel_xor -- accel/accel.sh@20 -- # val=2 00:07:39.849 15:49:08 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:39.849 15:49:08 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:39.849 15:49:08 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:39.849 15:49:08 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:39.849 15:49:08 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:39.849 15:49:08 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:39.849 15:49:08 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:39.849 15:49:08 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:39.849 15:49:08 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:39.849 15:49:08 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:39.849 15:49:08 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:39.849 15:49:08 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:07:39.849 15:49:08 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:39.849 15:49:08 accel.accel_xor -- 
accel/accel.sh@22 -- # accel_module=software 00:07:39.849 15:49:08 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:39.849 15:49:08 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:39.849 15:49:08 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:07:39.849 15:49:08 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:39.849 15:49:08 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:39.849 15:49:08 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:39.849 15:49:08 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:07:39.849 15:49:08 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:39.849 15:49:08 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:39.849 15:49:08 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:39.849 15:49:08 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:07:39.849 15:49:08 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:39.849 15:49:08 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:39.849 15:49:08 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:39.849 15:49:08 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:07:39.849 15:49:08 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:39.849 15:49:08 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:39.849 15:49:08 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:39.849 15:49:08 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:07:39.849 15:49:08 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:39.849 15:49:08 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:39.849 15:49:08 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:39.849 15:49:08 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:39.849 15:49:08 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:39.849 15:49:08 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:39.849 15:49:08 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:39.849 15:49:08 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:39.849 15:49:08 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:39.849 15:49:08 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:39.849 15:49:08 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:41.224 15:49:09 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:41.225 15:49:09 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:41.225 15:49:09 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:41.225 15:49:09 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:41.225 15:49:09 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:41.225 15:49:09 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:41.225 15:49:09 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:41.225 15:49:09 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:41.225 15:49:09 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:41.225 15:49:09 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:41.225 15:49:09 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:41.225 15:49:09 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:41.225 15:49:09 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:41.225 15:49:09 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:41.225 15:49:09 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:41.225 15:49:09 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:41.225 15:49:09 accel.accel_xor -- accel/accel.sh@20 -- 
# val= 00:07:41.225 15:49:09 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:41.225 15:49:09 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:41.225 15:49:09 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:41.225 15:49:09 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:41.225 15:49:09 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:41.225 15:49:09 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:41.225 15:49:09 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:41.225 15:49:09 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:41.225 15:49:09 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:07:41.225 15:49:09 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:41.225 00:07:41.225 real 0m1.319s 00:07:41.225 user 0m1.221s 00:07:41.225 sys 0m0.110s 00:07:41.225 15:49:09 accel.accel_xor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:41.225 15:49:09 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:07:41.225 ************************************ 00:07:41.225 END TEST accel_xor 00:07:41.225 ************************************ 00:07:41.225 15:49:09 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:41.225 15:49:09 accel -- accel/accel.sh@110 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:07:41.225 15:49:09 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:07:41.225 15:49:09 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:41.225 15:49:09 accel -- common/autotest_common.sh@10 -- # set +x 00:07:41.225 ************************************ 00:07:41.225 START TEST accel_xor 00:07:41.225 ************************************ 00:07:41.225 15:49:09 accel.accel_xor -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w xor -y -x 3 00:07:41.225 15:49:09 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:07:41.225 15:49:09 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:07:41.225 15:49:09 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:41.225 15:49:09 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:41.225 15:49:09 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:07:41.225 15:49:09 accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:07:41.225 15:49:09 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:07:41.225 15:49:09 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:41.225 15:49:09 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:41.225 15:49:09 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:41.225 15:49:09 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:41.225 15:49:09 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:41.225 15:49:09 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:07:41.225 15:49:09 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:07:41.225 [2024-07-15 15:49:09.967002] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 
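[Note] The second accel_xor case is the same workload rerun with '-x 3'. In the trace the only configuration difference from the previous run is 'val=3' where the first run had 'val=2', which reads as the number of xor source buffers; that interpretation is inferred from the log, not from accel_perf documentation. The equivalent direct invocation, under the same assumption:

    # Xor across three source buffers instead of the two used above.
    ./build/examples/accel_perf -t 1 -w xor -y -x 3
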
00:07:41.225 [2024-07-15 15:49:09.967061] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3600579 ] 00:07:41.225 EAL: No free 2048 kB hugepages reported on node 1 00:07:41.225 [2024-07-15 15:49:10.025900] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:41.225 [2024-07-15 15:49:10.110657] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:41.225 15:49:10 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:41.225 15:49:10 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:41.225 15:49:10 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:41.225 15:49:10 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:41.225 15:49:10 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:41.225 15:49:10 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:41.225 15:49:10 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:41.225 15:49:10 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:41.225 15:49:10 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:07:41.225 15:49:10 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:41.225 15:49:10 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:41.225 15:49:10 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:41.225 15:49:10 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:41.225 15:49:10 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:41.225 15:49:10 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:41.225 15:49:10 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:41.225 15:49:10 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:41.225 15:49:10 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:41.225 15:49:10 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:41.225 15:49:10 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:41.225 15:49:10 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:07:41.225 15:49:10 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:41.225 15:49:10 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:07:41.485 15:49:10 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:41.485 15:49:10 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:41.485 15:49:10 accel.accel_xor -- accel/accel.sh@20 -- # val=3 00:07:41.485 15:49:10 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:41.485 15:49:10 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:41.485 15:49:10 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:41.485 15:49:10 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:41.485 15:49:10 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:41.485 15:49:10 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:41.485 15:49:10 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:41.485 15:49:10 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:41.485 15:49:10 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:41.485 15:49:10 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:41.485 15:49:10 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:41.485 15:49:10 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:07:41.485 15:49:10 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:41.485 15:49:10 accel.accel_xor -- 
accel/accel.sh@22 -- # accel_module=software 00:07:41.485 15:49:10 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:41.485 15:49:10 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:41.485 15:49:10 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:07:41.485 15:49:10 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:41.485 15:49:10 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:41.485 15:49:10 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:41.485 15:49:10 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:07:41.485 15:49:10 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:41.485 15:49:10 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:41.485 15:49:10 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:41.485 15:49:10 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:07:41.485 15:49:10 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:41.485 15:49:10 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:41.485 15:49:10 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:41.485 15:49:10 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:07:41.485 15:49:10 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:41.485 15:49:10 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:41.485 15:49:10 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:41.485 15:49:10 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:07:41.485 15:49:10 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:41.485 15:49:10 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:41.485 15:49:10 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:41.485 15:49:10 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:41.485 15:49:10 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:41.485 15:49:10 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:41.485 15:49:10 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:41.485 15:49:10 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:41.485 15:49:10 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:41.485 15:49:10 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:41.485 15:49:10 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:42.418 15:49:11 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:42.418 15:49:11 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:42.418 15:49:11 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:42.418 15:49:11 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:42.418 15:49:11 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:42.418 15:49:11 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:42.418 15:49:11 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:42.418 15:49:11 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:42.418 15:49:11 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:42.418 15:49:11 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:42.418 15:49:11 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:42.418 15:49:11 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:42.418 15:49:11 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:42.418 15:49:11 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:42.418 15:49:11 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:42.418 15:49:11 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:42.418 15:49:11 accel.accel_xor -- accel/accel.sh@20 -- 
# val= 00:07:42.418 15:49:11 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:42.418 15:49:11 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:42.418 15:49:11 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:42.418 15:49:11 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:42.418 15:49:11 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:42.418 15:49:11 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:42.418 15:49:11 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:42.418 15:49:11 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:42.418 15:49:11 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:07:42.418 15:49:11 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:42.418 00:07:42.418 real 0m1.351s 00:07:42.418 user 0m1.240s 00:07:42.418 sys 0m0.123s 00:07:42.418 15:49:11 accel.accel_xor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:42.418 15:49:11 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:07:42.418 ************************************ 00:07:42.418 END TEST accel_xor 00:07:42.418 ************************************ 00:07:42.418 15:49:11 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:42.418 15:49:11 accel -- accel/accel.sh@111 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:07:42.418 15:49:11 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:07:42.418 15:49:11 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:42.418 15:49:11 accel -- common/autotest_common.sh@10 -- # set +x 00:07:42.418 ************************************ 00:07:42.418 START TEST accel_dif_verify 00:07:42.418 ************************************ 00:07:42.418 15:49:11 accel.accel_dif_verify -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_verify 00:07:42.418 15:49:11 accel.accel_dif_verify -- accel/accel.sh@16 -- # local accel_opc 00:07:42.418 15:49:11 accel.accel_dif_verify -- accel/accel.sh@17 -- # local accel_module 00:07:42.418 15:49:11 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:42.418 15:49:11 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:42.418 15:49:11 accel.accel_dif_verify -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:07:42.418 15:49:11 accel.accel_dif_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:07:42.418 15:49:11 accel.accel_dif_verify -- accel/accel.sh@12 -- # build_accel_config 00:07:42.418 15:49:11 accel.accel_dif_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:42.418 15:49:11 accel.accel_dif_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:42.675 15:49:11 accel.accel_dif_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:42.675 15:49:11 accel.accel_dif_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:42.676 15:49:11 accel.accel_dif_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:42.676 15:49:11 accel.accel_dif_verify -- accel/accel.sh@40 -- # local IFS=, 00:07:42.676 15:49:11 accel.accel_dif_verify -- accel/accel.sh@41 -- # jq -r . 00:07:42.676 [2024-07-15 15:49:11.372377] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 
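[Note] The dif_verify configuration below carries values the copy-style tests did not: two '4096 bytes' buffers, a '512 bytes' value, an '8 bytes' value, and 'val=No' where earlier tests set 'val=Yes'. Read as a 512-byte block carrying an 8-byte DIF tuple — an inference from the values, not something the log states — a 4096-byte transfer breaks down as:

    # 4096-byte buffer at 512 bytes per block -> 8 blocks, each with
    # 8 bytes of protection information.
    echo "$((4096 / 512)) blocks, $(( (4096 / 512) * 8 )) bytes of DIF metadata"
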
00:07:42.676 [2024-07-15 15:49:11.372437] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3600828 ] 00:07:42.676 EAL: No free 2048 kB hugepages reported on node 1 00:07:42.676 [2024-07-15 15:49:11.429720] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:42.676 [2024-07-15 15:49:11.502708] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:42.676 15:49:11 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:42.676 15:49:11 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:42.676 15:49:11 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:42.676 15:49:11 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:42.676 15:49:11 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:42.676 15:49:11 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:42.676 15:49:11 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:42.676 15:49:11 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:42.676 15:49:11 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=0x1 00:07:42.676 15:49:11 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:42.676 15:49:11 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:42.676 15:49:11 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:42.676 15:49:11 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:42.676 15:49:11 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:42.676 15:49:11 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:42.676 15:49:11 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:42.676 15:49:11 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:42.676 15:49:11 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:42.676 15:49:11 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:42.676 15:49:11 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:42.676 15:49:11 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=dif_verify 00:07:42.676 15:49:11 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:42.676 15:49:11 accel.accel_dif_verify -- accel/accel.sh@23 -- # accel_opc=dif_verify 00:07:42.676 15:49:11 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:42.676 15:49:11 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:42.676 15:49:11 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:42.676 15:49:11 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:42.676 15:49:11 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:42.676 15:49:11 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:42.676 15:49:11 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:42.676 15:49:11 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:42.676 15:49:11 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:42.676 15:49:11 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:42.676 15:49:11 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='512 bytes' 00:07:42.676 15:49:11 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:42.676 15:49:11 accel.accel_dif_verify -- accel/accel.sh@19 -- # 
IFS=: 00:07:42.676 15:49:11 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:42.676 15:49:11 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='8 bytes' 00:07:42.676 15:49:11 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:42.676 15:49:11 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:42.676 15:49:11 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:42.676 15:49:11 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:42.676 15:49:11 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:42.676 15:49:11 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:42.676 15:49:11 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:42.676 15:49:11 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=software 00:07:42.676 15:49:11 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:42.676 15:49:11 accel.accel_dif_verify -- accel/accel.sh@22 -- # accel_module=software 00:07:42.676 15:49:11 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:42.676 15:49:11 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:42.676 15:49:11 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:07:42.676 15:49:11 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:42.676 15:49:11 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:42.676 15:49:11 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:42.676 15:49:11 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:07:42.676 15:49:11 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:42.676 15:49:11 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:42.676 15:49:11 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:42.676 15:49:11 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=1 00:07:42.676 15:49:11 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:42.676 15:49:11 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:42.676 15:49:11 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:42.676 15:49:11 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='1 seconds' 00:07:42.676 15:49:11 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:42.676 15:49:11 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:42.676 15:49:11 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:42.676 15:49:11 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=No 00:07:42.676 15:49:11 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:42.676 15:49:11 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:42.676 15:49:11 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:42.676 15:49:11 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:42.676 15:49:11 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:42.676 15:49:11 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:42.676 15:49:11 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:42.676 15:49:11 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:42.676 15:49:11 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:42.676 15:49:11 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:42.676 15:49:11 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:44.049 15:49:12 accel.accel_dif_verify -- accel/accel.sh@20 -- # 
val= 00:07:44.049 15:49:12 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:44.049 15:49:12 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:44.049 15:49:12 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:44.049 15:49:12 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:44.049 15:49:12 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:44.049 15:49:12 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:44.049 15:49:12 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:44.049 15:49:12 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:44.049 15:49:12 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:44.049 15:49:12 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:44.049 15:49:12 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:44.049 15:49:12 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:44.049 15:49:12 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:44.049 15:49:12 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:44.049 15:49:12 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:44.049 15:49:12 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:44.049 15:49:12 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:44.049 15:49:12 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:44.049 15:49:12 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:44.049 15:49:12 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:44.049 15:49:12 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:44.049 15:49:12 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:44.049 15:49:12 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:44.049 15:49:12 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:44.049 15:49:12 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n dif_verify ]] 00:07:44.049 15:49:12 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:44.049 00:07:44.049 real 0m1.336s 00:07:44.049 user 0m1.223s 00:07:44.049 sys 0m0.129s 00:07:44.049 15:49:12 accel.accel_dif_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:44.049 15:49:12 accel.accel_dif_verify -- common/autotest_common.sh@10 -- # set +x 00:07:44.049 ************************************ 00:07:44.049 END TEST accel_dif_verify 00:07:44.049 ************************************ 00:07:44.049 15:49:12 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:44.049 15:49:12 accel -- accel/accel.sh@112 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:07:44.049 15:49:12 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:07:44.049 15:49:12 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:44.049 15:49:12 accel -- common/autotest_common.sh@10 -- # set +x 00:07:44.049 ************************************ 00:07:44.049 START TEST accel_dif_generate 00:07:44.049 ************************************ 00:07:44.049 15:49:12 accel.accel_dif_generate -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_generate 00:07:44.049 15:49:12 accel.accel_dif_generate -- accel/accel.sh@16 -- # local accel_opc 00:07:44.049 15:49:12 accel.accel_dif_generate -- accel/accel.sh@17 -- # local accel_module 00:07:44.049 15:49:12 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:44.049 
15:49:12 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:44.049 15:49:12 accel.accel_dif_generate -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:07:44.049 15:49:12 accel.accel_dif_generate -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:07:44.049 15:49:12 accel.accel_dif_generate -- accel/accel.sh@12 -- # build_accel_config 00:07:44.049 15:49:12 accel.accel_dif_generate -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:44.049 15:49:12 accel.accel_dif_generate -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:44.049 15:49:12 accel.accel_dif_generate -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:44.049 15:49:12 accel.accel_dif_generate -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:44.049 15:49:12 accel.accel_dif_generate -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:44.049 15:49:12 accel.accel_dif_generate -- accel/accel.sh@40 -- # local IFS=, 00:07:44.049 15:49:12 accel.accel_dif_generate -- accel/accel.sh@41 -- # jq -r . 00:07:44.049 [2024-07-15 15:49:12.773686] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 00:07:44.049 [2024-07-15 15:49:12.773746] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3601080 ] 00:07:44.049 EAL: No free 2048 kB hugepages reported on node 1 00:07:44.049 [2024-07-15 15:49:12.831330] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:44.049 [2024-07-15 15:49:12.905347] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:44.049 15:49:12 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:44.049 15:49:12 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:44.049 15:49:12 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:44.049 15:49:12 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:44.049 15:49:12 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:44.049 15:49:12 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:44.049 15:49:12 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:44.049 15:49:12 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:44.049 15:49:12 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=0x1 00:07:44.049 15:49:12 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:44.049 15:49:12 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:44.049 15:49:12 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:44.049 15:49:12 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:44.049 15:49:12 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:44.049 15:49:12 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:44.049 15:49:12 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:44.049 15:49:12 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:44.049 15:49:12 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:44.049 15:49:12 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:44.049 15:49:12 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:44.049 15:49:12 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=dif_generate 00:07:44.049 15:49:12 
accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:44.049 15:49:12 accel.accel_dif_generate -- accel/accel.sh@23 -- # accel_opc=dif_generate 00:07:44.049 15:49:12 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:44.049 15:49:12 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:44.049 15:49:12 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:44.049 15:49:12 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:44.049 15:49:12 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:44.049 15:49:12 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:44.049 15:49:12 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:44.049 15:49:12 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:44.049 15:49:12 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:44.049 15:49:12 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:44.049 15:49:12 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='512 bytes' 00:07:44.049 15:49:12 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:44.050 15:49:12 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:44.050 15:49:12 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:44.050 15:49:12 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='8 bytes' 00:07:44.050 15:49:12 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:44.050 15:49:12 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:44.050 15:49:12 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:44.050 15:49:12 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:44.050 15:49:12 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:44.050 15:49:12 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:44.050 15:49:12 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:44.050 15:49:12 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=software 00:07:44.050 15:49:12 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:44.050 15:49:12 accel.accel_dif_generate -- accel/accel.sh@22 -- # accel_module=software 00:07:44.050 15:49:12 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:44.050 15:49:12 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:44.050 15:49:12 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:07:44.050 15:49:12 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:44.050 15:49:12 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:44.050 15:49:12 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:44.050 15:49:12 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:07:44.050 15:49:12 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:44.050 15:49:12 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:44.050 15:49:12 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:44.050 15:49:12 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=1 00:07:44.050 15:49:12 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:44.050 15:49:12 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:44.050 15:49:12 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:44.050 15:49:12 accel.accel_dif_generate -- 
accel/accel.sh@20 -- # val='1 seconds' 00:07:44.050 15:49:12 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:44.050 15:49:12 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:44.050 15:49:12 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:44.050 15:49:12 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=No 00:07:44.050 15:49:12 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:44.050 15:49:12 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:44.050 15:49:12 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:44.050 15:49:12 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:44.050 15:49:12 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:44.050 15:49:12 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:44.050 15:49:12 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:44.050 15:49:12 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:44.050 15:49:12 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:44.050 15:49:12 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:44.050 15:49:12 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:45.425 15:49:14 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:45.425 15:49:14 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:45.425 15:49:14 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:45.425 15:49:14 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:45.425 15:49:14 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:45.425 15:49:14 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:45.425 15:49:14 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:45.425 15:49:14 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:45.425 15:49:14 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:45.425 15:49:14 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:45.425 15:49:14 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:45.425 15:49:14 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:45.425 15:49:14 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:45.425 15:49:14 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:45.425 15:49:14 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:45.425 15:49:14 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:45.425 15:49:14 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:45.425 15:49:14 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:45.425 15:49:14 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:45.425 15:49:14 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:45.425 15:49:14 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:45.425 15:49:14 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:45.425 15:49:14 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:45.425 15:49:14 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:45.425 15:49:14 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:45.425 15:49:14 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n dif_generate ]] 00:07:45.425 15:49:14 accel.accel_dif_generate -- 
accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:45.425 00:07:45.425 real 0m1.339s 00:07:45.425 user 0m1.242s 00:07:45.425 sys 0m0.111s 00:07:45.425 15:49:14 accel.accel_dif_generate -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:45.425 15:49:14 accel.accel_dif_generate -- common/autotest_common.sh@10 -- # set +x 00:07:45.425 ************************************ 00:07:45.425 END TEST accel_dif_generate 00:07:45.425 ************************************ 00:07:45.425 15:49:14 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:45.425 15:49:14 accel -- accel/accel.sh@113 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:07:45.425 15:49:14 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:07:45.425 15:49:14 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:45.425 15:49:14 accel -- common/autotest_common.sh@10 -- # set +x 00:07:45.425 ************************************ 00:07:45.425 START TEST accel_dif_generate_copy 00:07:45.425 ************************************ 00:07:45.425 15:49:14 accel.accel_dif_generate_copy -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_generate_copy 00:07:45.425 15:49:14 accel.accel_dif_generate_copy -- accel/accel.sh@16 -- # local accel_opc 00:07:45.425 15:49:14 accel.accel_dif_generate_copy -- accel/accel.sh@17 -- # local accel_module 00:07:45.425 15:49:14 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:45.425 15:49:14 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:45.425 15:49:14 accel.accel_dif_generate_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:07:45.425 15:49:14 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:07:45.425 15:49:14 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # build_accel_config 00:07:45.425 15:49:14 accel.accel_dif_generate_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:45.425 15:49:14 accel.accel_dif_generate_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:45.425 15:49:14 accel.accel_dif_generate_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:45.425 15:49:14 accel.accel_dif_generate_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:45.425 15:49:14 accel.accel_dif_generate_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:45.425 15:49:14 accel.accel_dif_generate_copy -- accel/accel.sh@40 -- # local IFS=, 00:07:45.425 15:49:14 accel.accel_dif_generate_copy -- accel/accel.sh@41 -- # jq -r . 00:07:45.425 [2024-07-15 15:49:14.171591] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 
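# Note on the accel_dif_generate case that finished just above (software
# module, PASS, real 0m1.339s): the harness drives it through accel_perf with
# the flags visible in the trace, and the vals read back ('4096 bytes' buffers,
# '512 bytes' blocks, '8 bytes' metadata) appear to describe the DIF layout it
# generates. A minimal sketch of rerunning it by hand, assuming this job's SPDK
# build tree and that no JSON accel config is needed (accel_json_cfg=() in the
# trace suggests the config piped over /dev/fd/62 was effectively empty):
#
#   cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
#   ./build/examples/accel_perf -t 1 -w dif_generate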
00:07:45.425 [2024-07-15 15:49:14.171643] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3601344 ] 00:07:45.425 EAL: No free 2048 kB hugepages reported on node 1 00:07:45.425 [2024-07-15 15:49:14.225177] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:45.425 [2024-07-15 15:49:14.299927] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:45.425 15:49:14 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:45.425 15:49:14 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:45.425 15:49:14 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:45.425 15:49:14 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:45.425 15:49:14 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:45.425 15:49:14 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:45.425 15:49:14 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:45.425 15:49:14 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:45.425 15:49:14 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=0x1 00:07:45.425 15:49:14 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:45.425 15:49:14 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:45.425 15:49:14 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:45.425 15:49:14 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:45.425 15:49:14 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:45.425 15:49:14 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:45.425 15:49:14 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:45.425 15:49:14 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:45.425 15:49:14 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:45.425 15:49:14 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:45.425 15:49:14 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:45.425 15:49:14 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=dif_generate_copy 00:07:45.425 15:49:14 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:45.425 15:49:14 accel.accel_dif_generate_copy -- accel/accel.sh@23 -- # accel_opc=dif_generate_copy 00:07:45.425 15:49:14 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:45.425 15:49:14 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:45.425 15:49:14 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:45.425 15:49:14 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:45.425 15:49:14 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:45.425 15:49:14 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:45.425 15:49:14 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:45.425 15:49:14 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:45.425 15:49:14 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:45.425 15:49:14 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var 
val 00:07:45.425 15:49:14 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:45.425 15:49:14 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:45.425 15:49:14 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:45.425 15:49:14 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:45.425 15:49:14 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=software 00:07:45.425 15:49:14 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:45.425 15:49:14 accel.accel_dif_generate_copy -- accel/accel.sh@22 -- # accel_module=software 00:07:45.425 15:49:14 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:45.425 15:49:14 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:45.425 15:49:14 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:07:45.425 15:49:14 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:45.425 15:49:14 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:45.425 15:49:14 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:45.425 15:49:14 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:07:45.425 15:49:14 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:45.425 15:49:14 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:45.425 15:49:14 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:45.425 15:49:14 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=1 00:07:45.425 15:49:14 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:45.425 15:49:14 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:45.425 15:49:14 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:45.425 15:49:14 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:07:45.425 15:49:14 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:45.425 15:49:14 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:45.425 15:49:14 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:45.425 15:49:14 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=No 00:07:45.425 15:49:14 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:45.425 15:49:14 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:45.425 15:49:14 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:45.425 15:49:14 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:45.425 15:49:14 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:45.425 15:49:14 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:45.425 15:49:14 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:45.425 15:49:14 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:45.425 15:49:14 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:45.425 15:49:14 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:45.425 15:49:14 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:46.837 15:49:15 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:46.837 15:49:15 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:46.837 15:49:15 accel.accel_dif_generate_copy -- 
accel/accel.sh@19 -- # IFS=: 00:07:46.837 15:49:15 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:46.837 15:49:15 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:46.837 15:49:15 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:46.837 15:49:15 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:46.837 15:49:15 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:46.837 15:49:15 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:46.837 15:49:15 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:46.837 15:49:15 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:46.837 15:49:15 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:46.837 15:49:15 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:46.837 15:49:15 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:46.837 15:49:15 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:46.837 15:49:15 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:46.837 15:49:15 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:46.837 15:49:15 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:46.837 15:49:15 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:46.837 15:49:15 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:46.837 15:49:15 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:46.837 15:49:15 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:46.837 15:49:15 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:46.837 15:49:15 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:46.837 15:49:15 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:46.837 15:49:15 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n dif_generate_copy ]] 00:07:46.837 15:49:15 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:46.837 00:07:46.837 real 0m1.334s 00:07:46.837 user 0m1.238s 00:07:46.837 sys 0m0.109s 00:07:46.837 15:49:15 accel.accel_dif_generate_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:46.837 15:49:15 accel.accel_dif_generate_copy -- common/autotest_common.sh@10 -- # set +x 00:07:46.837 ************************************ 00:07:46.837 END TEST accel_dif_generate_copy 00:07:46.837 ************************************ 00:07:46.837 15:49:15 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:46.837 15:49:15 accel -- accel/accel.sh@115 -- # [[ y == y ]] 00:07:46.837 15:49:15 accel -- accel/accel.sh@116 -- # run_test accel_comp accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:46.837 15:49:15 accel -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:07:46.837 15:49:15 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:46.837 15:49:15 accel -- common/autotest_common.sh@10 -- # set +x 00:07:46.837 ************************************ 00:07:46.837 START TEST accel_comp 00:07:46.837 ************************************ 00:07:46.837 15:49:15 accel.accel_comp -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:46.837 15:49:15 accel.accel_comp -- 
accel/accel.sh@16 -- # local accel_opc 00:07:46.837 15:49:15 accel.accel_comp -- accel/accel.sh@17 -- # local accel_module 00:07:46.837 15:49:15 accel.accel_comp -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:46.837 15:49:15 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:46.837 15:49:15 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:46.837 15:49:15 accel.accel_comp -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:46.837 15:49:15 accel.accel_comp -- accel/accel.sh@12 -- # build_accel_config 00:07:46.837 15:49:15 accel.accel_comp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:46.837 15:49:15 accel.accel_comp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:46.837 15:49:15 accel.accel_comp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:46.837 15:49:15 accel.accel_comp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:46.837 15:49:15 accel.accel_comp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:46.837 15:49:15 accel.accel_comp -- accel/accel.sh@40 -- # local IFS=, 00:07:46.837 15:49:15 accel.accel_comp -- accel/accel.sh@41 -- # jq -r . 00:07:46.837 [2024-07-15 15:49:15.559530] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 00:07:46.837 [2024-07-15 15:49:15.559574] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3601609 ] 00:07:46.837 EAL: No free 2048 kB hugepages reported on node 1 00:07:46.837 [2024-07-15 15:49:15.611967] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:46.837 [2024-07-15 15:49:15.685276] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:46.837 15:49:15 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:46.837 15:49:15 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:46.837 15:49:15 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:46.837 15:49:15 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:46.837 15:49:15 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:46.837 15:49:15 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:46.837 15:49:15 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:46.837 15:49:15 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:46.837 15:49:15 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:46.837 15:49:15 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:46.837 15:49:15 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:46.837 15:49:15 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:46.837 15:49:15 accel.accel_comp -- accel/accel.sh@20 -- # val=0x1 00:07:46.837 15:49:15 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:46.837 15:49:15 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:46.837 15:49:15 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:46.837 15:49:15 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:46.837 15:49:15 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:46.838 15:49:15 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:46.838 15:49:15 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:46.838 15:49:15 accel.accel_comp -- 
accel/accel.sh@20 -- # val= 00:07:46.838 15:49:15 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:46.838 15:49:15 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:46.838 15:49:15 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:46.838 15:49:15 accel.accel_comp -- accel/accel.sh@20 -- # val=compress 00:07:46.838 15:49:15 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:46.838 15:49:15 accel.accel_comp -- accel/accel.sh@23 -- # accel_opc=compress 00:07:46.838 15:49:15 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:46.838 15:49:15 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:46.838 15:49:15 accel.accel_comp -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:46.838 15:49:15 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:46.838 15:49:15 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:46.838 15:49:15 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:46.838 15:49:15 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:46.838 15:49:15 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:46.838 15:49:15 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:46.838 15:49:15 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:46.838 15:49:15 accel.accel_comp -- accel/accel.sh@20 -- # val=software 00:07:46.838 15:49:15 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:46.838 15:49:15 accel.accel_comp -- accel/accel.sh@22 -- # accel_module=software 00:07:46.838 15:49:15 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:46.838 15:49:15 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:46.838 15:49:15 accel.accel_comp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:46.838 15:49:15 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:46.838 15:49:15 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:46.838 15:49:15 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:46.838 15:49:15 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:07:46.838 15:49:15 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:46.838 15:49:15 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:46.838 15:49:15 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:46.838 15:49:15 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:07:46.838 15:49:15 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:46.838 15:49:15 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:46.838 15:49:15 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:46.838 15:49:15 accel.accel_comp -- accel/accel.sh@20 -- # val=1 00:07:46.838 15:49:15 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:46.838 15:49:15 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:46.838 15:49:15 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:46.838 15:49:15 accel.accel_comp -- accel/accel.sh@20 -- # val='1 seconds' 00:07:46.838 15:49:15 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:46.838 15:49:15 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:46.838 15:49:15 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:46.838 15:49:15 accel.accel_comp -- accel/accel.sh@20 -- # val=No 00:07:46.838 15:49:15 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:46.838 15:49:15 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:46.838 15:49:15 accel.accel_comp -- accel/accel.sh@19 -- # read -r 
var val 00:07:46.838 15:49:15 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:46.838 15:49:15 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:46.838 15:49:15 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:46.838 15:49:15 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:46.838 15:49:15 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:46.838 15:49:15 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:46.838 15:49:15 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:46.838 15:49:15 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:48.213 15:49:16 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:48.213 15:49:16 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:48.213 15:49:16 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:48.213 15:49:16 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:48.213 15:49:16 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:48.213 15:49:16 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:48.213 15:49:16 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:48.213 15:49:16 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:48.213 15:49:16 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:48.213 15:49:16 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:48.213 15:49:16 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:48.213 15:49:16 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:48.213 15:49:16 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:48.213 15:49:16 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:48.213 15:49:16 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:48.213 15:49:16 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:48.213 15:49:16 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:48.213 15:49:16 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:48.213 15:49:16 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:48.213 15:49:16 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:48.213 15:49:16 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:48.213 15:49:16 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:48.213 15:49:16 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:48.213 15:49:16 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:48.213 15:49:16 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:48.213 15:49:16 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n compress ]] 00:07:48.213 15:49:16 accel.accel_comp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:48.213 00:07:48.213 real 0m1.325s 00:07:48.213 user 0m1.223s 00:07:48.213 sys 0m0.117s 00:07:48.213 15:49:16 accel.accel_comp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:48.213 15:49:16 accel.accel_comp -- common/autotest_common.sh@10 -- # set +x 00:07:48.213 ************************************ 00:07:48.213 END TEST accel_comp 00:07:48.213 ************************************ 00:07:48.213 15:49:16 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:48.213 15:49:16 accel -- accel/accel.sh@117 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:48.213 15:49:16 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:07:48.213 15:49:16 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:48.213 15:49:16 accel -- 
common/autotest_common.sh@10 -- # set +x 00:07:48.213 ************************************ 00:07:48.213 START TEST accel_decomp 00:07:48.213 ************************************ 00:07:48.213 15:49:16 accel.accel_decomp -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:48.213 15:49:16 accel.accel_decomp -- accel/accel.sh@16 -- # local accel_opc 00:07:48.213 15:49:16 accel.accel_decomp -- accel/accel.sh@17 -- # local accel_module 00:07:48.213 15:49:16 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:48.213 15:49:16 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:48.213 15:49:16 accel.accel_decomp -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:48.213 15:49:16 accel.accel_decomp -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:48.213 15:49:16 accel.accel_decomp -- accel/accel.sh@12 -- # build_accel_config 00:07:48.213 15:49:16 accel.accel_decomp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:48.213 15:49:16 accel.accel_decomp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:48.213 15:49:16 accel.accel_decomp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:48.213 15:49:16 accel.accel_decomp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:48.213 15:49:16 accel.accel_decomp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:48.213 15:49:16 accel.accel_decomp -- accel/accel.sh@40 -- # local IFS=, 00:07:48.213 15:49:16 accel.accel_decomp -- accel/accel.sh@41 -- # jq -r . 00:07:48.213 [2024-07-15 15:49:16.960592] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 
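# Note covering the two cases that completed above: accel_dif_generate_copy
# (PASS, real 0m1.334s) is the same invocation with only the workload switched,
# while accel_comp (PASS, real 0m1.325s) is the first case to feed accel_perf
# an input file, pointing -l at test/accel/bib in the repo. Sketch, run from
# the spdk tree under the same assumptions as the dif_generate note earlier:
#
#   ./build/examples/accel_perf -t 1 -w dif_generate_copy
#   ./build/examples/accel_perf -t 1 -w compress -l ./test/accel/bib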
00:07:48.213 [2024-07-15 15:49:16.960643] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3601868 ] 00:07:48.213 EAL: No free 2048 kB hugepages reported on node 1 00:07:48.213 [2024-07-15 15:49:17.016285] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:48.213 [2024-07-15 15:49:17.089764] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:48.213 15:49:17 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:48.213 15:49:17 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:48.213 15:49:17 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:48.213 15:49:17 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:48.213 15:49:17 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:48.213 15:49:17 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:48.213 15:49:17 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:48.213 15:49:17 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:48.213 15:49:17 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:48.213 15:49:17 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:48.213 15:49:17 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:48.213 15:49:17 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:48.213 15:49:17 accel.accel_decomp -- accel/accel.sh@20 -- # val=0x1 00:07:48.213 15:49:17 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:48.213 15:49:17 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:48.213 15:49:17 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:48.213 15:49:17 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:48.213 15:49:17 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:48.213 15:49:17 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:48.214 15:49:17 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:48.214 15:49:17 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:48.214 15:49:17 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:48.214 15:49:17 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:48.214 15:49:17 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:48.214 15:49:17 accel.accel_decomp -- accel/accel.sh@20 -- # val=decompress 00:07:48.214 15:49:17 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:48.214 15:49:17 accel.accel_decomp -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:48.214 15:49:17 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:48.214 15:49:17 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:48.214 15:49:17 accel.accel_decomp -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:48.214 15:49:17 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:48.214 15:49:17 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:48.214 15:49:17 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:48.214 15:49:17 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:48.214 15:49:17 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:48.214 15:49:17 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:48.214 15:49:17 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:48.214 15:49:17 accel.accel_decomp -- accel/accel.sh@20 -- # 
val=software 00:07:48.214 15:49:17 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:48.214 15:49:17 accel.accel_decomp -- accel/accel.sh@22 -- # accel_module=software 00:07:48.214 15:49:17 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:48.214 15:49:17 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:48.214 15:49:17 accel.accel_decomp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:48.214 15:49:17 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:48.214 15:49:17 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:48.214 15:49:17 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:48.214 15:49:17 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:07:48.214 15:49:17 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:48.214 15:49:17 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:48.214 15:49:17 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:48.472 15:49:17 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:07:48.472 15:49:17 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:48.472 15:49:17 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:48.472 15:49:17 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:48.472 15:49:17 accel.accel_decomp -- accel/accel.sh@20 -- # val=1 00:07:48.472 15:49:17 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:48.472 15:49:17 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:48.472 15:49:17 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:48.472 15:49:17 accel.accel_decomp -- accel/accel.sh@20 -- # val='1 seconds' 00:07:48.472 15:49:17 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:48.472 15:49:17 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:48.472 15:49:17 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:48.472 15:49:17 accel.accel_decomp -- accel/accel.sh@20 -- # val=Yes 00:07:48.472 15:49:17 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:48.472 15:49:17 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:48.472 15:49:17 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:48.472 15:49:17 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:48.472 15:49:17 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:48.472 15:49:17 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:48.472 15:49:17 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:48.472 15:49:17 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:48.472 15:49:17 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:48.472 15:49:17 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:48.472 15:49:17 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:49.406 15:49:18 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:49.406 15:49:18 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:49.406 15:49:18 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:49.406 15:49:18 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:49.406 15:49:18 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:49.406 15:49:18 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:49.406 15:49:18 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:49.406 15:49:18 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:49.406 15:49:18 
accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:49.406 15:49:18 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:49.406 15:49:18 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:49.406 15:49:18 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:49.406 15:49:18 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:49.406 15:49:18 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:49.406 15:49:18 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:49.406 15:49:18 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:49.406 15:49:18 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:49.406 15:49:18 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:49.406 15:49:18 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:49.406 15:49:18 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:49.406 15:49:18 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:49.406 15:49:18 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:49.406 15:49:18 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:49.406 15:49:18 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:49.406 15:49:18 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:49.406 15:49:18 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:49.406 15:49:18 accel.accel_decomp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:49.406 00:07:49.406 real 0m1.341s 00:07:49.406 user 0m1.244s 00:07:49.406 sys 0m0.113s 00:07:49.406 15:49:18 accel.accel_decomp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:49.406 15:49:18 accel.accel_decomp -- common/autotest_common.sh@10 -- # set +x 00:07:49.406 ************************************ 00:07:49.406 END TEST accel_decomp 00:07:49.406 ************************************ 00:07:49.406 15:49:18 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:49.406 15:49:18 accel -- accel/accel.sh@118 -- # run_test accel_decomp_full accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:49.406 15:49:18 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:07:49.406 15:49:18 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:49.406 15:49:18 accel -- common/autotest_common.sh@10 -- # set +x 00:07:49.406 ************************************ 00:07:49.406 START TEST accel_decomp_full 00:07:49.406 ************************************ 00:07:49.406 15:49:18 accel.accel_decomp_full -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:49.406 15:49:18 accel.accel_decomp_full -- accel/accel.sh@16 -- # local accel_opc 00:07:49.406 15:49:18 accel.accel_decomp_full -- accel/accel.sh@17 -- # local accel_module 00:07:49.406 15:49:18 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:49.406 15:49:18 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:49.406 15:49:18 accel.accel_decomp_full -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:49.406 15:49:18 accel.accel_decomp_full -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:49.406 15:49:18 
accel.accel_decomp_full -- accel/accel.sh@12 -- # build_accel_config 00:07:49.406 15:49:18 accel.accel_decomp_full -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:49.406 15:49:18 accel.accel_decomp_full -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:49.406 15:49:18 accel.accel_decomp_full -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:49.406 15:49:18 accel.accel_decomp_full -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:49.406 15:49:18 accel.accel_decomp_full -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:49.406 15:49:18 accel.accel_decomp_full -- accel/accel.sh@40 -- # local IFS=, 00:07:49.406 15:49:18 accel.accel_decomp_full -- accel/accel.sh@41 -- # jq -r . 00:07:49.665 [2024-07-15 15:49:18.353106] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 00:07:49.665 [2024-07-15 15:49:18.353174] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3602133 ] 00:07:49.665 EAL: No free 2048 kB hugepages reported on node 1 00:07:49.665 [2024-07-15 15:49:18.408944] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:49.665 [2024-07-15 15:49:18.485995] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:49.665 15:49:18 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:49.665 15:49:18 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:49.665 15:49:18 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:49.665 15:49:18 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:49.665 15:49:18 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:49.665 15:49:18 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:49.665 15:49:18 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:49.665 15:49:18 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:49.665 15:49:18 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:49.665 15:49:18 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:49.665 15:49:18 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:49.665 15:49:18 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:49.665 15:49:18 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=0x1 00:07:49.665 15:49:18 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:49.665 15:49:18 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:49.665 15:49:18 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:49.665 15:49:18 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:49.665 15:49:18 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:49.665 15:49:18 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:49.665 15:49:18 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:49.665 15:49:18 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:49.665 15:49:18 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:49.665 15:49:18 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:49.665 15:49:18 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:49.665 15:49:18 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=decompress 00:07:49.665 15:49:18 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:49.665 15:49:18 
accel.accel_decomp_full -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:49.665 15:49:18 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:49.665 15:49:18 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:49.665 15:49:18 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='111250 bytes' 00:07:49.665 15:49:18 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:49.665 15:49:18 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:49.665 15:49:18 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:49.665 15:49:18 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:49.665 15:49:18 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:49.665 15:49:18 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:49.665 15:49:18 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:49.665 15:49:18 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=software 00:07:49.665 15:49:18 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:49.665 15:49:18 accel.accel_decomp_full -- accel/accel.sh@22 -- # accel_module=software 00:07:49.665 15:49:18 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:49.665 15:49:18 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:49.665 15:49:18 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:49.665 15:49:18 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:49.665 15:49:18 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:49.665 15:49:18 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:49.665 15:49:18 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:07:49.665 15:49:18 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:49.665 15:49:18 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:49.665 15:49:18 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:49.665 15:49:18 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:07:49.665 15:49:18 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:49.665 15:49:18 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:49.665 15:49:18 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:49.665 15:49:18 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=1 00:07:49.665 15:49:18 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:49.665 15:49:18 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:49.665 15:49:18 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:49.665 15:49:18 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='1 seconds' 00:07:49.665 15:49:18 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:49.665 15:49:18 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:49.665 15:49:18 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:49.665 15:49:18 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=Yes 00:07:49.665 15:49:18 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:49.665 15:49:18 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:49.665 15:49:18 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:49.665 15:49:18 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:49.665 15:49:18 accel.accel_decomp_full -- accel/accel.sh@21 -- # 
case "$var" in 00:07:49.665 15:49:18 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:49.665 15:49:18 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:49.665 15:49:18 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:49.665 15:49:18 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:49.665 15:49:18 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:49.665 15:49:18 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:51.038 15:49:19 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:51.038 15:49:19 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:51.038 15:49:19 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:51.038 15:49:19 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:51.038 15:49:19 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:51.038 15:49:19 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:51.038 15:49:19 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:51.038 15:49:19 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:51.038 15:49:19 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:51.038 15:49:19 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:51.038 15:49:19 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:51.038 15:49:19 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:51.038 15:49:19 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:51.038 15:49:19 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:51.039 15:49:19 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:51.039 15:49:19 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:51.039 15:49:19 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:51.039 15:49:19 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:51.039 15:49:19 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:51.039 15:49:19 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:51.039 15:49:19 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:51.039 15:49:19 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:51.039 15:49:19 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:51.039 15:49:19 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:51.039 15:49:19 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:51.039 15:49:19 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:51.039 15:49:19 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:51.039 00:07:51.039 real 0m1.354s 00:07:51.039 user 0m1.248s 00:07:51.039 sys 0m0.121s 00:07:51.039 15:49:19 accel.accel_decomp_full -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:51.039 15:49:19 accel.accel_decomp_full -- common/autotest_common.sh@10 -- # set +x 00:07:51.039 ************************************ 00:07:51.039 END TEST accel_decomp_full 00:07:51.039 ************************************ 00:07:51.039 15:49:19 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:51.039 15:49:19 accel -- accel/accel.sh@119 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:51.039 15:49:19 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 
00:07:51.039 15:49:19 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:51.039 15:49:19 accel -- common/autotest_common.sh@10 -- # set +x 00:07:51.039 ************************************ 00:07:51.039 START TEST accel_decomp_mcore 00:07:51.039 ************************************ 00:07:51.039 15:49:19 accel.accel_decomp_mcore -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:51.039 15:49:19 accel.accel_decomp_mcore -- accel/accel.sh@16 -- # local accel_opc 00:07:51.039 15:49:19 accel.accel_decomp_mcore -- accel/accel.sh@17 -- # local accel_module 00:07:51.039 15:49:19 accel.accel_decomp_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:51.039 15:49:19 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:51.039 15:49:19 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:51.039 15:49:19 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:51.039 15:49:19 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # build_accel_config 00:07:51.039 15:49:19 accel.accel_decomp_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:51.039 15:49:19 accel.accel_decomp_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:51.039 15:49:19 accel.accel_decomp_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:51.039 15:49:19 accel.accel_decomp_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:51.039 15:49:19 accel.accel_decomp_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:51.039 15:49:19 accel.accel_decomp_mcore -- accel/accel.sh@40 -- # local IFS=, 00:07:51.039 15:49:19 accel.accel_decomp_mcore -- accel/accel.sh@41 -- # jq -r . 00:07:51.039 [2024-07-15 15:49:19.742125] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 
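# Note on the call chain this log keeps repeating, e.g. the run_test line just
# above: run_test (common/autotest_common.sh) prints the START/END banners and
# times the named test; accel_test forwards its arguments; and accel.sh's
# accel_perf wrapper builds the JSON accel config (the build_accel_config,
# accel_json_cfg=() and jq -r . steps in the trace) and hands it to the binary
# on /dev/fd/62. A hedged, illustrative reconstruction of that wrapper -- not
# the real accel.sh body:
#
#   accel_perf() {
#       build_accel_config   # fills accel_json_cfg; empty here, no overrides set
#       local IFS=,          # matches accel.sh@40 in the trace
#       ./build/examples/accel_perf -c /dev/fd/62 "$@" \
#           62< <(echo "${accel_json_cfg[*]}" | jq -r .)
#   }
#
# The real script also records the expected module/opcode for the checks seen
# at accel.sh@27 above ([[ -n software ]], [[ -n decompress ]]).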
00:07:51.039 [2024-07-15 15:49:19.742163] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3602393 ] 00:07:51.039 EAL: No free 2048 kB hugepages reported on node 1 00:07:51.039 [2024-07-15 15:49:19.795029] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:51.039 [2024-07-15 15:49:19.875683] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:51.039 [2024-07-15 15:49:19.875781] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:51.039 [2024-07-15 15:49:19.875856] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:51.039 [2024-07-15 15:49:19.875857] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:51.039 15:49:19 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:51.039 15:49:19 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:51.039 15:49:19 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:51.039 15:49:19 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:51.039 15:49:19 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:51.039 15:49:19 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:51.039 15:49:19 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:51.039 15:49:19 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:51.039 15:49:19 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:51.039 15:49:19 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:51.039 15:49:19 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:51.039 15:49:19 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:51.039 15:49:19 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=0xf 00:07:51.039 15:49:19 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:51.039 15:49:19 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:51.039 15:49:19 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:51.039 15:49:19 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:51.039 15:49:19 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:51.039 15:49:19 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:51.039 15:49:19 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:51.039 15:49:19 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:51.039 15:49:19 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:51.039 15:49:19 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:51.039 15:49:19 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:51.039 15:49:19 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=decompress 00:07:51.039 15:49:19 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:51.039 15:49:19 accel.accel_decomp_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:51.039 15:49:19 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:51.039 15:49:19 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:51.039 15:49:19 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:51.039 15:49:19 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:51.039 15:49:19 
accel.accel_decomp_mcore -- accel/accel.sh@19-22 -- # (repetitive IFS=: / read -r var val / case "$var" xtrace collapsed) remaining config values read: software (accel_module=software), /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib, 32, 32, 1, '1 seconds', Yes
00:07:52.408 15:49:21 accel.accel_decomp_mcore -- accel/accel.sh@19-21 -- # (trailing empty val= reads collapsed)
00:07:52.408 15:49:21 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n software ]]
00:07:52.408 15:49:21 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]]
00:07:52.408 15:49:21 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:07:52.408
00:07:52.408 real 0m1.341s
00:07:52.408 user 0m4.569s
00:07:52.408 sys 0m0.116s
00:07:52.408 15:49:21 accel.accel_decomp_mcore -- common/autotest_common.sh@1124 -- # xtrace_disable
00:07:52.408 15:49:21 accel.accel_decomp_mcore -- common/autotest_common.sh@10 -- # set +x
00:07:52.408 ************************************
00:07:52.408 END TEST accel_decomp_mcore
00:07:52.408 ************************************
00:07:52.408 15:49:21 accel -- common/autotest_common.sh@1142 -- # return 0
00:07:52.408 15:49:21 accel -- accel/accel.sh@120 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf
00:07:52.408 ************************************
00:07:52.408 START TEST accel_decomp_full_mcore
00:07:52.408 ************************************
00:07:52.408 15:49:21 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf
00:07:52.408 15:49:21 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # build_accel_config (accel_json_cfg=(); no optional accel modules enabled; config emitted with jq -r .)
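(The accel_test wrapper above boils down to a single accel_perf invocation; a minimal sketch for reproducing it by hand against a built SPDK tree. Paths are the workspace paths from this log; dropping "-c" and letting accel_perf start with its default configuration is an assumption -- the harness feeds a JSON config on fd 62 instead:)

  #!/usr/bin/env bash
  # 1-second software-path decompress on cores 0-3 (-m 0xf, matching the four
  # reactors started below), full-size output buffers (-o 0), verify output (-y)
  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  "$SPDK/build/examples/accel_perf" -t 1 -w decompress \
      -l "$SPDK/test/accel/bib" -y -o 0 -m 0xf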
00:07:52.408 [2024-07-15 15:49:21.157114] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization...
00:07:52.408 [2024-07-15 15:49:21.157163] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3602667 ]
00:07:52.408 EAL: No free 2048 kB hugepages reported on node 1
00:07:52.408 [2024-07-15 15:49:21.211057] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4
00:07:52.408 [2024-07-15 15:49:21.286770] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:07:52.408 [2024-07-15 15:49:21.286869] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2
00:07:52.408 [2024-07-15 15:49:21.286940] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3
00:07:52.408 [2024-07-15 15:49:21.286942] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:07:52.408 15:49:21 accel.accel_decomp_full_mcore -- accel/accel.sh@19-23 -- # (repetitive IFS=: / read / case xtrace collapsed) config values read: 0xf, decompress (accel_opc=decompress), '111250 bytes', software (accel_module=software), /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib, 32, 32, 1, '1 seconds', Yes
00:07:53.600 15:49:22 accel.accel_decomp_full_mcore -- accel/accel.sh@19-21 -- # (trailing empty val= reads collapsed)
00:07:53.600 15:49:22 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n software ]]
00:07:53.600 15:49:22 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]]
00:07:53.600 15:49:22 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:07:53.600
00:07:53.600 real 0m1.355s
00:07:53.600 user 0m4.606s
00:07:53.600 sys 0m0.118s
00:07:53.600 15:49:22 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1124 -- # xtrace_disable
00:07:53.600 15:49:22 accel.accel_decomp_full_mcore -- common/autotest_common.sh@10 -- # set +x
00:07:53.600 ************************************
00:07:53.600 END TEST accel_decomp_full_mcore
00:07:53.600 ************************************
00:07:53.600 15:49:22 accel -- common/autotest_common.sh@1142 -- # return 0
00:07:53.600 15:49:22 accel -- accel/accel.sh@121 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2
00:07:53.859 ************************************
00:07:53.859 START TEST accel_decomp_mthread
00:07:53.859 ************************************
00:07:53.859 15:49:22 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2
00:07:53.859 15:49:22 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # build_accel_config (accel_json_cfg=(); no optional accel modules enabled; config emitted with jq -r .)
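(Every accel_perf invocation in this suite carries "-c /dev/fd/62": the wrapper builds a JSON accel config in build_accel_config and hands it to the app on an anonymous file descriptor. A hedged sketch of the same trick with bash process substitution -- the empty JSON object stands in for whatever build_accel_config actually emits, which this log does not show:)

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  # <(...) expands to a /dev/fd/NN path, the same mechanism as the harness's fd 62
  "$SPDK/build/examples/accel_perf" -c <(echo '{}') \
      -t 1 -w decompress -l "$SPDK/test/accel/bib" -y -T 2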
00:07:53.859 [2024-07-15 15:49:22.576409] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization...
00:07:53.859 [2024-07-15 15:49:22.576489] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3602943 ]
00:07:53.859 EAL: No free 2048 kB hugepages reported on node 1
00:07:53.859 [2024-07-15 15:49:22.633564] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:53.859 [2024-07-15 15:49:22.706213] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:07:53.859 15:49:22 accel.accel_decomp_mthread -- accel/accel.sh@19-23 -- # (repetitive IFS=: / read / case xtrace collapsed) config values read: 0x1, decompress (accel_opc=decompress), '4096 bytes', software (accel_module=software), /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib, 32, 32, 2, '1 seconds', Yes
00:07:55.234 15:49:23 accel.accel_decomp_mthread -- accel/accel.sh@19-21 -- # (trailing empty val= reads collapsed)
00:07:55.234 15:49:23 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n software ]]
00:07:55.234 15:49:23 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]]
00:07:55.234 15:49:23 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:07:55.234
00:07:55.234 real 0m1.345s
00:07:55.234 user 0m1.235s
00:07:55.234 sys 0m0.124s
00:07:55.234 15:49:23 accel.accel_decomp_mthread -- common/autotest_common.sh@1124 -- # xtrace_disable
00:07:55.234 15:49:23 accel.accel_decomp_mthread -- common/autotest_common.sh@10 -- # set +x
00:07:55.234 ************************************
00:07:55.234 END TEST accel_decomp_mthread
00:07:55.234 ************************************
00:07:55.234 15:49:23 accel -- common/autotest_common.sh@1142 -- # return 0
00:07:55.234 15:49:23 accel -- accel/accel.sh@122 -- # run_test accel_decomp_full_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2
00:07:55.234 ************************************
00:07:55.234 START TEST accel_decomp_full_mthread
00:07:55.234 ************************************
00:07:55.234 15:49:23 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2
00:07:55.234 15:49:23 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # build_accel_config (accel_json_cfg=(); no optional accel modules enabled; config emitted with jq -r .)
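(accel_decomp_mthread and accel_decomp_full_mthread differ only in the "-o 0" flag: judging from the values echoed by the config loops -- '4096 bytes' above versus '111250 bytes' below -- "-o 0" appears to make accel_perf decompress the whole bib test vector per operation instead of 4 KiB chunks. That reading is an inference from this log, not from accel_perf's documentation. Side by side:)

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  # two-thread run, 4096-byte operations (per the '4096 bytes' reads above)
  "$SPDK/build/examples/accel_perf" -t 1 -w decompress -l "$SPDK/test/accel/bib" -y -T 2
  # two-thread run, full-buffer operations (per the '111250 bytes' reads below)
  "$SPDK/build/examples/accel_perf" -t 1 -w decompress -l "$SPDK/test/accel/bib" -y -o 0 -T 2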
00:07:55.234 [2024-07-15 15:49:23.984389] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization...
00:07:55.234 [2024-07-15 15:49:23.984439] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3603209 ]
00:07:55.234 EAL: No free 2048 kB hugepages reported on node 1
00:07:55.234 [2024-07-15 15:49:24.041843] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:55.234 [2024-07-15 15:49:24.120601] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:07:55.234 15:49:24 accel.accel_decomp_full_mthread -- accel/accel.sh@19-23 -- # (repetitive IFS=: / read / case xtrace collapsed) config values read: 0x1, decompress (accel_opc=decompress), '111250 bytes', software (accel_module=software), /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib, 32, 32, 2, '1 seconds', Yes
00:07:56.425 15:49:25 accel.accel_decomp_full_mthread -- accel/accel.sh@19-21 -- # (trailing empty val= reads collapsed)
00:07:56.425 15:49:25 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n software ]]
00:07:56.425 15:49:25 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]]
00:07:56.425 15:49:25 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:07:56.425
00:07:56.425 real 0m1.371s
00:07:56.425 user 0m1.276s
00:07:56.425 sys 0m0.110s
00:07:56.425 15:49:25 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1124 -- # xtrace_disable
00:07:56.425 15:49:25 accel.accel_decomp_full_mthread -- common/autotest_common.sh@10 -- # set +x
00:07:56.425 ************************************
00:07:56.425 END TEST accel_decomp_full_mthread
00:07:56.425 ************************************
00:07:56.425 15:49:25 accel -- common/autotest_common.sh@1142 -- # return 0
00:07:56.425 15:49:25 accel -- accel/accel.sh@124 -- # [[ n == y ]]
00:07:56.425 15:49:25 accel -- accel/accel.sh@137 -- # run_test accel_dif_functional_tests /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62
00:07:56.682 ************************************
00:07:56.682 START TEST accel_dif_functional_tests
00:07:56.682 ************************************
00:07:56.683 15:49:25 accel -- accel/accel.sh@137 -- # build_accel_config (accel_json_cfg=(); no optional accel modules enabled; config emitted with jq -r .)
00:07:56.683 [2024-07-15 15:49:25.438139] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization...
00:07:56.683 [2024-07-15 15:49:25.438175] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3603512 ]
00:07:56.683 EAL: No free 2048 kB hugepages reported on node 1
00:07:56.683 [2024-07-15 15:49:25.491150] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3
00:07:56.683 [2024-07-15 15:49:25.564929] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:07:56.683 [2024-07-15 15:49:25.565025] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:07:56.683 [2024-07-15 15:49:25.565025] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2
00:07:56.941
00:07:56.941 CUnit - A unit testing framework for C - Version 2.1-3
00:07:56.941 http://cunit.sourceforge.net/
00:07:56.941
00:07:56.941 Suite: accel_dif
00:07:56.941 Test: verify: DIF generated, GUARD check ...passed
00:07:56.941 Test: verify: DIF generated, APPTAG check ...passed
00:07:56.941 Test: verify: DIF generated, REFTAG check ...passed
00:07:56.941 Test: verify: DIF not generated, GUARD check ...[2024-07-15 15:49:25.634094] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867
00:07:56.941 passed
00:07:56.941 Test: verify: DIF not generated, APPTAG check ...[2024-07-15 15:49:25.634138] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a
00:07:56.941 passed
00:07:56.941 Test: verify: DIF not generated, REFTAG check ...[2024-07-15 15:49:25.634157] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a
00:07:56.941 passed
00:07:56.941 Test: verify: APPTAG correct, APPTAG check ...passed
00:07:56.941 Test: verify: APPTAG incorrect, APPTAG check ...[2024-07-15 15:49:25.634198] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14
00:07:56.941 passed
00:07:56.941 Test: verify: APPTAG incorrect, no APPTAG check ...passed
00:07:56.941 Test: verify: REFTAG incorrect, REFTAG ignore ...passed
00:07:56.941 Test: verify: REFTAG_INIT correct, REFTAG check ...passed
00:07:56.941 Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-07-15 15:49:25.634319] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10
00:07:56.941 passed
00:07:56.941 Test: verify copy: DIF generated, GUARD check ...passed
00:07:56.941 Test: verify copy: DIF generated, APPTAG check ...passed
00:07:56.941 Test: verify copy: DIF generated, REFTAG check ...passed
00:07:56.941 Test: verify copy: DIF not generated, GUARD check ...[2024-07-15 15:49:25.634429] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867
00:07:56.941 passed
00:07:56.941 Test: verify copy: DIF not generated, APPTAG check ...[2024-07-15 15:49:25.634451] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a
00:07:56.941 passed
00:07:56.941 Test: verify copy: DIF not generated, REFTAG check ...[2024-07-15 15:49:25.634469] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a
00:07:56.941 passed
00:07:56.941 Test: generate copy: DIF generated, GUARD check ...passed
00:07:56.941 Test: generate copy: DIF generated, APTTAG check ...passed
00:07:56.941 Test: generate copy: DIF generated, REFTAG check ...passed
00:07:56.941 Test: generate copy: DIF generated, no GUARD check flag set ...passed
00:07:56.941 Test: generate copy: DIF generated, no APPTAG check flag set ...passed
00:07:56.941 Test: generate copy: DIF generated, no REFTAG check flag set ...passed
00:07:56.941 Test: generate copy: iovecs-len validate ...[2024-07-15 15:49:25.634634] dif.c:1190:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size.
00:07:56.941 passed
00:07:56.941 Test: generate copy: buffer alignment validate ...passed
00:07:56.941
00:07:56.941 Run Summary:    Type  Total    Ran Passed Failed Inactive
00:07:56.941               suites      1      1    n/a      0        0
00:07:56.941                tests     26     26     26      0        0
00:07:56.941              asserts    115    115    115      0      n/a
00:07:56.941
00:07:56.941 Elapsed time = 0.002 seconds
00:07:56.941
00:07:56.941 real 0m0.412s
00:07:56.941 user 0m0.627s
00:07:56.941 sys 0m0.142s
00:07:56.941 15:49:25 accel.accel_dif_functional_tests -- common/autotest_common.sh@1124 -- # xtrace_disable
00:07:56.941 15:49:25 accel.accel_dif_functional_tests -- common/autotest_common.sh@10 -- # set +x
00:07:56.941 ************************************
00:07:56.941 END TEST accel_dif_functional_tests
00:07:56.941 ************************************
00:07:56.941 15:49:25 accel -- common/autotest_common.sh@1142 -- # return 0
00:07:56.941
00:07:56.941 real 0m30.814s
00:07:56.941 user 0m34.763s
00:07:56.941 sys 0m4.068s
00:07:56.941 15:49:25 accel -- common/autotest_common.sh@1124 -- # xtrace_disable
00:07:56.941 15:49:25 accel -- common/autotest_common.sh@10 -- # set +x
00:07:56.942 ************************************
00:07:56.942 END TEST accel
00:07:56.942 ************************************
00:07:56.942 15:49:25 -- common/autotest_common.sh@1142 -- # return 0
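(All 26 DIF tests passed; the dif.c *ERROR* lines above are expected output from negative tests that deliberately corrupt the Guard, App Tag, or Ref Tag fields and assert that verification catches it. The suite is a standalone CUnit binary; a hedged sketch of invoking it outside the harness -- the empty JSON config fed via process substitution mirrors the harness's /dev/fd trick and is an assumption:)

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  # run the DIF functional tests directly; -c takes a JSON accel config path
  "$SPDK/test/accel/dif/dif" -c <(echo '{}')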
00:07:56.942 15:49:25 -- spdk/autotest.sh@184 -- # run_test accel_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh
00:07:57.201 ************************************
00:07:57.201 START TEST accel_rpc
00:07:57.201 ************************************
00:07:57.201 * Looking for test storage...
00:07:57.201 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel
00:07:57.201 15:49:25 accel_rpc -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR
00:07:57.201 15:49:25 accel_rpc -- accel/accel_rpc.sh@13 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --wait-for-rpc
00:07:57.201 15:49:25 accel_rpc -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=3603614
00:07:57.201 15:49:25 accel_rpc -- accel/accel_rpc.sh@15 -- # waitforlisten 3603614
00:07:57.201 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:07:57.201 [2024-07-15 15:49:26.041513] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization...
00:07:57.201 [2024-07-15 15:49:26.041563] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3603614 ]
00:07:57.201 EAL: No free 2048 kB hugepages reported on node 1
00:07:57.201 [2024-07-15 15:49:26.094442] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:57.460 [2024-07-15 15:49:26.169816] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:07:58.026 15:49:26 accel_rpc -- common/autotest_common.sh@862 -- # return 0   (spdk_tgt is up and listening on /var/tmp/spdk.sock)
00:07:58.026 15:49:26 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ y == y ]]
00:07:58.026 15:49:26 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]]
00:07:58.026 15:49:26 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ y == y ]]
00:07:58.026 15:49:26 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]]
00:07:58.026 15:49:26 accel_rpc -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite
00:07:58.026 ************************************
00:07:58.026 START TEST accel_assign_opcode
00:07:58.026 ************************************
00:07:58.026 15:49:26 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect
00:07:58.026 [2024-07-15 15:49:26.875909] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect
00:07:58.026 15:49:26 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software
00:07:58.026 [2024-07-15 15:49:26.883921] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software
00:07:58.026 15:49:26 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init
00:07:58.285 15:49:27 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments
00:07:58.285 15:49:27 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # jq -r .copy
00:07:58.285 15:49:27 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # grep software
00:07:58.285 software
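(The assign-opcode suite is pure JSON-RPC against a spdk_tgt started with --wait-for-rpc: assign the copy opcode to a nonexistent module, reassign it to software, finish init, then read the assignment back. The same sequence by hand, using only calls visible in this log -- rpc.py's default /var/tmp/spdk.sock socket is assumed:)

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  "$SPDK/build/bin/spdk_tgt" --wait-for-rpc &   # start the target; wait for /var/tmp/spdk.sock before the calls below
  "$SPDK/scripts/rpc.py" accel_assign_opc -o copy -m incorrect   # accepted pre-init, NOTICE only
  "$SPDK/scripts/rpc.py" accel_assign_opc -o copy -m software    # last assignment wins
  "$SPDK/scripts/rpc.py" framework_start_init                    # complete subsystem init
  "$SPDK/scripts/rpc.py" accel_get_opc_assignments | jq -r .copy # expect: software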
00:07:58.285
00:07:58.285 real 0m0.234s
00:07:58.285 user 0m0.042s
00:07:58.285 sys 0m0.014s
00:07:58.285 15:49:27 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1124 -- # xtrace_disable
00:07:58.285 15:49:27 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x
00:07:58.285 ************************************
00:07:58.285 END TEST accel_assign_opcode
00:07:58.285 ************************************
00:07:58.285 15:49:27 accel_rpc -- common/autotest_common.sh@1142 -- # return 0
00:07:58.285 15:49:27 accel_rpc -- accel/accel_rpc.sh@55 -- # killprocess 3603614   (kill -0 / uname / ps / kill / wait xtrace collapsed; target confirmed as reactor_0, not sudo)
00:07:58.285 killing process with pid 3603614
00:07:58.853
00:07:58.853 real 0m1.582s
00:07:58.853 user 0m1.664s
00:07:58.853 sys 0m0.419s
00:07:58.853 15:49:27 accel_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable
00:07:58.853 15:49:27 accel_rpc -- common/autotest_common.sh@10 -- # set +x
00:07:58.853 ************************************
00:07:58.853 END TEST accel_rpc
00:07:58.853 ************************************
00:07:58.853 15:49:27 -- common/autotest_common.sh@1142 -- # return 0
00:07:58.853 15:49:27 -- spdk/autotest.sh@185 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh
00:07:58.853 ************************************
00:07:58.853 START TEST app_cmdline
00:07:58.853 ************************************
00:07:58.853 * Looking for test storage...
00:07:58.853 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:07:58.853 15:49:27 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:07:58.853 15:49:27 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:07:58.853 15:49:27 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=3603921 00:07:58.853 15:49:27 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 3603921 00:07:58.853 15:49:27 app_cmdline -- common/autotest_common.sh@829 -- # '[' -z 3603921 ']' 00:07:58.853 15:49:27 app_cmdline -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:58.853 15:49:27 app_cmdline -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:58.853 15:49:27 app_cmdline -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:58.853 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:58.853 15:49:27 app_cmdline -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:58.853 15:49:27 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:58.853 [2024-07-15 15:49:27.680730] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 00:07:58.853 [2024-07-15 15:49:27.680781] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3603921 ] 00:07:58.853 EAL: No free 2048 kB hugepages reported on node 1 00:07:58.853 [2024-07-15 15:49:27.733999] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:59.111 [2024-07-15 15:49:27.816049] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:59.676 15:49:28 app_cmdline -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:59.676 15:49:28 app_cmdline -- common/autotest_common.sh@862 -- # return 0 00:07:59.676 15:49:28 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:07:59.933 { 00:07:59.933 "version": "SPDK v24.09-pre git sha1 a95bbf233", 00:07:59.933 "fields": { 00:07:59.933 "major": 24, 00:07:59.933 "minor": 9, 00:07:59.933 "patch": 0, 00:07:59.933 "suffix": "-pre", 00:07:59.933 "commit": "a95bbf233" 00:07:59.933 } 00:07:59.933 } 00:07:59.933 15:49:28 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:07:59.933 15:49:28 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:07:59.933 15:49:28 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:07:59.933 15:49:28 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:07:59.933 15:49:28 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:07:59.933 15:49:28 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:07:59.933 15:49:28 app_cmdline -- app/cmdline.sh@26 -- # sort 00:07:59.933 15:49:28 app_cmdline -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:59.933 15:49:28 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:59.933 15:49:28 app_cmdline -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:59.933 15:49:28 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:07:59.933 15:49:28 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods 
spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:07:59.933 15:49:28 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:59.933 15:49:28 app_cmdline -- common/autotest_common.sh@648 -- # local es=0 00:07:59.933 15:49:28 app_cmdline -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:59.933 15:49:28 app_cmdline -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:59.933 15:49:28 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:59.933 15:49:28 app_cmdline -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:59.933 15:49:28 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:59.933 15:49:28 app_cmdline -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:59.933 15:49:28 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:59.933 15:49:28 app_cmdline -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:59.933 15:49:28 app_cmdline -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:07:59.933 15:49:28 app_cmdline -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:59.933 request: 00:07:59.933 { 00:07:59.933 "method": "env_dpdk_get_mem_stats", 00:07:59.933 "req_id": 1 00:07:59.933 } 00:07:59.933 Got JSON-RPC error response 00:07:59.933 response: 00:07:59.933 { 00:07:59.933 "code": -32601, 00:07:59.933 "message": "Method not found" 00:07:59.933 } 00:07:59.933 15:49:28 app_cmdline -- common/autotest_common.sh@651 -- # es=1 00:07:59.933 15:49:28 app_cmdline -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:59.933 15:49:28 app_cmdline -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:59.933 15:49:28 app_cmdline -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:59.933 15:49:28 app_cmdline -- app/cmdline.sh@1 -- # killprocess 3603921 00:07:59.933 15:49:28 app_cmdline -- common/autotest_common.sh@948 -- # '[' -z 3603921 ']' 00:07:59.933 15:49:28 app_cmdline -- common/autotest_common.sh@952 -- # kill -0 3603921 00:07:59.933 15:49:28 app_cmdline -- common/autotest_common.sh@953 -- # uname 00:07:59.933 15:49:28 app_cmdline -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:59.933 15:49:28 app_cmdline -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3603921 00:08:00.191 15:49:28 app_cmdline -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:08:00.191 15:49:28 app_cmdline -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:08:00.191 15:49:28 app_cmdline -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3603921' 00:08:00.191 killing process with pid 3603921 00:08:00.191 15:49:28 app_cmdline -- common/autotest_common.sh@967 -- # kill 3603921 00:08:00.191 15:49:28 app_cmdline -- common/autotest_common.sh@972 -- # wait 3603921 00:08:00.449 00:08:00.449 real 0m1.632s 00:08:00.449 user 0m1.932s 00:08:00.449 sys 0m0.403s 00:08:00.449 15:49:29 app_cmdline -- common/autotest_common.sh@1124 -- # xtrace_disable 
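
The env_dpdk_get_mem_stats failure above is the whole point of the app_cmdline suite: the target was launched with --rpcs-allowed spdk_get_version,rpc_get_methods, so those two methods answer normally while every other method is rejected with JSON-RPC error -32601 (Method not found) and rpc.py exits non-zero, which the NOT wrapper converts into the es=1 seen in the trace. A hedged replay with this workspace's paths:

    # Replaying the allowlist check above; paths assumed from this workspace.
    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    $SPDK/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods &
    # ...wait for /var/tmp/spdk.sock to appear...
    $SPDK/scripts/rpc.py spdk_get_version          # on the allowlist: returns the version JSON
    $SPDK/scripts/rpc.py env_dpdk_get_mem_stats    # not allowed: -32601 "Method not found"
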
00:08:00.449 15:49:29 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:08:00.449 ************************************ 00:08:00.449 END TEST app_cmdline 00:08:00.449 ************************************ 00:08:00.449 15:49:29 -- common/autotest_common.sh@1142 -- # return 0 00:08:00.449 15:49:29 -- spdk/autotest.sh@186 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:08:00.449 15:49:29 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:00.449 15:49:29 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:00.449 15:49:29 -- common/autotest_common.sh@10 -- # set +x 00:08:00.449 ************************************ 00:08:00.449 START TEST version 00:08:00.449 ************************************ 00:08:00.449 15:49:29 version -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:08:00.449 * Looking for test storage... 00:08:00.449 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:08:00.449 15:49:29 version -- app/version.sh@17 -- # get_header_version major 00:08:00.449 15:49:29 version -- app/version.sh@14 -- # tr -d '"' 00:08:00.449 15:49:29 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:08:00.449 15:49:29 version -- app/version.sh@14 -- # cut -f2 00:08:00.449 15:49:29 version -- app/version.sh@17 -- # major=24 00:08:00.449 15:49:29 version -- app/version.sh@18 -- # get_header_version minor 00:08:00.449 15:49:29 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:08:00.449 15:49:29 version -- app/version.sh@14 -- # cut -f2 00:08:00.449 15:49:29 version -- app/version.sh@14 -- # tr -d '"' 00:08:00.449 15:49:29 version -- app/version.sh@18 -- # minor=9 00:08:00.449 15:49:29 version -- app/version.sh@19 -- # get_header_version patch 00:08:00.449 15:49:29 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:08:00.449 15:49:29 version -- app/version.sh@14 -- # cut -f2 00:08:00.449 15:49:29 version -- app/version.sh@14 -- # tr -d '"' 00:08:00.449 15:49:29 version -- app/version.sh@19 -- # patch=0 00:08:00.449 15:49:29 version -- app/version.sh@20 -- # get_header_version suffix 00:08:00.449 15:49:29 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:08:00.449 15:49:29 version -- app/version.sh@14 -- # tr -d '"' 00:08:00.449 15:49:29 version -- app/version.sh@14 -- # cut -f2 00:08:00.449 15:49:29 version -- app/version.sh@20 -- # suffix=-pre 00:08:00.449 15:49:29 version -- app/version.sh@22 -- # version=24.9 00:08:00.449 15:49:29 version -- app/version.sh@25 -- # (( patch != 0 )) 00:08:00.449 15:49:29 version -- app/version.sh@28 -- # version=24.9rc0 00:08:00.449 15:49:29 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:08:00.449 15:49:29 version -- app/version.sh@30 -- # python3 -c 'import spdk; 
print(spdk.__version__)' 00:08:00.707 15:49:29 version -- app/version.sh@30 -- # py_version=24.9rc0 00:08:00.707 15:49:29 version -- app/version.sh@31 -- # [[ 24.9rc0 == \2\4\.\9\r\c\0 ]] 00:08:00.707 00:08:00.707 real 0m0.151s 00:08:00.707 user 0m0.086s 00:08:00.707 sys 0m0.095s 00:08:00.707 15:49:29 version -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:00.707 15:49:29 version -- common/autotest_common.sh@10 -- # set +x 00:08:00.707 ************************************ 00:08:00.707 END TEST version 00:08:00.707 ************************************ 00:08:00.707 15:49:29 -- common/autotest_common.sh@1142 -- # return 0 00:08:00.707 15:49:29 -- spdk/autotest.sh@188 -- # '[' 0 -eq 1 ']' 00:08:00.707 15:49:29 -- spdk/autotest.sh@198 -- # uname -s 00:08:00.707 15:49:29 -- spdk/autotest.sh@198 -- # [[ Linux == Linux ]] 00:08:00.707 15:49:29 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:08:00.707 15:49:29 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:08:00.707 15:49:29 -- spdk/autotest.sh@211 -- # '[' 0 -eq 1 ']' 00:08:00.707 15:49:29 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:08:00.707 15:49:29 -- spdk/autotest.sh@260 -- # timing_exit lib 00:08:00.707 15:49:29 -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:00.707 15:49:29 -- common/autotest_common.sh@10 -- # set +x 00:08:00.707 15:49:29 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:08:00.707 15:49:29 -- spdk/autotest.sh@270 -- # '[' 0 -eq 1 ']' 00:08:00.707 15:49:29 -- spdk/autotest.sh@279 -- # '[' 1 -eq 1 ']' 00:08:00.707 15:49:29 -- spdk/autotest.sh@280 -- # export NET_TYPE 00:08:00.707 15:49:29 -- spdk/autotest.sh@283 -- # '[' tcp = rdma ']' 00:08:00.707 15:49:29 -- spdk/autotest.sh@286 -- # '[' tcp = tcp ']' 00:08:00.707 15:49:29 -- spdk/autotest.sh@287 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:08:00.707 15:49:29 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:00.707 15:49:29 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:00.707 15:49:29 -- common/autotest_common.sh@10 -- # set +x 00:08:00.707 ************************************ 00:08:00.707 START TEST nvmf_tcp 00:08:00.707 ************************************ 00:08:00.707 15:49:29 nvmf_tcp -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:08:00.707 * Looking for test storage... 00:08:00.707 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:08:00.707 15:49:29 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:08:00.707 15:49:29 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:08:00.707 15:49:29 nvmf_tcp -- nvmf/nvmf.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:00.707 15:49:29 nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:08:00.707 15:49:29 nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:00.707 15:49:29 nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:00.707 15:49:29 nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:00.707 15:49:29 nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:00.707 15:49:29 nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:00.707 15:49:29 nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:00.707 15:49:29 nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:00.707 15:49:29 nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:00.707 15:49:29 nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:00.707 15:49:29 nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:00.707 15:49:29 nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:08:00.707 15:49:29 nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:08:00.707 15:49:29 nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:00.707 15:49:29 nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:00.707 15:49:29 nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:00.707 15:49:29 nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:00.707 15:49:29 nvmf_tcp -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:00.707 15:49:29 nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:00.707 15:49:29 nvmf_tcp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:00.707 15:49:29 nvmf_tcp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:00.707 15:49:29 nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:00.707 15:49:29 nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:00.707 15:49:29 nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:00.707 15:49:29 nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:08:00.707 15:49:29 nvmf_tcp -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:00.707 15:49:29 nvmf_tcp -- nvmf/common.sh@47 -- # : 0 00:08:00.707 15:49:29 nvmf_tcp -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:00.707 15:49:29 nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:00.707 15:49:29 nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:00.707 15:49:29 nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:00.707 15:49:29 nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:00.707 15:49:29 nvmf_tcp -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:00.707 15:49:29 nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:00.707 15:49:29 nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:00.708 15:49:29 nvmf_tcp -- nvmf/nvmf.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:08:00.708 15:49:29 nvmf_tcp -- nvmf/nvmf.sh@18 -- # TEST_ARGS=("$@") 00:08:00.708 15:49:29 nvmf_tcp -- nvmf/nvmf.sh@20 -- # timing_enter target 00:08:00.708 15:49:29 nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:00.708 15:49:29 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:00.708 15:49:29 nvmf_tcp -- nvmf/nvmf.sh@22 -- # [[ 0 -eq 0 ]] 00:08:00.708 15:49:29 nvmf_tcp -- nvmf/nvmf.sh@23 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:08:00.708 15:49:29 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:00.708 15:49:29 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:00.708 15:49:29 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:00.965 ************************************ 00:08:00.965 START TEST nvmf_example 00:08:00.965 ************************************ 00:08:00.965 15:49:29 nvmf_tcp.nvmf_example -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:08:00.965 * Looking for test storage... 
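
Backtracking briefly: the version suite that ended a few entries above never queries a running target. app/version.sh recomputes every field from include/spdk/version.h, one grep/cut/tr pipeline per field, and then compares the result against python's spdk.__version__. Condensed from that trace (workspace path assumed):

    # Condensed form of app/version.sh's get_header_version, as traced above.
    H=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h
    major=$(grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' "$H" | cut -f2 | tr -d '"')
    minor=$(grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' "$H" | cut -f2 | tr -d '"')
    suffix=$(grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' "$H" | cut -f2 | tr -d '"')
    # The script then maps the -pre suffix to rc0, which is how 24.9rc0 was produced above.
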
00:08:00.965 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:00.965 15:49:29 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:00.965 15:49:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:08:00.965 15:49:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:00.965 15:49:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:00.965 15:49:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:00.965 15:49:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:00.965 15:49:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:00.965 15:49:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:00.966 15:49:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:00.966 15:49:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:00.966 15:49:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:00.966 15:49:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:00.966 15:49:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:08:00.966 15:49:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:08:00.966 15:49:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:00.966 15:49:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:00.966 15:49:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:00.966 15:49:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:00.966 15:49:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:00.966 15:49:29 nvmf_tcp.nvmf_example -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:00.966 15:49:29 nvmf_tcp.nvmf_example -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:00.966 15:49:29 nvmf_tcp.nvmf_example -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:00.966 15:49:29 nvmf_tcp.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:00.966 15:49:29 nvmf_tcp.nvmf_example -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:00.966 15:49:29 nvmf_tcp.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:00.966 15:49:29 nvmf_tcp.nvmf_example -- paths/export.sh@5 -- # export PATH 00:08:00.966 15:49:29 nvmf_tcp.nvmf_example -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:00.966 15:49:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@47 -- # : 0 00:08:00.966 15:49:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:00.966 15:49:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:00.966 15:49:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:00.966 15:49:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:00.966 15:49:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:00.966 15:49:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:00.966 15:49:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:00.966 15:49:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:00.966 15:49:29 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:08:00.966 15:49:29 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:08:00.966 15:49:29 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:08:00.966 15:49:29 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:08:00.966 15:49:29 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:08:00.966 15:49:29 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:08:00.966 15:49:29 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:08:00.966 15:49:29 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:08:00.966 15:49:29 nvmf_tcp.nvmf_example -- 
common/autotest_common.sh@722 -- # xtrace_disable 00:08:00.966 15:49:29 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:08:00.966 15:49:29 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:08:00.966 15:49:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:00.966 15:49:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:00.966 15:49:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:00.966 15:49:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:00.966 15:49:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:00.966 15:49:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:00.966 15:49:29 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:00.966 15:49:29 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:00.966 15:49:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:00.966 15:49:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:00.966 15:49:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@285 -- # xtrace_disable 00:08:00.966 15:49:29 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:08:06.264 15:49:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:06.264 15:49:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@291 -- # pci_devs=() 00:08:06.264 15:49:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:06.264 15:49:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:06.264 15:49:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:06.264 15:49:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:06.264 15:49:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:06.264 15:49:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@295 -- # net_devs=() 00:08:06.264 15:49:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:06.264 15:49:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@296 -- # e810=() 00:08:06.264 15:49:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@296 -- # local -ga e810 00:08:06.264 15:49:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@297 -- # x722=() 00:08:06.264 15:49:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@297 -- # local -ga x722 00:08:06.264 15:49:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@298 -- # mlx=() 00:08:06.264 15:49:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@298 -- # local -ga mlx 00:08:06.264 15:49:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:06.264 15:49:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:06.264 15:49:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:06.264 15:49:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:06.264 15:49:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:06.264 15:49:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:06.264 15:49:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:06.264 15:49:34 nvmf_tcp.nvmf_example -- 
nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:06.264 15:49:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:06.264 15:49:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:06.264 15:49:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:06.264 15:49:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:06.264 15:49:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:06.264 15:49:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:06.264 15:49:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:06.264 15:49:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:06.264 15:49:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:06.264 15:49:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:06.264 15:49:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:08:06.264 Found 0000:86:00.0 (0x8086 - 0x159b) 00:08:06.264 15:49:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:06.264 15:49:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:06.264 15:49:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:06.264 15:49:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:06.264 15:49:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:06.264 15:49:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:06.264 15:49:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:08:06.264 Found 0000:86:00.1 (0x8086 - 0x159b) 00:08:06.264 15:49:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:06.264 15:49:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:06.264 15:49:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:06.264 15:49:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:06.264 15:49:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:06.264 15:49:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:06.264 15:49:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:06.264 15:49:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:06.264 15:49:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:06.264 15:49:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:06.264 15:49:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:06.264 15:49:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:06.264 15:49:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:06.264 15:49:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:06.264 15:49:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:06.264 15:49:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:08:06.264 Found net devices under 
0000:86:00.0: cvl_0_0 00:08:06.264 15:49:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:06.264 15:49:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:06.264 15:49:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:06.264 15:49:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:06.264 15:49:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:06.264 15:49:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:06.264 15:49:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:06.264 15:49:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:06.264 15:49:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:08:06.264 Found net devices under 0000:86:00.1: cvl_0_1 00:08:06.264 15:49:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:06.264 15:49:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:06.264 15:49:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # is_hw=yes 00:08:06.264 15:49:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:06.264 15:49:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:06.264 15:49:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:06.264 15:49:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:06.264 15:49:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:06.264 15:49:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:06.264 15:49:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:06.264 15:49:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:06.264 15:49:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:06.264 15:49:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:06.264 15:49:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:06.264 15:49:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:06.264 15:49:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:06.264 15:49:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:06.264 15:49:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:06.264 15:49:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:06.264 15:49:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:06.264 15:49:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:06.264 15:49:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:06.264 15:49:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:06.264 15:49:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:06.264 15:49:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 
-p tcp --dport 4420 -j ACCEPT 00:08:06.264 15:49:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:06.264 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:06.264 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.143 ms 00:08:06.264 00:08:06.264 --- 10.0.0.2 ping statistics --- 00:08:06.264 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:06.264 rtt min/avg/max/mdev = 0.143/0.143/0.143/0.000 ms 00:08:06.265 15:49:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:06.265 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:06.265 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.057 ms 00:08:06.265 00:08:06.265 --- 10.0.0.1 ping statistics --- 00:08:06.265 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:06.265 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:08:06.265 15:49:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:06.265 15:49:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@422 -- # return 0 00:08:06.265 15:49:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:06.265 15:49:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:06.265 15:49:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:06.265 15:49:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:06.265 15:49:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:06.265 15:49:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:06.265 15:49:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:06.265 15:49:34 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:08:06.265 15:49:34 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:08:06.265 15:49:34 nvmf_tcp.nvmf_example -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:06.265 15:49:34 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:08:06.265 15:49:34 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:08:06.265 15:49:34 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:08:06.265 15:49:34 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=3607507 00:08:06.265 15:49:34 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:06.265 15:49:34 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:08:06.265 15:49:34 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 3607507 00:08:06.265 15:49:34 nvmf_tcp.nvmf_example -- common/autotest_common.sh@829 -- # '[' -z 3607507 ']' 00:08:06.265 15:49:34 nvmf_tcp.nvmf_example -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:06.265 15:49:34 nvmf_tcp.nvmf_example -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:06.265 15:49:34 nvmf_tcp.nvmf_example -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:06.265 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
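
Everything from ip netns add through the two pings above is the stock dual-port loopback rig for phy runs: one port of the NIC pair moves into a private namespace and becomes the target side (10.0.0.2), the other stays in the root namespace as the initiator (10.0.0.1), so NVMe/TCP traffic genuinely crosses the wire. Condensed, with the device names from this run (cvl_0_0/cvl_0_1 are the two ice-driver ports discovered earlier):

    # The dual-port rig traced above, condensed; device names are from this host.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                      # target port into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                            # initiator side, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP port
    ping -c 1 10.0.0.2                                             # root namespace -> target namespace
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1               # and back

The sub-millisecond round trips in the ping output confirm the rig before any NVMe traffic is attempted.
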
00:08:06.265 15:49:34 nvmf_tcp.nvmf_example -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:06.265 15:49:34 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:08:06.265 EAL: No free 2048 kB hugepages reported on node 1 00:08:06.832 15:49:35 nvmf_tcp.nvmf_example -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:06.832 15:49:35 nvmf_tcp.nvmf_example -- common/autotest_common.sh@862 -- # return 0 00:08:06.832 15:49:35 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:08:06.832 15:49:35 nvmf_tcp.nvmf_example -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:06.832 15:49:35 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:08:07.090 15:49:35 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:07.090 15:49:35 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:07.090 15:49:35 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:08:07.090 15:49:35 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:07.090 15:49:35 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:08:07.090 15:49:35 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:07.090 15:49:35 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:08:07.090 15:49:35 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:07.090 15:49:35 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:08:07.090 15:49:35 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:07.090 15:49:35 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:07.090 15:49:35 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:08:07.090 15:49:35 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:07.090 15:49:35 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:08:07.090 15:49:35 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:07.090 15:49:35 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:07.090 15:49:35 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:08:07.090 15:49:35 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:07.091 15:49:35 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:07.091 15:49:35 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:07.091 15:49:35 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:08:07.091 15:49:35 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:07.091 15:49:35 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:08:07.091 15:49:35 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:08:07.091 EAL: No free 2048 kB hugepages reported on node 1 
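
The example target above is assembled with five RPCs inside the namespace and then driven by spdk_nvme_perf from the root namespace; the results table below is the payoff of the whole test. Replayed in plain rpc.py form, values exactly as traced (-u sets the in-capsule data size; -q/-o/-w/-M/-t are queue depth, I/O size, workload, read share, and duration):

    # The bring-up and load generator traced above, replayed; rpc.py is scripts/rpc.py in this tree.
    rpc.py nvmf_create_transport -t tcp -o -u 8192       # TCP transport, 8 KiB in-capsule data
    rpc.py bdev_malloc_create 64 512                     # 64 MiB RAM bdev, 512 B blocks -> Malloc0
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    # 10 seconds of queue-depth-64, 4 KiB random I/O at a 30% read mix against the listener:
    spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'
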
00:08:19.293 Initializing NVMe Controllers 00:08:19.293 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:08:19.293 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:08:19.293 Initialization complete. Launching workers. 00:08:19.293 ======================================================== 00:08:19.293 Latency(us) 00:08:19.293 Device Information : IOPS MiB/s Average min max 00:08:19.293 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 18176.10 71.00 3521.76 705.20 15541.23 00:08:19.293 ======================================================== 00:08:19.293 Total : 18176.10 71.00 3521.76 705.20 15541.23 00:08:19.293 00:08:19.293 15:49:46 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:08:19.293 15:49:46 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:08:19.293 15:49:46 nvmf_tcp.nvmf_example -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:19.293 15:49:46 nvmf_tcp.nvmf_example -- nvmf/common.sh@117 -- # sync 00:08:19.293 15:49:46 nvmf_tcp.nvmf_example -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:19.293 15:49:46 nvmf_tcp.nvmf_example -- nvmf/common.sh@120 -- # set +e 00:08:19.293 15:49:46 nvmf_tcp.nvmf_example -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:19.293 15:49:46 nvmf_tcp.nvmf_example -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:19.293 rmmod nvme_tcp 00:08:19.293 rmmod nvme_fabrics 00:08:19.293 rmmod nvme_keyring 00:08:19.293 15:49:46 nvmf_tcp.nvmf_example -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:19.293 15:49:46 nvmf_tcp.nvmf_example -- nvmf/common.sh@124 -- # set -e 00:08:19.293 15:49:46 nvmf_tcp.nvmf_example -- nvmf/common.sh@125 -- # return 0 00:08:19.293 15:49:46 nvmf_tcp.nvmf_example -- nvmf/common.sh@489 -- # '[' -n 3607507 ']' 00:08:19.293 15:49:46 nvmf_tcp.nvmf_example -- nvmf/common.sh@490 -- # killprocess 3607507 00:08:19.293 15:49:46 nvmf_tcp.nvmf_example -- common/autotest_common.sh@948 -- # '[' -z 3607507 ']' 00:08:19.293 15:49:46 nvmf_tcp.nvmf_example -- common/autotest_common.sh@952 -- # kill -0 3607507 00:08:19.293 15:49:46 nvmf_tcp.nvmf_example -- common/autotest_common.sh@953 -- # uname 00:08:19.293 15:49:46 nvmf_tcp.nvmf_example -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:19.293 15:49:46 nvmf_tcp.nvmf_example -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3607507 00:08:19.293 15:49:46 nvmf_tcp.nvmf_example -- common/autotest_common.sh@954 -- # process_name=nvmf 00:08:19.293 15:49:46 nvmf_tcp.nvmf_example -- common/autotest_common.sh@958 -- # '[' nvmf = sudo ']' 00:08:19.293 15:49:46 nvmf_tcp.nvmf_example -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3607507' 00:08:19.293 killing process with pid 3607507 00:08:19.293 15:49:46 nvmf_tcp.nvmf_example -- common/autotest_common.sh@967 -- # kill 3607507 00:08:19.293 15:49:46 nvmf_tcp.nvmf_example -- common/autotest_common.sh@972 -- # wait 3607507 00:08:19.293 nvmf threads initialize successfully 00:08:19.293 bdev subsystem init successfully 00:08:19.293 created a nvmf target service 00:08:19.293 create targets's poll groups done 00:08:19.293 all subsystems of target started 00:08:19.293 nvmf target is running 00:08:19.293 all subsystems of target stopped 00:08:19.293 destroy targets's poll groups done 00:08:19.293 destroyed the nvmf target service 00:08:19.293 bdev subsystem finish successfully 00:08:19.294 nvmf threads destroy successfully 00:08:19.294 15:49:46 
nvmf_tcp.nvmf_example -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:19.294 15:49:46 nvmf_tcp.nvmf_example -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:19.294 15:49:46 nvmf_tcp.nvmf_example -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:19.294 15:49:46 nvmf_tcp.nvmf_example -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:19.294 15:49:46 nvmf_tcp.nvmf_example -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:19.294 15:49:46 nvmf_tcp.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:19.294 15:49:46 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:19.294 15:49:46 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:19.863 15:49:48 nvmf_tcp.nvmf_example -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:19.863 15:49:48 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:08:19.863 15:49:48 nvmf_tcp.nvmf_example -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:19.863 15:49:48 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:08:19.863 00:08:19.863 real 0m18.912s 00:08:19.863 user 0m45.997s 00:08:19.863 sys 0m5.195s 00:08:19.863 15:49:48 nvmf_tcp.nvmf_example -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:19.863 15:49:48 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:08:19.863 ************************************ 00:08:19.863 END TEST nvmf_example 00:08:19.863 ************************************ 00:08:19.863 15:49:48 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:08:19.863 15:49:48 nvmf_tcp -- nvmf/nvmf.sh@24 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:08:19.863 15:49:48 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:19.863 15:49:48 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:19.863 15:49:48 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:19.863 ************************************ 00:08:19.863 START TEST nvmf_filesystem 00:08:19.863 ************************************ 00:08:19.863 15:49:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:08:19.863 * Looking for test storage... 
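
Teardown in nvmftestfini above mirrors the setup: the kernel initiator stack is unloaded (nvme-tcp first, which pulls nvme_tcp, nvme_fabrics and nvme_keyring out, as the rmmod lines show, then nvme-fabrics), the target's namespace is removed, and the initiator address is flushed. Roughly, with the namespace removal written out as the plain iproute2 command the helper is assumed to wrap:

    # Teardown mirror of the rig, per the nvmftestfini trace above (condensed).
    modprobe -r nvme-tcp
    modprobe -r nvme-fabrics
    ip netns delete cvl_0_0_ns_spdk     # assumed body of the harness's _remove_spdk_ns
    ip -4 addr flush cvl_0_1
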
00:08:19.863 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:19.863 15:49:48 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:08:19.863 15:49:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:08:19.863 15:49:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:08:19.863 15:49:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:08:19.863 15:49:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:08:19.863 15:49:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:08:19.863 15:49:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:08:19.863 15:49:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:08:19.863 15:49:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:08:19.863 15:49:48 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:08:19.863 15:49:48 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:08:19.863 15:49:48 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:08:19.863 15:49:48 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:08:19.863 15:49:48 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:08:19.863 15:49:48 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:08:19.863 15:49:48 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:08:19.863 15:49:48 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:08:19.863 15:49:48 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:08:19.863 15:49:48 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:08:19.863 15:49:48 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:08:19.863 15:49:48 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:08:19.863 15:49:48 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:08:19.863 15:49:48 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:08:19.863 15:49:48 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:08:19.863 15:49:48 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:08:19.863 15:49:48 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:08:19.863 15:49:48 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:08:19.863 15:49:48 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:08:19.863 15:49:48 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:08:19.863 15:49:48 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:08:19.863 15:49:48 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_CET=n 00:08:19.863 15:49:48 
nvmf_tcp.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:08:19.863 15:49:48 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:08:19.863 15:49:48 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:08:19.863 15:49:48 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:08:19.863 15:49:48 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:08:19.863 15:49:48 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:08:19.863 15:49:48 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:08:19.863 15:49:48 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:08:19.863 15:49:48 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:08:19.863 15:49:48 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:08:19.863 15:49:48 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:08:19.863 15:49:48 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:08:19.863 15:49:48 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:08:19.863 15:49:48 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:08:19.863 15:49:48 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:08:19.863 15:49:48 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:08:19.863 15:49:48 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:08:19.863 15:49:48 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:08:19.863 15:49:48 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:08:19.863 15:49:48 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:08:19.863 15:49:48 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:08:19.863 15:49:48 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:08:19.863 15:49:48 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:08:19.863 15:49:48 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_DPDK_UADK=n 00:08:19.863 15:49:48 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_COVERAGE=y 00:08:19.863 15:49:48 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_RDMA=y 00:08:19.863 15:49:48 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:08:19.863 15:49:48 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_URING_PATH= 00:08:19.863 15:49:48 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_XNVME=n 00:08:19.863 15:49:48 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_VFIO_USER=y 00:08:19.863 15:49:48 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_ARCH=native 00:08:19.863 15:49:48 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_HAVE_EVP_MAC=y 00:08:19.863 15:49:48 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_ZNS=n 00:08:19.863 15:49:48 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_WERROR=y 00:08:19.863 15:49:48 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_HAVE_LIBBSD=n 
00:08:19.863 15:49:48 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_UBSAN=y 00:08:19.863 15:49:48 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_IPSEC_MB_DIR= 00:08:19.863 15:49:48 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_GOLANG=n 00:08:19.863 15:49:48 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_ISAL=y 00:08:19.863 15:49:48 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_IDXD_KERNEL=y 00:08:19.863 15:49:48 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_DPDK_LIB_DIR= 00:08:19.863 15:49:48 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_RDMA_PROV=verbs 00:08:19.863 15:49:48 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_APPS=y 00:08:19.863 15:49:48 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_SHARED=y 00:08:19.863 15:49:48 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_HAVE_KEYUTILS=y 00:08:19.863 15:49:48 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_FC_PATH= 00:08:19.863 15:49:48 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_PKG_CONFIG=n 00:08:19.863 15:49:48 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_FC=n 00:08:19.863 15:49:48 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_AVAHI=n 00:08:19.863 15:49:48 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_FIO_PLUGIN=y 00:08:19.863 15:49:48 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_RAID5F=n 00:08:19.863 15:49:48 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_EXAMPLES=y 00:08:19.863 15:49:48 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_TESTS=y 00:08:19.863 15:49:48 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_CRYPTO_MLX5=n 00:08:19.863 15:49:48 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_MAX_LCORES=128 00:08:19.863 15:49:48 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_IPSEC_MB=n 00:08:19.863 15:49:48 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_PGO_DIR= 00:08:19.863 15:49:48 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_DEBUG=y 00:08:19.863 15:49:48 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_DPDK_COMPRESSDEV=n 00:08:19.863 15:49:48 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CROSS_PREFIX= 00:08:19.863 15:49:48 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_URING=n 00:08:19.863 15:49:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:08:19.863 15:49:48 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:08:19.863 15:49:48 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:08:19.863 15:49:48 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:08:19.863 15:49:48 nvmf_tcp.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:08:19.863 15:49:48 nvmf_tcp.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:08:19.863 15:49:48 nvmf_tcp.nvmf_filesystem -- 
common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:08:19.863 15:49:48 nvmf_tcp.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:08:19.863 15:49:48 nvmf_tcp.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:08:19.864 15:49:48 nvmf_tcp.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:08:19.864 15:49:48 nvmf_tcp.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:08:19.864 15:49:48 nvmf_tcp.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:08:19.864 15:49:48 nvmf_tcp.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:08:19.864 15:49:48 nvmf_tcp.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:08:19.864 15:49:48 nvmf_tcp.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:08:19.864 15:49:48 nvmf_tcp.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:08:19.864 #define SPDK_CONFIG_H 00:08:19.864 #define SPDK_CONFIG_APPS 1 00:08:19.864 #define SPDK_CONFIG_ARCH native 00:08:19.864 #undef SPDK_CONFIG_ASAN 00:08:19.864 #undef SPDK_CONFIG_AVAHI 00:08:19.864 #undef SPDK_CONFIG_CET 00:08:19.864 #define SPDK_CONFIG_COVERAGE 1 00:08:19.864 #define SPDK_CONFIG_CROSS_PREFIX 00:08:19.864 #undef SPDK_CONFIG_CRYPTO 00:08:19.864 #undef SPDK_CONFIG_CRYPTO_MLX5 00:08:19.864 #undef SPDK_CONFIG_CUSTOMOCF 00:08:19.864 #undef SPDK_CONFIG_DAOS 00:08:19.864 #define SPDK_CONFIG_DAOS_DIR 00:08:19.864 #define SPDK_CONFIG_DEBUG 1 00:08:19.864 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:08:19.864 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:08:19.864 #define SPDK_CONFIG_DPDK_INC_DIR 00:08:19.864 #define SPDK_CONFIG_DPDK_LIB_DIR 00:08:19.864 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:08:19.864 #undef SPDK_CONFIG_DPDK_UADK 00:08:19.864 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:08:19.864 #define SPDK_CONFIG_EXAMPLES 1 00:08:19.864 #undef SPDK_CONFIG_FC 00:08:19.864 #define SPDK_CONFIG_FC_PATH 00:08:19.864 #define SPDK_CONFIG_FIO_PLUGIN 1 00:08:19.864 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:08:19.864 #undef SPDK_CONFIG_FUSE 00:08:19.864 #undef SPDK_CONFIG_FUZZER 00:08:19.864 #define SPDK_CONFIG_FUZZER_LIB 00:08:19.864 #undef SPDK_CONFIG_GOLANG 00:08:19.864 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:08:19.864 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:08:19.864 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:08:19.864 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:08:19.864 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:08:19.864 #undef SPDK_CONFIG_HAVE_LIBBSD 00:08:19.864 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:08:19.864 #define SPDK_CONFIG_IDXD 1 00:08:19.864 #define SPDK_CONFIG_IDXD_KERNEL 1 00:08:19.864 #undef SPDK_CONFIG_IPSEC_MB 00:08:19.864 #define SPDK_CONFIG_IPSEC_MB_DIR 00:08:19.864 #define SPDK_CONFIG_ISAL 1 00:08:19.864 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:08:19.864 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:08:19.864 #define SPDK_CONFIG_LIBDIR 00:08:19.864 #undef SPDK_CONFIG_LTO 00:08:19.864 #define SPDK_CONFIG_MAX_LCORES 128 00:08:19.864 #define SPDK_CONFIG_NVME_CUSE 1 00:08:19.864 #undef SPDK_CONFIG_OCF 00:08:19.864 #define SPDK_CONFIG_OCF_PATH 00:08:19.864 #define 
SPDK_CONFIG_OPENSSL_PATH 00:08:19.864 #undef SPDK_CONFIG_PGO_CAPTURE 00:08:19.864 #define SPDK_CONFIG_PGO_DIR 00:08:19.864 #undef SPDK_CONFIG_PGO_USE 00:08:19.864 #define SPDK_CONFIG_PREFIX /usr/local 00:08:19.864 #undef SPDK_CONFIG_RAID5F 00:08:19.864 #undef SPDK_CONFIG_RBD 00:08:19.864 #define SPDK_CONFIG_RDMA 1 00:08:19.864 #define SPDK_CONFIG_RDMA_PROV verbs 00:08:19.864 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:08:19.864 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:08:19.864 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:08:19.864 #define SPDK_CONFIG_SHARED 1 00:08:19.864 #undef SPDK_CONFIG_SMA 00:08:19.864 #define SPDK_CONFIG_TESTS 1 00:08:19.864 #undef SPDK_CONFIG_TSAN 00:08:19.864 #define SPDK_CONFIG_UBLK 1 00:08:19.864 #define SPDK_CONFIG_UBSAN 1 00:08:19.864 #undef SPDK_CONFIG_UNIT_TESTS 00:08:19.864 #undef SPDK_CONFIG_URING 00:08:19.864 #define SPDK_CONFIG_URING_PATH 00:08:19.864 #undef SPDK_CONFIG_URING_ZNS 00:08:19.864 #undef SPDK_CONFIG_USDT 00:08:19.864 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:08:19.864 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:08:19.864 #define SPDK_CONFIG_VFIO_USER 1 00:08:19.864 #define SPDK_CONFIG_VFIO_USER_DIR 00:08:19.864 #define SPDK_CONFIG_VHOST 1 00:08:19.864 #define SPDK_CONFIG_VIRTIO 1 00:08:19.864 #undef SPDK_CONFIG_VTUNE 00:08:19.864 #define SPDK_CONFIG_VTUNE_DIR 00:08:19.864 #define SPDK_CONFIG_WERROR 1 00:08:19.864 #define SPDK_CONFIG_WPDK_DIR 00:08:19.864 #undef SPDK_CONFIG_XNVME 00:08:19.864 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:08:19.864 15:49:48 nvmf_tcp.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:08:19.864 15:49:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:19.864 15:49:48 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:19.864 15:49:48 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:19.864 15:49:48 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:19.864 15:49:48 nvmf_tcp.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:19.864 15:49:48 nvmf_tcp.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:19.864 15:49:48 nvmf_tcp.nvmf_filesystem -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:19.864 15:49:48 nvmf_tcp.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:08:19.864 15:49:48 nvmf_tcp.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:19.864 15:49:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:08:19.864 15:49:48 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:08:19.864 15:49:48 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:08:19.864 15:49:48 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:08:19.864 15:49:48 nvmf_tcp.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:08:19.864 15:49:48 nvmf_tcp.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:08:19.864 15:49:48 nvmf_tcp.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:08:19.864 15:49:48 nvmf_tcp.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:08:19.864 15:49:48 nvmf_tcp.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:08:19.864 15:49:48 nvmf_tcp.nvmf_filesystem -- pm/common@68 -- # uname -s 00:08:19.864 15:49:48 nvmf_tcp.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:08:19.864 15:49:48 nvmf_tcp.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:08:19.864 15:49:48 nvmf_tcp.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:08:19.864 15:49:48 nvmf_tcp.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:08:19.864 15:49:48 nvmf_tcp.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:08:19.864 15:49:48 nvmf_tcp.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:08:19.864 15:49:48 nvmf_tcp.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:08:19.864 15:49:48 nvmf_tcp.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:08:19.864 15:49:48 nvmf_tcp.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:08:19.864 15:49:48 nvmf_tcp.nvmf_filesystem -- pm/common@78 -- # 
MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:08:19.864 15:49:48 nvmf_tcp.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:08:19.864 15:49:48 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:08:19.864 15:49:48 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:08:19.864 15:49:48 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:08:19.864 15:49:48 nvmf_tcp.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:08:19.864 15:49:48 nvmf_tcp.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:08:19.864 15:49:48 nvmf_tcp.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power ]] 00:08:19.864 15:49:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 0 00:08:19.864 15:49:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:08:19.864 15:49:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:08:19.864 15:49:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:08:19.864 15:49:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:08:19.864 15:49:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:08:19.864 15:49:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:08:19.864 15:49:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:08:19.864 15:49:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:08:19.864 15:49:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:08:19.864 15:49:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:08:19.864 15:49:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:08:19.864 15:49:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:08:19.864 15:49:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:08:19.864 15:49:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:08:19.864 15:49:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:08:19.864 15:49:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:08:19.864 15:49:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:08:19.864 15:49:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:08:19.864 15:49:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:08:19.864 15:49:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:08:19.865 15:49:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:08:19.865 15:49:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:08:19.865 15:49:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:08:19.865 15:49:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:08:19.865 15:49:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:08:19.865 15:49:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1 00:08:19.865 15:49:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@87 -- # export 
SPDK_TEST_NVME_CLI 00:08:19.865 15:49:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:08:19.865 15:49:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:08:19.865 15:49:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:08:19.865 15:49:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:08:19.865 15:49:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:08:19.865 15:49:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:08:19.865 15:49:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 1 00:08:19.865 15:49:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:08:19.865 15:49:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:08:19.865 15:49:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:08:19.865 15:49:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:08:19.865 15:49:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:08:19.865 15:49:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:08:19.865 15:49:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:08:19.865 15:49:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp 00:08:19.865 15:49:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:08:19.865 15:49:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:08:19.865 15:49:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:08:19.865 15:49:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:08:19.865 15:49:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:08:19.865 15:49:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:08:19.865 15:49:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:08:19.865 15:49:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:08:19.865 15:49:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_IOAT 00:08:19.865 15:49:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:08:19.865 15:49:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_BLOBFS 00:08:19.865 15:49:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:08:19.865 15:49:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_VHOST_INIT 00:08:19.865 15:49:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:08:19.865 15:49:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_LVOL 00:08:19.865 15:49:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:08:19.865 15:49:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_VBDEV_COMPRESS 00:08:19.865 15:49:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:08:19.865 15:49:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_RUN_ASAN 00:08:19.865 15:49:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 1 00:08:19.865 15:49:48 nvmf_tcp.nvmf_filesystem -- 
common/autotest_common.sh@123 -- # export SPDK_RUN_UBSAN 00:08:19.865 15:49:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 00:08:19.865 15:49:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_EXTERNAL_DPDK 00:08:19.865 15:49:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@126 -- # : 0 00:08:19.865 15:49:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_NON_ROOT 00:08:19.865 15:49:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:08:19.865 15:49:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_TEST_CRYPTO 00:08:19.865 15:49:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:08:19.865 15:49:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_FTL 00:08:19.865 15:49:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:08:19.865 15:49:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_OCF 00:08:19.865 15:49:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:08:19.865 15:49:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_VMD 00:08:19.865 15:49:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:08:19.865 15:49:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_OPAL 00:08:19.865 15:49:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@138 -- # : 00:08:19.865 15:49:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_NATIVE_DPDK 00:08:19.865 15:49:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@140 -- # : true 00:08:19.865 15:49:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_AUTOTEST_X 00:08:19.865 15:49:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@142 -- # : 0 00:08:19.865 15:49:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_TEST_RAID5 00:08:19.865 15:49:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:08:19.865 15:49:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:08:19.865 15:49:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0 00:08:19.865 15:49:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:08:19.865 15:49:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:08:19.865 15:49:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:08:19.865 15:49:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:08:19.865 15:49:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:08:19.865 15:49:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:08:19.865 15:49:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:08:19.865 15:49:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@154 -- # : e810 00:08:19.865 15:49:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:08:19.865 15:49:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:08:19.865 15:49:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:08:19.865 15:49:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:08:19.865 15:49:48 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:08:19.865 15:49:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:08:19.865 15:49:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:08:19.865 15:49:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:08:19.865 15:49:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL_DSA 00:08:19.865 15:49:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:08:19.865 15:49:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_IAA 00:08:19.865 15:49:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@167 -- # : 00:08:19.865 15:49:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@168 -- # export SPDK_TEST_FUZZER_TARGET 00:08:19.865 15:49:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@169 -- # : 0 00:08:19.865 15:49:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_NVMF_MDNS 00:08:19.865 15:49:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0 00:08:19.865 15:49:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_JSONRPC_GO_CLIENT 00:08:19.865 15:49:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@175 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:08:19.865 15:49:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@175 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:08:19.865 15:49:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@176 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:08:19.865 15:49:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@176 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:08:19.865 15:49:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@177 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:08:19.865 15:49:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@177 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:08:19.865 15:49:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@178 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:08:19.865 15:49:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@178 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:08:19.865 15:49:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@181 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:08:19.865 15:49:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@181 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:08:19.865 15:49:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@185 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:08:19.865 15:49:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@185 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:08:19.865 15:49:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@189 -- # export PYTHONDONTWRITEBYTECODE=1 00:08:19.865 15:49:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@189 -- # PYTHONDONTWRITEBYTECODE=1 00:08:19.865 15:49:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@193 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:08:19.865 15:49:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@193 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:08:19.865 15:49:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@194 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:08:19.866 15:49:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@194 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:08:19.866 15:49:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@198 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:08:19.866 15:49:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@199 -- # rm -rf /var/tmp/asan_suppression_file 00:08:19.866 15:49:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@200 -- # cat 00:08:19.866 15:49:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@236 -- # echo 
leak:libfuse3.so 00:08:19.866 15:49:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@238 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:08:19.866 15:49:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@238 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:08:19.866 15:49:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@240 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:08:19.866 15:49:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@240 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:08:19.866 15:49:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@242 -- # '[' -z /var/spdk/dependencies ']' 00:08:19.866 15:49:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@245 -- # export DEPENDENCY_DIR 00:08:19.866 15:49:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@249 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:08:19.866 15:49:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@249 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:08:19.866 15:49:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@250 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:08:19.866 15:49:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@250 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:08:19.866 15:49:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@253 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:08:19.866 15:49:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@253 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:08:19.866 15:49:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@254 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:08:19.866 15:49:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@254 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:08:19.866 15:49:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@256 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:08:19.866 15:49:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@256 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:08:19.866 15:49:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@259 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:08:19.866 15:49:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@259 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:08:19.866 15:49:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@262 -- # '[' 0 -eq 0 ']' 00:08:19.866 15:49:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@263 -- # export valgrind= 00:08:19.866 15:49:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@263 -- # valgrind= 00:08:19.866 15:49:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@269 -- # uname -s 00:08:19.866 15:49:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@269 -- # '[' Linux = Linux ']' 00:08:19.866 15:49:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@270 -- # HUGEMEM=4096 00:08:19.866 15:49:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@271 -- # export CLEAR_HUGE=yes 00:08:19.866 15:49:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@271 -- # CLEAR_HUGE=yes 00:08:19.866 15:49:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@272 -- # [[ 0 -eq 1 ]] 
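Note: the block that follows is autotest_common.sh's set_test_storage. It runs df -T, walks the mount table into the mounts/fss/sizes/avails/uses arrays, and picks the first candidate directory with at least the requested 2147483648 bytes (2 GiB) free, falling back from $testdir to a /tmp/spdk.XXXXXX scratch area. A minimal sketch of that selection, assuming GNU df (the awk filter is copied from the trace; the real script also handles tmpfs/overlay/ramfs cases):

    requested_size=2147483648                      # 2 GiB, as passed below
    target_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target

    # which mount point backs the target directory (awk filter as in the trace)
    mount=$(df "$target_dir" | awk '$1 !~ /Filesystem/{print $6}')
    # free bytes on that mount; assumption: GNU df with -B1/--output support
    target_space=$(df -B1 --output=avail "$target_dir" | tail -1)

    if (( target_space >= requested_size )); then
        export SPDK_TEST_STORAGE=$target_dir
        printf '* Found test storage at %s\n' "$SPDK_TEST_STORAGE"
    fi

In this run the probe resolves to the root overlay mount with roughly 189 GB available, so the target directory itself is used.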
00:08:19.866 15:49:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@272 -- # [[ 0 -eq 1 ]] 00:08:19.866 15:49:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@279 -- # MAKE=make 00:08:19.866 15:49:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@280 -- # MAKEFLAGS=-j96 00:08:19.866 15:49:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@296 -- # export HUGEMEM=4096 00:08:19.866 15:49:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@296 -- # HUGEMEM=4096 00:08:19.866 15:49:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@298 -- # NO_HUGE=() 00:08:19.866 15:49:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@299 -- # TEST_MODE= 00:08:19.866 15:49:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@300 -- # for i in "$@" 00:08:19.866 15:49:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@301 -- # case "$i" in 00:08:19.866 15:49:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@306 -- # TEST_TRANSPORT=tcp 00:08:19.866 15:49:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@318 -- # [[ -z 3609864 ]] 00:08:19.866 15:49:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@318 -- # kill -0 3609864 00:08:20.126 15:49:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1680 -- # set_test_storage 2147483648 00:08:20.126 15:49:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@328 -- # [[ -v testdir ]] 00:08:20.126 15:49:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@330 -- # local requested_size=2147483648 00:08:20.126 15:49:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@331 -- # local mount target_dir 00:08:20.126 15:49:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@333 -- # local -A mounts fss sizes avails uses 00:08:20.126 15:49:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@334 -- # local source fs size avail mount use 00:08:20.126 15:49:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@336 -- # local storage_fallback storage_candidates 00:08:20.126 15:49:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@338 -- # mktemp -udt spdk.XXXXXX 00:08:20.126 15:49:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@338 -- # storage_fallback=/tmp/spdk.MfbRdX 00:08:20.126 15:49:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@343 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:08:20.127 15:49:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@345 -- # [[ -n '' ]] 00:08:20.127 15:49:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@350 -- # [[ -n '' ]] 00:08:20.127 15:49:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@355 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.MfbRdX/tests/target /tmp/spdk.MfbRdX 00:08:20.127 15:49:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@358 -- # requested_size=2214592512 00:08:20.127 15:49:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:08:20.127 15:49:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@327 -- # df -T 00:08:20.127 15:49:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@327 -- # grep -v Filesystem 00:08:20.127 15:49:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=spdk_devtmpfs 00:08:20.127 15:49:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=devtmpfs 00:08:20.127 15:49:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # 
avails["$mount"]=67108864 00:08:20.127 15:49:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=67108864 00:08:20.127 15:49:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=0 00:08:20.127 15:49:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:08:20.127 15:49:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=/dev/pmem0 00:08:20.127 15:49:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=ext2 00:08:20.127 15:49:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=950202368 00:08:20.127 15:49:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=5284429824 00:08:20.127 15:49:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=4334227456 00:08:20.127 15:49:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:08:20.127 15:49:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=spdk_root 00:08:20.127 15:49:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=overlay 00:08:20.127 15:49:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=189573844992 00:08:20.127 15:49:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=195974299648 00:08:20.127 15:49:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=6400454656 00:08:20.127 15:49:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:08:20.127 15:49:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:08:20.127 15:49:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:08:20.127 15:49:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=97983774720 00:08:20.127 15:49:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=97987149824 00:08:20.127 15:49:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=3375104 00:08:20.127 15:49:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:08:20.127 15:49:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:08:20.127 15:49:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:08:20.127 15:49:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=39185485824 00:08:20.127 15:49:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=39194861568 00:08:20.127 15:49:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=9375744 00:08:20.127 15:49:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:08:20.127 15:49:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:08:20.127 15:49:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:08:20.127 15:49:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=97986461696 00:08:20.127 15:49:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=97987149824 00:08:20.127 15:49:48 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=688128 00:08:20.127 15:49:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:08:20.127 15:49:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:08:20.127 15:49:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:08:20.127 15:49:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=19597422592 00:08:20.127 15:49:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=19597426688 00:08:20.127 15:49:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=4096 00:08:20.127 15:49:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:08:20.127 15:49:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@366 -- # printf '* Looking for test storage...\n' 00:08:20.127 * Looking for test storage... 00:08:20.127 15:49:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@368 -- # local target_space new_size 00:08:20.127 15:49:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@369 -- # for target_dir in "${storage_candidates[@]}" 00:08:20.127 15:49:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@372 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:20.127 15:49:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@372 -- # awk '$1 !~ /Filesystem/{print $6}' 00:08:20.127 15:49:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@372 -- # mount=/ 00:08:20.127 15:49:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@374 -- # target_space=189573844992 00:08:20.127 15:49:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@375 -- # (( target_space == 0 || target_space < requested_size )) 00:08:20.127 15:49:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@378 -- # (( target_space >= requested_size )) 00:08:20.127 15:49:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ overlay == tmpfs ]] 00:08:20.127 15:49:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ overlay == ramfs ]] 00:08:20.127 15:49:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ / == / ]] 00:08:20.127 15:49:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@381 -- # new_size=8615047168 00:08:20.127 15:49:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@382 -- # (( new_size * 100 / sizes[/] > 95 )) 00:08:20.127 15:49:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@387 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:20.127 15:49:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@387 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:20.127 15:49:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@388 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:20.127 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:20.127 15:49:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@389 -- # return 0 00:08:20.127 15:49:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1682 -- # set -o errtrace 00:08:20.127 15:49:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1683 -- # shopt -s extdebug 00:08:20.127 15:49:48 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1684 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:08:20.127 15:49:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1686 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:08:20.127 15:49:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1687 -- # true 00:08:20.127 15:49:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1689 -- # xtrace_fd 00:08:20.127 15:49:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:08:20.127 15:49:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:08:20.127 15:49:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:08:20.127 15:49:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:08:20.127 15:49:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:08:20.127 15:49:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:08:20.127 15:49:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:08:20.127 15:49:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:08:20.127 15:49:48 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:20.127 15:49:48 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s 00:08:20.127 15:49:48 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:20.127 15:49:48 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:20.127 15:49:48 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:20.127 15:49:48 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:20.127 15:49:48 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:20.127 15:49:48 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:20.127 15:49:48 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:20.127 15:49:48 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:20.127 15:49:48 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:20.127 15:49:48 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:20.127 15:49:48 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:08:20.127 15:49:48 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:08:20.127 15:49:48 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:20.127 15:49:48 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:20.127 15:49:48 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:20.127 15:49:48 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:20.127 15:49:48 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:20.127 15:49:48 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:20.127 15:49:48 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 
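Note: each source of scripts/common.sh re-sources /etc/opt/spdk-pkgdep/paths/export.sh, which unconditionally prepends the golangci, Go, and protoc toolchain directories; that is why the PATH lines that follow carry the same /opt/... prefix run several times over. The duplicates are harmless (lookup stops at the first hit) but noisy. A hypothetical idempotent prepend that would keep PATH flat, offered only as an illustration and not as the project's actual export.sh:

    prepend_path() {
        case ":$PATH:" in
            *":$1:"*) ;;              # already present, keep PATH unchanged
            *) PATH="$1:$PATH" ;;
        esac
    }
    prepend_path /opt/golangci/1.54.2/bin
    prepend_path /opt/go/1.21.1/bin
    prepend_path /opt/protoc/21.7/bin
    export PATH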
00:08:20.127 15:49:48 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:20.127 15:49:48 nvmf_tcp.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:20.127 15:49:48 nvmf_tcp.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:20.128 15:49:48 nvmf_tcp.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:20.128 15:49:48 nvmf_tcp.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:08:20.128 15:49:48 nvmf_tcp.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:20.128 15:49:48 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@47 -- # : 0 00:08:20.128 15:49:48 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:20.128 15:49:48 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:20.128 15:49:48 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:20.128 15:49:48 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:20.128 15:49:48 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:20.128 15:49:48 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:20.128 15:49:48 
nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:20.128 15:49:48 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:20.128 15:49:48 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:08:20.128 15:49:48 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:08:20.128 15:49:48 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:08:20.128 15:49:48 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:20.128 15:49:48 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:20.128 15:49:48 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:20.128 15:49:48 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:20.128 15:49:48 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:20.128 15:49:48 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:20.128 15:49:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:20.128 15:49:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:20.128 15:49:48 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:20.128 15:49:48 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:20.128 15:49:48 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@285 -- # xtrace_disable 00:08:20.128 15:49:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:08:25.397 15:49:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:25.397 15:49:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@291 -- # pci_devs=() 00:08:25.397 15:49:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:25.397 15:49:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:25.397 15:49:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:25.397 15:49:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:25.397 15:49:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:25.397 15:49:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@295 -- # net_devs=() 00:08:25.397 15:49:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:25.397 15:49:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@296 -- # e810=() 00:08:25.397 15:49:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@296 -- # local -ga e810 00:08:25.397 15:49:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@297 -- # x722=() 00:08:25.397 15:49:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@297 -- # local -ga x722 00:08:25.397 15:49:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@298 -- # mlx=() 00:08:25.397 15:49:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@298 -- # local -ga mlx 00:08:25.397 15:49:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:25.397 15:49:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:25.397 15:49:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:25.397 15:49:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:25.397 15:49:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@308 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:25.397 15:49:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:25.397 15:49:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:25.397 15:49:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:25.397 15:49:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:25.397 15:49:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:25.397 15:49:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:25.397 15:49:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:25.397 15:49:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:25.397 15:49:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:25.397 15:49:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:25.397 15:49:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:25.397 15:49:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:25.397 15:49:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:25.397 15:49:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:08:25.397 Found 0000:86:00.0 (0x8086 - 0x159b) 00:08:25.397 15:49:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:25.397 15:49:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:25.397 15:49:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:25.397 15:49:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:25.397 15:49:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:25.397 15:49:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:25.397 15:49:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:08:25.397 Found 0000:86:00.1 (0x8086 - 0x159b) 00:08:25.397 15:49:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:25.397 15:49:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:25.397 15:49:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:25.397 15:49:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:25.397 15:49:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:25.397 15:49:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:25.397 15:49:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:25.397 15:49:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:25.398 15:49:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:25.398 15:49:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:25.398 15:49:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:25.398 15:49:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:25.398 15:49:53 
nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:25.398 15:49:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:25.398 15:49:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:25.398 15:49:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:08:25.398 Found net devices under 0000:86:00.0: cvl_0_0 00:08:25.398 15:49:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:25.398 15:49:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:25.398 15:49:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:25.398 15:49:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:25.398 15:49:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:25.398 15:49:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:25.398 15:49:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:25.398 15:49:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:25.398 15:49:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:08:25.398 Found net devices under 0000:86:00.1: cvl_0_1 00:08:25.398 15:49:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:25.398 15:49:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:25.398 15:49:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # is_hw=yes 00:08:25.398 15:49:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:25.398 15:49:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:25.398 15:49:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:25.398 15:49:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:25.398 15:49:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:25.398 15:49:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:25.398 15:49:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:25.398 15:49:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:25.398 15:49:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:25.398 15:49:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:25.398 15:49:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:25.398 15:49:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:25.398 15:49:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:25.398 15:49:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:25.398 15:49:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:25.398 15:49:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:25.398 15:49:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:25.398 15:49:53 nvmf_tcp.nvmf_filesystem -- 
nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:08:25.398 15:49:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up
00:08:25.398 15:49:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:08:25.398 15:49:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:08:25.398 15:49:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:08:25.398 15:49:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:08:25.398 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:08:25.398 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.183 ms
00:08:25.398
00:08:25.398 --- 10.0.0.2 ping statistics ---
00:08:25.398 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:08:25.398 rtt min/avg/max/mdev = 0.183/0.183/0.183/0.000 ms
00:08:25.398 15:49:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:08:25.398 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:08:25.398 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.135 ms
00:08:25.398
00:08:25.398 --- 10.0.0.1 ping statistics ---
00:08:25.398 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:08:25.398 rtt min/avg/max/mdev = 0.135/0.135/0.135/0.000 ms
00:08:25.398 15:49:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:08:25.398 15:49:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@422 -- # return 0
00:08:25.398 15:49:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:08:25.398 15:49:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:08:25.398 15:49:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:08:25.398 15:49:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:08:25.398 15:49:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:08:25.398 15:49:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:08:25.398 15:49:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:08:25.398 15:49:54 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0
00:08:25.398 15:49:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']'
00:08:25.398 15:49:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1105 -- # xtrace_disable
00:08:25.398 15:49:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x
00:08:25.398 ************************************
00:08:25.398 START TEST nvmf_filesystem_no_in_capsule
00:08:25.398 ************************************
00:08:25.398 15:49:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1123 -- # nvmf_filesystem_part 0
00:08:25.398 15:49:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0
00:08:25.398 15:49:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF
00:08:25.398 15:49:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:08:25.398 15:49:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@722 -- # xtrace_disable
00:08:25.398 15:49:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:08:25.398 15:49:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=3612828
00:08:25.398 15:49:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 3612828
00:08:25.398 15:49:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
00:08:25.398 15:49:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@829 -- # '[' -z 3612828 ']'
00:08:25.398 15:49:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:08:25.398 15:49:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@834 -- # local max_retries=100
00:08:25.398 15:49:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:08:25.398 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:08:25.398 15:49:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@838 -- # xtrace_disable
00:08:25.398 15:49:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:08:25.398 [2024-07-15 15:49:54.133766] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization...
00:08:25.398 [2024-07-15 15:49:54.133811] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:08:25.398 EAL: No free 2048 kB hugepages reported on node 1
00:08:25.398 [2024-07-15 15:49:54.194829] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4
00:08:25.398 [2024-07-15 15:49:54.276958] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:08:25.398 [2024-07-15 15:49:54.276995] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:08:25.398 [2024-07-15 15:49:54.277002] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:08:25.398 [2024-07-15 15:49:54.277008] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running.
00:08:25.399 [2024-07-15 15:49:54.277013] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
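For reference, the network bring-up traced above reduces to a short sequence of iproute2/iptables commands: the target-side port is moved into a private network namespace, each side gets one address of the 10.0.0.0/24 subnet, TCP port 4420 is opened for NVMe/TCP, and reachability is verified with one ping in each direction before the target starts inside the namespace. A minimal stand-alone sketch using this run's interface names (TGT_BIN is a placeholder for the nvmf_tgt binary path):

    # Target port gets its own network namespace; initiator port stays in the root namespace.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator address
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target address
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT       # open the NVMe/TCP port
    ping -c 1 10.0.0.2                                                 # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                   # target -> initiator
    ip netns exec cvl_0_0_ns_spdk "$TGT_BIN" -i 0 -e 0xFFFF -m 0xF     # launch the target in the namespace

Keeping the target in its own namespace gives the single test host two independent network stacks, so the kernel NVMe/TCP initiator and the SPDK target exercise a real TCP path between the two ports.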
00:08:25.399 [2024-07-15 15:49:54.277056] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:25.399 [2024-07-15 15:49:54.277152] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:25.399 [2024-07-15 15:49:54.277443] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:25.399 [2024-07-15 15:49:54.277445] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:26.329 15:49:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:26.329 15:49:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@862 -- # return 0 00:08:26.329 15:49:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:26.329 15:49:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:26.329 15:49:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:26.329 15:49:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:26.329 15:49:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:08:26.329 15:49:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:08:26.329 15:49:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:26.329 15:49:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:26.329 [2024-07-15 15:49:54.976017] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:26.329 15:49:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:26.329 15:49:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:08:26.329 15:49:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:26.329 15:49:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:26.329 Malloc1 00:08:26.329 15:49:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:26.329 15:49:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:08:26.329 15:49:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:26.329 15:49:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:26.329 15:49:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:26.329 15:49:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:08:26.329 15:49:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:26.329 15:49:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@10 -- # set +x 00:08:26.329 15:49:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:26.329 15:49:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:26.329 15:49:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:26.329 15:49:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:26.329 [2024-07-15 15:49:55.127398] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:26.329 15:49:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:26.329 15:49:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:08:26.329 15:49:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1 00:08:26.329 15:49:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info 00:08:26.329 15:49:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1380 -- # local bs 00:08:26.329 15:49:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1381 -- # local nb 00:08:26.329 15:49:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:08:26.329 15:49:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:26.329 15:49:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:26.329 15:49:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:26.329 15:49:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:08:26.329 { 00:08:26.329 "name": "Malloc1", 00:08:26.329 "aliases": [ 00:08:26.329 "f3f93ee1-be2a-4e2d-a8a2-134f2979c16c" 00:08:26.329 ], 00:08:26.329 "product_name": "Malloc disk", 00:08:26.329 "block_size": 512, 00:08:26.329 "num_blocks": 1048576, 00:08:26.329 "uuid": "f3f93ee1-be2a-4e2d-a8a2-134f2979c16c", 00:08:26.329 "assigned_rate_limits": { 00:08:26.329 "rw_ios_per_sec": 0, 00:08:26.329 "rw_mbytes_per_sec": 0, 00:08:26.329 "r_mbytes_per_sec": 0, 00:08:26.329 "w_mbytes_per_sec": 0 00:08:26.329 }, 00:08:26.329 "claimed": true, 00:08:26.329 "claim_type": "exclusive_write", 00:08:26.329 "zoned": false, 00:08:26.329 "supported_io_types": { 00:08:26.329 "read": true, 00:08:26.329 "write": true, 00:08:26.329 "unmap": true, 00:08:26.329 "flush": true, 00:08:26.329 "reset": true, 00:08:26.329 "nvme_admin": false, 00:08:26.329 "nvme_io": false, 00:08:26.329 "nvme_io_md": false, 00:08:26.329 "write_zeroes": true, 00:08:26.329 "zcopy": true, 00:08:26.329 "get_zone_info": false, 00:08:26.329 "zone_management": false, 00:08:26.329 "zone_append": false, 00:08:26.329 "compare": false, 00:08:26.329 "compare_and_write": false, 00:08:26.329 "abort": true, 00:08:26.329 "seek_hole": false, 00:08:26.329 "seek_data": false, 00:08:26.329 "copy": true, 00:08:26.329 "nvme_iov_md": false 00:08:26.329 }, 00:08:26.329 "memory_domains": [ 00:08:26.329 { 
00:08:26.329 "dma_device_id": "system", 00:08:26.329 "dma_device_type": 1 00:08:26.329 }, 00:08:26.329 { 00:08:26.329 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:26.329 "dma_device_type": 2 00:08:26.329 } 00:08:26.329 ], 00:08:26.329 "driver_specific": {} 00:08:26.329 } 00:08:26.329 ]' 00:08:26.329 15:49:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:08:26.329 15:49:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # bs=512 00:08:26.329 15:49:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:08:26.329 15:49:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576 00:08:26.329 15:49:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512 00:08:26.329 15:49:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # echo 512 00:08:26.329 15:49:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:08:26.329 15:49:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:27.698 15:49:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:08:27.698 15:49:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1198 -- # local i=0 00:08:27.698 15:49:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:08:27.698 15:49:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:08:27.698 15:49:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2 00:08:29.594 15:49:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:08:29.594 15:49:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:08:29.594 15:49:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:08:29.594 15:49:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:08:29.594 15:49:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:08:29.594 15:49:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # return 0 00:08:29.594 15:49:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:08:29.594 15:49:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:08:29.594 15:49:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:08:29.594 15:49:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # 
sec_size_to_bytes nvme0n1 00:08:29.594 15:49:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:08:29.594 15:49:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:08:29.594 15:49:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:08:29.594 15:49:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:08:29.594 15:49:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:08:29.594 15:49:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:08:29.594 15:49:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:08:29.851 15:49:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:08:30.783 15:49:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:08:31.717 15:50:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:08:31.717 15:50:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:08:31.717 15:50:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:08:31.717 15:50:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:31.718 15:50:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:31.718 ************************************ 00:08:31.718 START TEST filesystem_ext4 00:08:31.718 ************************************ 00:08:31.718 15:50:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create ext4 nvme0n1 00:08:31.718 15:50:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:08:31.718 15:50:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:31.718 15:50:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:08:31.718 15:50:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@924 -- # local fstype=ext4 00:08:31.718 15:50:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:08:31.718 15:50:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@926 -- # local i=0 00:08:31.718 15:50:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@927 -- # local force 00:08:31.718 15:50:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@929 -- # '[' ext4 = ext4 ']' 00:08:31.718 15:50:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@930 -- # force=-F 00:08:31.718 15:50:00 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@935 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:08:31.718 mke2fs 1.46.5 (30-Dec-2021) 00:08:31.718 Discarding device blocks: 0/522240 done 00:08:31.718 Creating filesystem with 522240 1k blocks and 130560 inodes 00:08:31.718 Filesystem UUID: b46fddc9-c682-461c-aabe-38fdf4917e7b 00:08:31.718 Superblock backups stored on blocks: 00:08:31.718 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:08:31.718 00:08:31.718 Allocating group tables: 0/64 done 00:08:31.718 Writing inode tables: 0/64 done 00:08:35.053 Creating journal (8192 blocks): done 00:08:35.053 Writing superblocks and filesystem accounting information: 0/64 done 00:08:35.053 00:08:35.053 15:50:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@943 -- # return 0 00:08:35.053 15:50:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:35.310 15:50:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:35.310 15:50:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:08:35.310 15:50:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:35.310 15:50:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:08:35.310 15:50:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:08:35.310 15:50:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:35.310 15:50:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 3612828 00:08:35.310 15:50:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:35.310 15:50:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:35.310 15:50:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:35.310 15:50:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:35.310 00:08:35.310 real 0m3.763s 00:08:35.310 user 0m0.020s 00:08:35.310 sys 0m0.070s 00:08:35.310 15:50:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:35.310 15:50:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:08:35.310 ************************************ 00:08:35.310 END TEST filesystem_ext4 00:08:35.310 ************************************ 00:08:35.310 15:50:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:08:35.310 15:50:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:08:35.310 15:50:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:08:35.310 15:50:04 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:35.310 15:50:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:35.568 ************************************ 00:08:35.568 START TEST filesystem_btrfs 00:08:35.568 ************************************ 00:08:35.568 15:50:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create btrfs nvme0n1 00:08:35.568 15:50:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:08:35.568 15:50:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:35.568 15:50:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:08:35.568 15:50:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@924 -- # local fstype=btrfs 00:08:35.568 15:50:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:08:35.568 15:50:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@926 -- # local i=0 00:08:35.568 15:50:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@927 -- # local force 00:08:35.568 15:50:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@929 -- # '[' btrfs = ext4 ']' 00:08:35.568 15:50:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@932 -- # force=-f 00:08:35.568 15:50:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@935 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:08:35.826 btrfs-progs v6.6.2 00:08:35.826 See https://btrfs.readthedocs.io for more information. 00:08:35.826 00:08:35.826 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:08:35.826 NOTE: several default settings have changed in version 5.15, please make sure
00:08:35.826 this does not affect your deployments:
00:08:35.826 - DUP for metadata (-m dup)
00:08:35.826 - enabled no-holes (-O no-holes)
00:08:35.826 - enabled free-space-tree (-R free-space-tree)
00:08:35.826
00:08:35.826 Label: (null)
00:08:35.826 UUID: 6a8c265e-3a65-448d-8b07-a9b99bbfab94
00:08:35.826 Node size: 16384
00:08:35.826 Sector size: 4096
00:08:35.826 Filesystem size: 510.00MiB
00:08:35.826 Block group profiles:
00:08:35.826 Data: single 8.00MiB
00:08:35.826 Metadata: DUP 32.00MiB
00:08:35.826 System: DUP 8.00MiB
00:08:35.826 SSD detected: yes
00:08:35.826 Zoned device: no
00:08:35.826 Incompat features: extref, skinny-metadata, no-holes, free-space-tree
00:08:35.826 Runtime features: free-space-tree
00:08:35.826 Checksum: crc32c
00:08:35.826 Number of devices: 1
00:08:35.826 Devices:
00:08:35.826 ID SIZE PATH
00:08:35.826 1 510.00MiB /dev/nvme0n1p1
00:08:35.826
00:08:35.826 15:50:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@943 -- # return 0
00:08:35.826 15:50:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device
00:08:36.757 15:50:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa
00:08:36.758 15:50:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync
00:08:36.758 15:50:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa
00:08:36.758 15:50:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync
00:08:36.758 15:50:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0
00:08:36.758 15:50:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device
00:08:36.758 15:50:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 3612828
00:08:36.758 15:50:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME
00:08:36.758 15:50:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1
00:08:36.758 15:50:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME
00:08:36.758 15:50:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1
00:08:36.758
00:08:36.758 real 0m1.253s
00:08:36.758 user 0m0.023s
00:08:36.758 sys 0m0.130s
00:08:36.758 15:50:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1124 -- # xtrace_disable
00:08:36.758 15:50:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x
00:08:36.758 ************************************
00:08:36.758 END TEST filesystem_btrfs
00:08:36.758 ************************************
00:08:36.758 15:50:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1142 -- # return 0
00:08:36.758 15:50:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1
00:08:36.758 15:50:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']'
00:08:36.758 15:50:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable
00:08:36.758 15:50:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:08:36.758 ************************************
00:08:36.758 START TEST filesystem_xfs
00:08:36.758 ************************************
00:08:36.758 15:50:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create xfs nvme0n1
00:08:36.758 15:50:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs
00:08:36.758 15:50:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1
00:08:36.758 15:50:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1
00:08:36.758 15:50:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@924 -- # local fstype=xfs
00:08:36.758 15:50:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1
00:08:36.758 15:50:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@926 -- # local i=0
00:08:36.758 15:50:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@927 -- # local force
00:08:36.758 15:50:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@929 -- # '[' xfs = ext4 ']'
00:08:36.758 15:50:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@932 -- # force=-f
00:08:36.758 15:50:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@935 -- # mkfs.xfs -f /dev/nvme0n1p1
00:08:36.758 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks
00:08:36.758 = sectsz=512 attr=2, projid32bit=1
00:08:36.758 = crc=1 finobt=1, sparse=1, rmapbt=0
00:08:36.758 = reflink=1 bigtime=1 inobtcount=1 nrext64=0
00:08:36.758 data = bsize=4096 blocks=130560, imaxpct=25
00:08:36.758 = sunit=0 swidth=0 blks
00:08:36.758 naming =version 2 bsize=4096 ascii-ci=0, ftype=1
00:08:36.758 log =internal log bsize=4096 blocks=16384, version=2
00:08:36.758 = sectsz=512 sunit=0 blks, lazy-count=1
00:08:36.758 realtime =none extsz=4096 blocks=0, rtextents=0
00:08:38.147 Discarding blocks...Done.
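Every filesystem_* subtest above runs the same create/use/verify cycle: make_filesystem picks the force flag (-F only for ext4, -f otherwise) and formats the partition, then the test mounts it, creates and deletes a file with syncs in between, unmounts, and confirms that both the target process and the block devices survived. A condensed sketch of one iteration with example values from this run (make_filesystem's retry loop is omitted):

    FSTYPE=xfs                  # ext4 | btrfs | xfs across the three subtests
    DEV=/dev/nvme0n1p1
    NVMFPID=3612828             # target pid for this half of the suite
    if [ "$FSTYPE" = ext4 ]; then force=-F; else force=-f; fi
    mkfs."$FSTYPE" $force "$DEV"
    mount "$DEV" /mnt/device
    touch /mnt/device/aaa
    sync
    rm /mnt/device/aaa
    sync
    umount /mnt/device
    kill -0 "$NVMFPID"          # the target must still be alive after the I/O
    lsblk -l -o NAME | grep -q -w nvme0n1
    lsblk -l -o NAME | grep -q -w nvme0n1p1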
00:08:38.147 15:50:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@943 -- # return 0 00:08:38.147 15:50:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:40.671 15:50:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:40.671 15:50:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:08:40.671 15:50:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:40.671 15:50:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:08:40.671 15:50:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:08:40.671 15:50:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:40.671 15:50:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 3612828 00:08:40.671 15:50:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:40.671 15:50:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:40.671 15:50:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:40.671 15:50:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:40.671 00:08:40.671 real 0m3.470s 00:08:40.671 user 0m0.035s 00:08:40.671 sys 0m0.061s 00:08:40.671 15:50:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:40.671 15:50:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:08:40.671 ************************************ 00:08:40.671 END TEST filesystem_xfs 00:08:40.671 ************************************ 00:08:40.671 15:50:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:08:40.671 15:50:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:08:40.671 15:50:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:08:40.671 15:50:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:40.671 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:40.671 15:50:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:40.671 15:50:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1219 -- # local i=0 00:08:40.671 15:50:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:08:40.671 15:50:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:40.671 15:50:09 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:08:40.671 15:50:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:40.671 15:50:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # return 0 00:08:40.671 15:50:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:40.671 15:50:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:40.671 15:50:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:40.671 15:50:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:40.671 15:50:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:08:40.671 15:50:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 3612828 00:08:40.671 15:50:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@948 -- # '[' -z 3612828 ']' 00:08:40.671 15:50:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@952 -- # kill -0 3612828 00:08:40.671 15:50:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@953 -- # uname 00:08:40.671 15:50:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:40.671 15:50:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3612828 00:08:40.671 15:50:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:08:40.671 15:50:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:08:40.671 15:50:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3612828' 00:08:40.671 killing process with pid 3612828 00:08:40.671 15:50:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@967 -- # kill 3612828 00:08:40.671 15:50:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@972 -- # wait 3612828 00:08:40.929 15:50:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:08:40.929 00:08:40.929 real 0m15.583s 00:08:40.929 user 1m1.311s 00:08:40.929 sys 0m1.284s 00:08:40.929 15:50:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:40.929 15:50:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:40.929 ************************************ 00:08:40.929 END TEST nvmf_filesystem_no_in_capsule 00:08:40.929 ************************************ 00:08:40.929 15:50:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1142 -- # return 0 00:08:40.929 15:50:09 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:08:40.929 15:50:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1099 -- # '[' 3 
-le 1 ']' 00:08:40.929 15:50:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:40.929 15:50:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:08:40.929 ************************************ 00:08:40.929 START TEST nvmf_filesystem_in_capsule 00:08:40.929 ************************************ 00:08:40.929 15:50:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1123 -- # nvmf_filesystem_part 4096 00:08:40.929 15:50:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:08:40.929 15:50:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:08:40.929 15:50:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:40.929 15:50:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:40.929 15:50:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:40.929 15:50:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=3615729 00:08:40.929 15:50:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 3615729 00:08:40.929 15:50:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:40.929 15:50:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@829 -- # '[' -z 3615729 ']' 00:08:40.929 15:50:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:40.929 15:50:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:40.929 15:50:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:40.929 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:40.929 15:50:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:40.929 15:50:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:40.929 [2024-07-15 15:50:09.792507] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 00:08:40.929 [2024-07-15 15:50:09.792556] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:40.929 EAL: No free 2048 kB hugepages reported on node 1 00:08:40.929 [2024-07-15 15:50:09.849760] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:41.187 [2024-07-15 15:50:09.921984] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:41.187 [2024-07-15 15:50:09.922039] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
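The half of the suite starting here repeats the same filesystem tests with one difference: nvmf_filesystem_part now receives an in-capsule data size of 4096 instead of 0, and that value is passed straight through to the transport, so small write payloads travel inside the NVMe/TCP command capsule rather than in a separate data transfer. A sketch of the target-side provisioning the trace performs next, with the flag values taken from this log (rpc_cmd is the harness's RPC helper driving SPDK's rpc.py interface):

    rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096   # -c 0 in the no-in-capsule half
    rpc_cmd bdev_malloc_create 512 512 -b Malloc1             # 512 MiB bdev with 512-byte blocks
    rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420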
00:08:41.187 [2024-07-15 15:50:09.922046] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:41.187 [2024-07-15 15:50:09.922052] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:41.187 [2024-07-15 15:50:09.922057] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:41.187 [2024-07-15 15:50:09.922103] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:41.187 [2024-07-15 15:50:09.922201] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:41.187 [2024-07-15 15:50:09.922299] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:41.187 [2024-07-15 15:50:09.922302] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:41.750 15:50:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:41.750 15:50:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@862 -- # return 0 00:08:41.750 15:50:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:41.750 15:50:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:41.751 15:50:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:41.751 15:50:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:41.751 15:50:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:08:41.751 15:50:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:08:41.751 15:50:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:41.751 15:50:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:41.751 [2024-07-15 15:50:10.633179] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:41.751 15:50:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:41.751 15:50:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:08:41.751 15:50:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:41.751 15:50:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:42.008 Malloc1 00:08:42.008 15:50:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:42.008 15:50:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:08:42.008 15:50:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:42.008 15:50:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:42.008 15:50:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:42.008 15:50:10 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:08:42.008 15:50:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:42.008 15:50:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:42.008 15:50:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:42.008 15:50:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:42.008 15:50:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:42.008 15:50:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:42.008 [2024-07-15 15:50:10.788921] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:42.008 15:50:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:42.008 15:50:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:08:42.008 15:50:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1 00:08:42.008 15:50:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info 00:08:42.008 15:50:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1380 -- # local bs 00:08:42.008 15:50:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1381 -- # local nb 00:08:42.008 15:50:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:08:42.008 15:50:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:42.008 15:50:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:42.008 15:50:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:42.008 15:50:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:08:42.008 { 00:08:42.008 "name": "Malloc1", 00:08:42.008 "aliases": [ 00:08:42.008 "d2fee260-45f4-467d-9c2a-fa0b142bd65d" 00:08:42.008 ], 00:08:42.008 "product_name": "Malloc disk", 00:08:42.008 "block_size": 512, 00:08:42.008 "num_blocks": 1048576, 00:08:42.008 "uuid": "d2fee260-45f4-467d-9c2a-fa0b142bd65d", 00:08:42.008 "assigned_rate_limits": { 00:08:42.008 "rw_ios_per_sec": 0, 00:08:42.008 "rw_mbytes_per_sec": 0, 00:08:42.008 "r_mbytes_per_sec": 0, 00:08:42.008 "w_mbytes_per_sec": 0 00:08:42.008 }, 00:08:42.008 "claimed": true, 00:08:42.008 "claim_type": "exclusive_write", 00:08:42.008 "zoned": false, 00:08:42.008 "supported_io_types": { 00:08:42.008 "read": true, 00:08:42.008 "write": true, 00:08:42.008 "unmap": true, 00:08:42.008 "flush": true, 00:08:42.008 "reset": true, 00:08:42.008 "nvme_admin": false, 00:08:42.008 "nvme_io": false, 00:08:42.008 "nvme_io_md": false, 00:08:42.008 "write_zeroes": true, 00:08:42.008 "zcopy": true, 00:08:42.008 "get_zone_info": false, 00:08:42.008 "zone_management": false, 00:08:42.008 
"zone_append": false, 00:08:42.008 "compare": false, 00:08:42.008 "compare_and_write": false, 00:08:42.008 "abort": true, 00:08:42.008 "seek_hole": false, 00:08:42.008 "seek_data": false, 00:08:42.008 "copy": true, 00:08:42.008 "nvme_iov_md": false 00:08:42.008 }, 00:08:42.008 "memory_domains": [ 00:08:42.008 { 00:08:42.008 "dma_device_id": "system", 00:08:42.008 "dma_device_type": 1 00:08:42.008 }, 00:08:42.008 { 00:08:42.008 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:42.008 "dma_device_type": 2 00:08:42.008 } 00:08:42.008 ], 00:08:42.009 "driver_specific": {} 00:08:42.009 } 00:08:42.009 ]' 00:08:42.009 15:50:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:08:42.009 15:50:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # bs=512 00:08:42.009 15:50:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:08:42.009 15:50:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576 00:08:42.009 15:50:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512 00:08:42.009 15:50:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # echo 512 00:08:42.009 15:50:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:08:42.009 15:50:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:43.373 15:50:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:08:43.373 15:50:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1198 -- # local i=0 00:08:43.373 15:50:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:08:43.373 15:50:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:08:43.373 15:50:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2 00:08:45.263 15:50:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:08:45.263 15:50:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:08:45.263 15:50:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:08:45.263 15:50:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:08:45.263 15:50:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:08:45.263 15:50:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # return 0 00:08:45.263 15:50:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:08:45.263 15:50:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 
00:08:45.263 15:50:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1
00:08:45.263 15:50:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1
00:08:45.263 15:50:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1
00:08:45.263 15:50:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]]
00:08:45.263 15:50:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912
00:08:45.263 15:50:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912
00:08:45.263 15:50:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device
00:08:45.263 15:50:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size ))
00:08:45.263 15:50:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100%
00:08:45.520 15:50:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe
00:08:46.452 15:50:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1
00:08:47.385 15:50:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']'
00:08:47.385 15:50:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1
00:08:47.385 15:50:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']'
00:08:47.385 15:50:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable
00:08:47.385 15:50:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:08:47.385 ************************************
00:08:47.385 START TEST filesystem_in_capsule_ext4
00:08:47.385 ************************************
00:08:47.385 15:50:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create ext4 nvme0n1
00:08:47.385 15:50:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4
00:08:47.385 15:50:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1
00:08:47.385 15:50:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1
00:08:47.385 15:50:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@924 -- # local fstype=ext4
00:08:47.385 15:50:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1
00:08:47.385 15:50:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@926 -- # local i=0
00:08:47.385 15:50:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@927 -- # local force
00:08:47.385 15:50:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@929 -- # '[' ext4 = ext4 ']'
00:08:47.385 15:50:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@930 -- # force=-F
00:08:47.385 15:50:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@935 -- # mkfs.ext4 -F /dev/nvme0n1p1
00:08:47.385 mke2fs 1.46.5 (30-Dec-2021)
00:08:47.385 Discarding device blocks: 0/522240 done
00:08:47.385 Creating filesystem with 522240 1k blocks and 130560 inodes
00:08:47.385 Filesystem UUID: d5946f27-62d0-4ebe-9b35-6046552d10a1
00:08:47.385 Superblock backups stored on blocks:
00:08:47.385 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409
00:08:47.385
00:08:47.385 Allocating group tables: 0/64 done
00:08:47.385 Writing inode tables: 0/64 done
00:08:48.759 Creating journal (8192 blocks): done
00:08:48.759 Writing superblocks and filesystem accounting information: 0/64 done
00:08:48.759
00:08:48.759 15:50:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@943 -- # return 0
00:08:48.759 15:50:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device
00:08:49.017 15:50:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa
00:08:49.017 15:50:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync
00:08:49.017 15:50:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa
00:08:49.017 15:50:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync
00:08:49.017 15:50:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0
00:08:49.017 15:50:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device
00:08:49.017 15:50:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@37 -- # kill -0 3615729
00:08:49.017 15:50:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME
00:08:49.017 15:50:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1
00:08:49.017 15:50:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME
00:08:49.017 15:50:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1
00:08:49.017
00:08:49.017 real 0m1.668s
00:08:49.017 user 0m0.027s
00:08:49.017 sys 0m0.064s
00:08:49.017 15:50:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1124 -- # xtrace_disable
00:08:49.017 15:50:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x
00:08:49.017 ************************************
00:08:49.017 END TEST filesystem_in_capsule_ext4
00:08:49.017 ************************************
00:08:49.017 15:50:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1142 -- # return 0
00:08:49.017 15:50:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1
00:08:49.017 15:50:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']'
00:08:49.017 15:50:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable
00:08:49.017 15:50:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:08:49.017 ************************************
00:08:49.017 START TEST filesystem_in_capsule_btrfs
00:08:49.017 ************************************
00:08:49.017 15:50:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create btrfs nvme0n1
00:08:49.017 15:50:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs
00:08:49.017 15:50:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1
00:08:49.017 15:50:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1
00:08:49.017 15:50:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@924 -- # local fstype=btrfs
00:08:49.017 15:50:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1
00:08:49.017 15:50:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@926 -- # local i=0
00:08:49.017 15:50:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@927 -- # local force
00:08:49.017 15:50:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@929 -- # '[' btrfs = ext4 ']'
00:08:49.017 15:50:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@932 -- # force=-f
00:08:49.017 15:50:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@935 -- # mkfs.btrfs -f /dev/nvme0n1p1
00:08:49.582 btrfs-progs v6.6.2
00:08:49.582 See https://btrfs.readthedocs.io for more information.
00:08:49.582
00:08:49.582 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ...
00:08:49.582 NOTE: several default settings have changed in version 5.15, please make sure 00:08:49.582 this does not affect your deployments: 00:08:49.582 - DUP for metadata (-m dup) 00:08:49.582 - enabled no-holes (-O no-holes) 00:08:49.582 - enabled free-space-tree (-R free-space-tree) 00:08:49.582 00:08:49.582 Label: (null) 00:08:49.582 UUID: ac305d47-3db9-4cb7-8afa-ef7180cf2530 00:08:49.582 Node size: 16384 00:08:49.582 Sector size: 4096 00:08:49.582 Filesystem size: 510.00MiB 00:08:49.582 Block group profiles: 00:08:49.582 Data: single 8.00MiB 00:08:49.582 Metadata: DUP 32.00MiB 00:08:49.582 System: DUP 8.00MiB 00:08:49.582 SSD detected: yes 00:08:49.582 Zoned device: no 00:08:49.582 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:08:49.582 Runtime features: free-space-tree 00:08:49.582 Checksum: crc32c 00:08:49.582 Number of devices: 1 00:08:49.582 Devices: 00:08:49.582 ID SIZE PATH 00:08:49.582 1 510.00MiB /dev/nvme0n1p1 00:08:49.582 00:08:49.582 15:50:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@943 -- # return 0 00:08:49.582 15:50:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:50.149 15:50:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:50.149 15:50:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:08:50.149 15:50:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:50.149 15:50:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:08:50.149 15:50:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:08:50.149 15:50:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:50.149 15:50:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 3615729 00:08:50.149 15:50:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:50.149 15:50:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:50.149 15:50:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:50.149 15:50:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:50.149 00:08:50.149 real 0m1.045s 00:08:50.149 user 0m0.033s 00:08:50.149 sys 0m0.122s 00:08:50.149 15:50:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:50.149 15:50:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 -- # set +x 00:08:50.149 ************************************ 00:08:50.149 END TEST filesystem_in_capsule_btrfs 00:08:50.149 ************************************ 00:08:50.149 15:50:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule 
-- common/autotest_common.sh@1142 -- # return 0 00:08:50.149 15:50:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:08:50.149 15:50:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:08:50.149 15:50:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:50.149 15:50:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:50.149 ************************************ 00:08:50.149 START TEST filesystem_in_capsule_xfs 00:08:50.149 ************************************ 00:08:50.149 15:50:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create xfs nvme0n1 00:08:50.149 15:50:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:08:50.149 15:50:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:50.149 15:50:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:08:50.149 15:50:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@924 -- # local fstype=xfs 00:08:50.149 15:50:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:08:50.149 15:50:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@926 -- # local i=0 00:08:50.149 15:50:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@927 -- # local force 00:08:50.149 15:50:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@929 -- # '[' xfs = ext4 ']' 00:08:50.149 15:50:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@932 -- # force=-f 00:08:50.149 15:50:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@935 -- # mkfs.xfs -f /dev/nvme0n1p1 00:08:50.437 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:08:50.437 = sectsz=512 attr=2, projid32bit=1 00:08:50.437 = crc=1 finobt=1, sparse=1, rmapbt=0 00:08:50.437 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:08:50.437 data = bsize=4096 blocks=130560, imaxpct=25 00:08:50.437 = sunit=0 swidth=0 blks 00:08:50.437 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:08:50.437 log =internal log bsize=4096 blocks=16384, version=2 00:08:50.437 = sectsz=512 sunit=0 blks, lazy-count=1 00:08:50.437 realtime =none extsz=4096 blocks=0, rtextents=0 00:08:51.377 Discarding blocks...Done. 
00:08:51.377 15:50:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@943 -- # return 0 00:08:51.377 15:50:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:53.276 15:50:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:53.276 15:50:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:08:53.276 15:50:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:53.276 15:50:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:08:53.276 15:50:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:08:53.276 15:50:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:53.276 15:50:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 3615729 00:08:53.276 15:50:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:53.276 15:50:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:53.276 15:50:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:53.276 15:50:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:53.276 00:08:53.276 real 0m2.929s 00:08:53.276 user 0m0.028s 00:08:53.276 sys 0m0.068s 00:08:53.276 15:50:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:53.276 15:50:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:08:53.276 ************************************ 00:08:53.276 END TEST filesystem_in_capsule_xfs 00:08:53.276 ************************************ 00:08:53.276 15:50:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:08:53.276 15:50:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:08:53.276 15:50:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:08:53.276 15:50:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:53.276 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:53.276 15:50:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:53.276 15:50:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1219 -- # local i=0 00:08:53.276 15:50:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:08:53.276 15:50:22 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:53.276 15:50:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:08:53.276 15:50:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:53.276 15:50:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # return 0 00:08:53.276 15:50:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:53.276 15:50:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:53.276 15:50:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:53.276 15:50:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:53.276 15:50:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:08:53.276 15:50:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 3615729 00:08:53.276 15:50:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@948 -- # '[' -z 3615729 ']' 00:08:53.276 15:50:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@952 -- # kill -0 3615729 00:08:53.276 15:50:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@953 -- # uname 00:08:53.276 15:50:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:53.276 15:50:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3615729 00:08:53.535 15:50:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:08:53.535 15:50:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:08:53.535 15:50:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3615729' 00:08:53.535 killing process with pid 3615729 00:08:53.535 15:50:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@967 -- # kill 3615729 00:08:53.535 15:50:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@972 -- # wait 3615729 00:08:53.794 15:50:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:08:53.794 00:08:53.794 real 0m12.825s 00:08:53.794 user 0m50.393s 00:08:53.794 sys 0m1.237s 00:08:53.794 15:50:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:53.794 15:50:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:53.794 ************************************ 00:08:53.794 END TEST nvmf_filesystem_in_capsule 00:08:53.794 ************************************ 00:08:53.794 15:50:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1142 -- # return 0 00:08:53.794 15:50:22 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:08:53.794 15:50:22 nvmf_tcp.nvmf_filesystem -- 
nvmf/common.sh@488 -- # nvmfcleanup 00:08:53.794 15:50:22 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@117 -- # sync 00:08:53.794 15:50:22 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:53.794 15:50:22 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@120 -- # set +e 00:08:53.794 15:50:22 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:53.794 15:50:22 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:53.794 rmmod nvme_tcp 00:08:53.794 rmmod nvme_fabrics 00:08:53.794 rmmod nvme_keyring 00:08:53.794 15:50:22 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:53.794 15:50:22 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@124 -- # set -e 00:08:53.794 15:50:22 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@125 -- # return 0 00:08:53.794 15:50:22 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:08:53.794 15:50:22 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:53.794 15:50:22 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:53.794 15:50:22 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:53.794 15:50:22 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:53.794 15:50:22 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:53.794 15:50:22 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:53.794 15:50:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:53.794 15:50:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:55.801 15:50:24 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:55.801 00:08:55.801 real 0m36.099s 00:08:55.801 user 1m53.395s 00:08:55.801 sys 0m6.533s 00:08:55.801 15:50:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:55.801 15:50:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:08:55.801 ************************************ 00:08:55.801 END TEST nvmf_filesystem 00:08:55.801 ************************************ 00:08:56.059 15:50:24 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:08:56.059 15:50:24 nvmf_tcp -- nvmf/nvmf.sh@25 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:08:56.059 15:50:24 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:56.059 15:50:24 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:56.059 15:50:24 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:56.059 ************************************ 00:08:56.059 START TEST nvmf_target_discovery 00:08:56.059 ************************************ 00:08:56.059 15:50:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:08:56.059 * Looking for test storage... 
00:08:56.059 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:56.059 15:50:24 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:56.059 15:50:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:08:56.059 15:50:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:56.059 15:50:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:56.059 15:50:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:56.059 15:50:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:56.059 15:50:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:56.059 15:50:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:56.059 15:50:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:56.059 15:50:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:56.059 15:50:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:56.059 15:50:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:56.059 15:50:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:08:56.059 15:50:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:08:56.059 15:50:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:56.059 15:50:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:56.059 15:50:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:56.059 15:50:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:56.059 15:50:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:56.059 15:50:24 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:56.059 15:50:24 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:56.059 15:50:24 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:56.059 15:50:24 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:56.059 15:50:24 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:56.059 15:50:24 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:56.059 15:50:24 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:08:56.059 15:50:24 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:56.059 15:50:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@47 -- # : 0 00:08:56.059 15:50:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:56.059 15:50:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:56.059 15:50:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:56.059 15:50:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:56.059 15:50:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:56.059 15:50:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:56.059 15:50:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:56.059 15:50:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:56.059 15:50:24 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:08:56.059 15:50:24 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:08:56.059 15:50:24 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:08:56.059 15:50:24 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:08:56.059 15:50:24 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:08:56.059 15:50:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:56.059 15:50:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:56.059 15:50:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@448 -- # 
prepare_net_devs 00:08:56.059 15:50:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:56.059 15:50:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:56.059 15:50:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:56.059 15:50:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:56.059 15:50:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:56.059 15:50:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:56.059 15:50:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:56.059 15:50:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@285 -- # xtrace_disable 00:08:56.059 15:50:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:01.323 15:50:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:01.323 15:50:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@291 -- # pci_devs=() 00:09:01.324 15:50:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:01.324 15:50:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:01.324 15:50:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:01.324 15:50:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:01.324 15:50:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:01.324 15:50:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@295 -- # net_devs=() 00:09:01.324 15:50:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:01.324 15:50:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@296 -- # e810=() 00:09:01.324 15:50:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@296 -- # local -ga e810 00:09:01.324 15:50:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@297 -- # x722=() 00:09:01.324 15:50:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@297 -- # local -ga x722 00:09:01.324 15:50:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@298 -- # mlx=() 00:09:01.324 15:50:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@298 -- # local -ga mlx 00:09:01.324 15:50:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:01.324 15:50:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:01.324 15:50:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:01.324 15:50:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:01.324 15:50:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:01.324 15:50:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:01.324 15:50:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:01.324 15:50:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:01.324 15:50:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:01.324 15:50:29 
nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:01.324 15:50:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:01.324 15:50:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:01.324 15:50:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:01.324 15:50:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:01.324 15:50:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:01.324 15:50:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:01.324 15:50:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:01.324 15:50:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:01.324 15:50:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:09:01.324 Found 0000:86:00.0 (0x8086 - 0x159b) 00:09:01.324 15:50:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:01.324 15:50:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:01.324 15:50:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:01.324 15:50:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:01.324 15:50:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:01.324 15:50:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:01.324 15:50:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:09:01.324 Found 0000:86:00.1 (0x8086 - 0x159b) 00:09:01.324 15:50:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:01.324 15:50:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:01.324 15:50:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:01.324 15:50:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:01.324 15:50:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:01.324 15:50:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:01.324 15:50:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:01.324 15:50:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:01.324 15:50:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:01.324 15:50:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:01.324 15:50:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:01.324 15:50:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:01.324 15:50:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:01.324 15:50:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:01.324 15:50:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:01.324 15:50:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@400 -- # 
echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:09:01.324 Found net devices under 0000:86:00.0: cvl_0_0 00:09:01.324 15:50:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:01.324 15:50:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:01.324 15:50:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:01.324 15:50:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:01.324 15:50:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:01.324 15:50:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:01.324 15:50:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:01.324 15:50:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:01.324 15:50:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:09:01.324 Found net devices under 0000:86:00.1: cvl_0_1 00:09:01.324 15:50:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:01.324 15:50:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:01.324 15:50:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # is_hw=yes 00:09:01.324 15:50:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:01.324 15:50:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:09:01.324 15:50:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:09:01.324 15:50:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:01.324 15:50:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:01.324 15:50:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:01.324 15:50:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:01.324 15:50:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:01.324 15:50:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:01.324 15:50:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:01.324 15:50:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:01.324 15:50:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:01.324 15:50:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:01.324 15:50:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:01.324 15:50:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:01.324 15:50:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:01.324 15:50:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:01.324 15:50:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:01.324 15:50:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@258 -- # 
ip link set cvl_0_1 up 00:09:01.324 15:50:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:01.324 15:50:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:01.324 15:50:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:01.324 15:50:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:01.324 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:01.324 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.169 ms 00:09:01.324 00:09:01.324 --- 10.0.0.2 ping statistics --- 00:09:01.324 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:01.324 rtt min/avg/max/mdev = 0.169/0.169/0.169/0.000 ms 00:09:01.324 15:50:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:01.324 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:01.324 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.214 ms 00:09:01.324 00:09:01.324 --- 10.0.0.1 ping statistics --- 00:09:01.324 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:01.324 rtt min/avg/max/mdev = 0.214/0.214/0.214/0.000 ms 00:09:01.324 15:50:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:01.324 15:50:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@422 -- # return 0 00:09:01.324 15:50:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:01.324 15:50:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:01.324 15:50:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:01.324 15:50:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:01.324 15:50:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:01.324 15:50:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:01.324 15:50:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:01.324 15:50:30 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:09:01.324 15:50:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:01.324 15:50:30 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:01.324 15:50:30 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:01.324 15:50:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@481 -- # nvmfpid=3621331 00:09:01.324 15:50:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@482 -- # waitforlisten 3621331 00:09:01.324 15:50:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:01.324 15:50:30 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@829 -- # '[' -z 3621331 ']' 00:09:01.324 15:50:30 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:01.324 15:50:30 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:01.324 15:50:30 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/spdk.sock...' 00:09:01.324 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:01.324 15:50:30 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:01.324 15:50:30 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:01.583 [2024-07-15 15:50:30.257625] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 00:09:01.583 [2024-07-15 15:50:30.257672] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:01.583 EAL: No free 2048 kB hugepages reported on node 1 00:09:01.583 [2024-07-15 15:50:30.317252] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:01.583 [2024-07-15 15:50:30.397500] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:01.583 [2024-07-15 15:50:30.397536] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:01.583 [2024-07-15 15:50:30.397543] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:01.583 [2024-07-15 15:50:30.397549] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:01.583 [2024-07-15 15:50:30.397554] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:01.583 [2024-07-15 15:50:30.397594] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:01.583 [2024-07-15 15:50:30.397690] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:01.583 [2024-07-15 15:50:30.397775] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:09:01.583 [2024-07-15 15:50:30.397776] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:02.148 15:50:31 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:02.148 15:50:31 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@862 -- # return 0 00:09:02.148 15:50:31 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:02.148 15:50:31 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:02.148 15:50:31 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:02.407 15:50:31 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:02.407 15:50:31 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:02.407 15:50:31 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:02.407 15:50:31 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:02.407 [2024-07-15 15:50:31.107292] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:02.407 15:50:31 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:02.407 15:50:31 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:09:02.407 15:50:31 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:09:02.407 15:50:31 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 
00:09:02.407 15:50:31 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:02.407 15:50:31 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:02.407 Null1 00:09:02.407 15:50:31 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:02.407 15:50:31 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:02.407 15:50:31 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:02.407 15:50:31 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:02.407 15:50:31 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:02.407 15:50:31 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:09:02.407 15:50:31 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:02.407 15:50:31 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:02.407 15:50:31 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:02.407 15:50:31 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:02.407 15:50:31 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:02.407 15:50:31 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:02.407 [2024-07-15 15:50:31.152736] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:02.407 15:50:31 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:02.407 15:50:31 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:09:02.407 15:50:31 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:09:02.407 15:50:31 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:02.407 15:50:31 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:02.407 Null2 00:09:02.407 15:50:31 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:02.407 15:50:31 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:09:02.407 15:50:31 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:02.407 15:50:31 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:02.407 15:50:31 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:02.407 15:50:31 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:09:02.407 15:50:31 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:02.407 15:50:31 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:02.407 15:50:31 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:02.407 15:50:31 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:09:02.407 15:50:31 
nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:02.407 15:50:31 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:02.407 15:50:31 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:02.407 15:50:31 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:09:02.407 15:50:31 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:09:02.407 15:50:31 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:02.407 15:50:31 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:02.407 Null3 00:09:02.407 15:50:31 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:02.407 15:50:31 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:09:02.407 15:50:31 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:02.407 15:50:31 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:02.407 15:50:31 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:02.407 15:50:31 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:09:02.407 15:50:31 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:02.407 15:50:31 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:02.407 15:50:31 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:02.407 15:50:31 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:09:02.407 15:50:31 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:02.407 15:50:31 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:02.407 15:50:31 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:02.407 15:50:31 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:09:02.407 15:50:31 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:09:02.407 15:50:31 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:02.407 15:50:31 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:02.407 Null4 00:09:02.407 15:50:31 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:02.407 15:50:31 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:09:02.407 15:50:31 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:02.407 15:50:31 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:02.407 15:50:31 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:02.407 15:50:31 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:09:02.407 15:50:31 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:02.407 15:50:31 
nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:02.407 15:50:31 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:02.407 15:50:31 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:09:02.407 15:50:31 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:02.407 15:50:31 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:02.407 15:50:31 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:02.407 15:50:31 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:02.407 15:50:31 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:02.407 15:50:31 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:02.407 15:50:31 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:02.408 15:50:31 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:09:02.408 15:50:31 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:02.408 15:50:31 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:02.408 15:50:31 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:02.408 15:50:31 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 4420 00:09:02.666 00:09:02.666 Discovery Log Number of Records 6, Generation counter 6 00:09:02.666 =====Discovery Log Entry 0====== 00:09:02.666 trtype: tcp 00:09:02.666 adrfam: ipv4 00:09:02.666 subtype: current discovery subsystem 00:09:02.666 treq: not required 00:09:02.666 portid: 0 00:09:02.666 trsvcid: 4420 00:09:02.666 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:09:02.666 traddr: 10.0.0.2 00:09:02.666 eflags: explicit discovery connections, duplicate discovery information 00:09:02.666 sectype: none 00:09:02.666 =====Discovery Log Entry 1====== 00:09:02.666 trtype: tcp 00:09:02.666 adrfam: ipv4 00:09:02.666 subtype: nvme subsystem 00:09:02.666 treq: not required 00:09:02.666 portid: 0 00:09:02.666 trsvcid: 4420 00:09:02.666 subnqn: nqn.2016-06.io.spdk:cnode1 00:09:02.666 traddr: 10.0.0.2 00:09:02.666 eflags: none 00:09:02.666 sectype: none 00:09:02.666 =====Discovery Log Entry 2====== 00:09:02.666 trtype: tcp 00:09:02.666 adrfam: ipv4 00:09:02.666 subtype: nvme subsystem 00:09:02.666 treq: not required 00:09:02.666 portid: 0 00:09:02.666 trsvcid: 4420 00:09:02.666 subnqn: nqn.2016-06.io.spdk:cnode2 00:09:02.666 traddr: 10.0.0.2 00:09:02.666 eflags: none 00:09:02.666 sectype: none 00:09:02.666 =====Discovery Log Entry 3====== 00:09:02.666 trtype: tcp 00:09:02.666 adrfam: ipv4 00:09:02.666 subtype: nvme subsystem 00:09:02.666 treq: not required 00:09:02.666 portid: 0 00:09:02.666 trsvcid: 4420 00:09:02.666 subnqn: nqn.2016-06.io.spdk:cnode3 00:09:02.666 traddr: 10.0.0.2 00:09:02.666 eflags: none 00:09:02.666 sectype: none 00:09:02.666 =====Discovery Log Entry 4====== 00:09:02.666 trtype: tcp 00:09:02.666 adrfam: ipv4 00:09:02.666 subtype: nvme subsystem 00:09:02.666 treq: not required 
00:09:02.666 portid: 0 00:09:02.666 trsvcid: 4420 00:09:02.666 subnqn: nqn.2016-06.io.spdk:cnode4 00:09:02.666 traddr: 10.0.0.2 00:09:02.666 eflags: none 00:09:02.666 sectype: none 00:09:02.666 =====Discovery Log Entry 5====== 00:09:02.666 trtype: tcp 00:09:02.666 adrfam: ipv4 00:09:02.666 subtype: discovery subsystem referral 00:09:02.666 treq: not required 00:09:02.666 portid: 0 00:09:02.666 trsvcid: 4430 00:09:02.666 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:09:02.666 traddr: 10.0.0.2 00:09:02.666 eflags: none 00:09:02.666 sectype: none 00:09:02.666 15:50:31 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:09:02.666 Perform nvmf subsystem discovery via RPC 00:09:02.666 15:50:31 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:09:02.666 15:50:31 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:02.666 15:50:31 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:02.666 [ 00:09:02.666 { 00:09:02.666 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:09:02.666 "subtype": "Discovery", 00:09:02.666 "listen_addresses": [ 00:09:02.666 { 00:09:02.666 "trtype": "TCP", 00:09:02.666 "adrfam": "IPv4", 00:09:02.666 "traddr": "10.0.0.2", 00:09:02.666 "trsvcid": "4420" 00:09:02.666 } 00:09:02.666 ], 00:09:02.666 "allow_any_host": true, 00:09:02.666 "hosts": [] 00:09:02.666 }, 00:09:02.666 { 00:09:02.666 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:09:02.666 "subtype": "NVMe", 00:09:02.666 "listen_addresses": [ 00:09:02.666 { 00:09:02.666 "trtype": "TCP", 00:09:02.666 "adrfam": "IPv4", 00:09:02.666 "traddr": "10.0.0.2", 00:09:02.666 "trsvcid": "4420" 00:09:02.666 } 00:09:02.666 ], 00:09:02.666 "allow_any_host": true, 00:09:02.666 "hosts": [], 00:09:02.666 "serial_number": "SPDK00000000000001", 00:09:02.666 "model_number": "SPDK bdev Controller", 00:09:02.666 "max_namespaces": 32, 00:09:02.666 "min_cntlid": 1, 00:09:02.666 "max_cntlid": 65519, 00:09:02.666 "namespaces": [ 00:09:02.666 { 00:09:02.666 "nsid": 1, 00:09:02.666 "bdev_name": "Null1", 00:09:02.666 "name": "Null1", 00:09:02.666 "nguid": "613163955B294552931686F31BBD0C43", 00:09:02.666 "uuid": "61316395-5b29-4552-9316-86f31bbd0c43" 00:09:02.666 } 00:09:02.666 ] 00:09:02.666 }, 00:09:02.666 { 00:09:02.666 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:09:02.666 "subtype": "NVMe", 00:09:02.666 "listen_addresses": [ 00:09:02.666 { 00:09:02.666 "trtype": "TCP", 00:09:02.666 "adrfam": "IPv4", 00:09:02.666 "traddr": "10.0.0.2", 00:09:02.666 "trsvcid": "4420" 00:09:02.666 } 00:09:02.666 ], 00:09:02.666 "allow_any_host": true, 00:09:02.666 "hosts": [], 00:09:02.666 "serial_number": "SPDK00000000000002", 00:09:02.666 "model_number": "SPDK bdev Controller", 00:09:02.666 "max_namespaces": 32, 00:09:02.666 "min_cntlid": 1, 00:09:02.666 "max_cntlid": 65519, 00:09:02.667 "namespaces": [ 00:09:02.667 { 00:09:02.667 "nsid": 1, 00:09:02.667 "bdev_name": "Null2", 00:09:02.667 "name": "Null2", 00:09:02.667 "nguid": "A3EC01FBB5FC4085B30FB3AD56DAB8FB", 00:09:02.667 "uuid": "a3ec01fb-b5fc-4085-b30f-b3ad56dab8fb" 00:09:02.667 } 00:09:02.667 ] 00:09:02.667 }, 00:09:02.667 { 00:09:02.667 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:09:02.667 "subtype": "NVMe", 00:09:02.667 "listen_addresses": [ 00:09:02.667 { 00:09:02.667 "trtype": "TCP", 00:09:02.667 "adrfam": "IPv4", 00:09:02.667 "traddr": "10.0.0.2", 00:09:02.667 "trsvcid": "4420" 00:09:02.667 } 00:09:02.667 ], 00:09:02.667 "allow_any_host": true, 
00:09:02.667 "hosts": [], 00:09:02.667 "serial_number": "SPDK00000000000003", 00:09:02.667 "model_number": "SPDK bdev Controller", 00:09:02.667 "max_namespaces": 32, 00:09:02.667 "min_cntlid": 1, 00:09:02.667 "max_cntlid": 65519, 00:09:02.667 "namespaces": [ 00:09:02.667 { 00:09:02.667 "nsid": 1, 00:09:02.667 "bdev_name": "Null3", 00:09:02.667 "name": "Null3", 00:09:02.667 "nguid": "672FE3B089074223BC08BA8C24FF19B9", 00:09:02.667 "uuid": "672fe3b0-8907-4223-bc08-ba8c24ff19b9" 00:09:02.667 } 00:09:02.667 ] 00:09:02.667 }, 00:09:02.667 { 00:09:02.667 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:09:02.667 "subtype": "NVMe", 00:09:02.667 "listen_addresses": [ 00:09:02.667 { 00:09:02.667 "trtype": "TCP", 00:09:02.667 "adrfam": "IPv4", 00:09:02.667 "traddr": "10.0.0.2", 00:09:02.667 "trsvcid": "4420" 00:09:02.667 } 00:09:02.667 ], 00:09:02.667 "allow_any_host": true, 00:09:02.667 "hosts": [], 00:09:02.667 "serial_number": "SPDK00000000000004", 00:09:02.667 "model_number": "SPDK bdev Controller", 00:09:02.667 "max_namespaces": 32, 00:09:02.667 "min_cntlid": 1, 00:09:02.667 "max_cntlid": 65519, 00:09:02.667 "namespaces": [ 00:09:02.667 { 00:09:02.667 "nsid": 1, 00:09:02.667 "bdev_name": "Null4", 00:09:02.667 "name": "Null4", 00:09:02.667 "nguid": "0E953BAB12C54341B3172E07EBD8B4F5", 00:09:02.667 "uuid": "0e953bab-12c5-4341-b317-2e07ebd8b4f5" 00:09:02.667 } 00:09:02.667 ] 00:09:02.667 } 00:09:02.667 ] 00:09:02.667 15:50:31 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:02.667 15:50:31 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:09:02.667 15:50:31 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:09:02.667 15:50:31 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:02.667 15:50:31 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:02.667 15:50:31 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:02.667 15:50:31 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:02.667 15:50:31 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:09:02.667 15:50:31 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:02.667 15:50:31 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:02.667 15:50:31 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:02.667 15:50:31 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:09:02.667 15:50:31 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:09:02.667 15:50:31 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:02.667 15:50:31 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:02.667 15:50:31 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:02.667 15:50:31 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:09:02.667 15:50:31 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:02.667 15:50:31 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:02.667 15:50:31 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
00:09:02.667 15:50:31 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:09:02.667 15:50:31 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:09:02.667 15:50:31 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:02.667 15:50:31 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:02.667 15:50:31 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:02.667 15:50:31 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:09:02.667 15:50:31 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:02.667 15:50:31 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:02.667 15:50:31 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:02.667 15:50:31 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:09:02.667 15:50:31 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:09:02.667 15:50:31 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:02.667 15:50:31 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:02.667 15:50:31 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:02.667 15:50:31 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:09:02.667 15:50:31 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:02.667 15:50:31 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:02.667 15:50:31 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:02.667 15:50:31 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:09:02.667 15:50:31 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:02.667 15:50:31 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:02.667 15:50:31 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:02.667 15:50:31 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:09:02.667 15:50:31 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:09:02.667 15:50:31 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:02.667 15:50:31 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:02.667 15:50:31 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:02.667 15:50:31 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:09:02.667 15:50:31 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']' 00:09:02.667 15:50:31 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:09:02.667 15:50:31 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:09:02.667 15:50:31 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:02.667 15:50:31 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@117 -- # sync 00:09:02.667 15:50:31 nvmf_tcp.nvmf_target_discovery 
-- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:02.667 15:50:31 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@120 -- # set +e 00:09:02.667 15:50:31 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:02.667 15:50:31 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:02.667 rmmod nvme_tcp 00:09:02.667 rmmod nvme_fabrics 00:09:02.667 rmmod nvme_keyring 00:09:02.667 15:50:31 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:02.667 15:50:31 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@124 -- # set -e 00:09:02.667 15:50:31 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@125 -- # return 0 00:09:02.667 15:50:31 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@489 -- # '[' -n 3621331 ']' 00:09:02.667 15:50:31 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@490 -- # killprocess 3621331 00:09:02.667 15:50:31 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@948 -- # '[' -z 3621331 ']' 00:09:02.667 15:50:31 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@952 -- # kill -0 3621331 00:09:02.667 15:50:31 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@953 -- # uname 00:09:02.667 15:50:31 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:02.667 15:50:31 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3621331 00:09:02.926 15:50:31 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:09:02.926 15:50:31 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:09:02.926 15:50:31 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3621331' 00:09:02.926 killing process with pid 3621331 00:09:02.926 15:50:31 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@967 -- # kill 3621331 00:09:02.926 15:50:31 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@972 -- # wait 3621331 00:09:02.926 15:50:31 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:02.926 15:50:31 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:02.926 15:50:31 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:02.926 15:50:31 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:02.926 15:50:31 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:02.926 15:50:31 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:02.926 15:50:31 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:02.926 15:50:31 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:05.457 15:50:33 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:05.457 00:09:05.457 real 0m9.070s 00:09:05.457 user 0m7.235s 00:09:05.457 sys 0m4.297s 00:09:05.457 15:50:33 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:05.457 15:50:33 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:05.457 ************************************ 00:09:05.457 END TEST nvmf_target_discovery 00:09:05.457 ************************************ 00:09:05.457 15:50:33 nvmf_tcp -- common/autotest_common.sh@1142 
-- # return 0 00:09:05.457 15:50:33 nvmf_tcp -- nvmf/nvmf.sh@26 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:09:05.457 15:50:33 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:05.457 15:50:33 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:05.457 15:50:33 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:05.457 ************************************ 00:09:05.457 START TEST nvmf_referrals 00:09:05.457 ************************************ 00:09:05.458 15:50:33 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:09:05.458 * Looking for test storage... 00:09:05.458 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:05.458 15:50:34 nvmf_tcp.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:05.458 15:50:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@7 -- # uname -s 00:09:05.458 15:50:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:05.458 15:50:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:05.458 15:50:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:05.458 15:50:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:05.458 15:50:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:05.458 15:50:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:05.458 15:50:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:05.458 15:50:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:05.458 15:50:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:05.458 15:50:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:05.458 15:50:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:09:05.458 15:50:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:09:05.458 15:50:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:05.458 15:50:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:05.458 15:50:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:05.458 15:50:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:05.458 15:50:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:05.458 15:50:34 nvmf_tcp.nvmf_referrals -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:05.458 15:50:34 nvmf_tcp.nvmf_referrals -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:05.458 15:50:34 nvmf_tcp.nvmf_referrals -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:05.458 15:50:34 nvmf_tcp.nvmf_referrals -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:05.458 15:50:34 nvmf_tcp.nvmf_referrals -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:05.458 15:50:34 nvmf_tcp.nvmf_referrals -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:05.458 15:50:34 nvmf_tcp.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:09:05.458 15:50:34 nvmf_tcp.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:05.458 15:50:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@47 -- # : 0 00:09:05.458 15:50:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:05.458 15:50:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:05.458 15:50:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:05.458 15:50:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:05.458 15:50:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:05.458 15:50:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:05.458 15:50:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:05.458 15:50:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:05.458 15:50:34 nvmf_tcp.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:09:05.458 15:50:34 nvmf_tcp.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:09:05.458 15:50:34 nvmf_tcp.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 
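(For context on the referral checks that follow: the test adds the three referral addresses defined above to the discovery subsystem, reads them back over both the RPC and a discovery-log query, and then removes them. A minimal by-hand sketch, assuming a target with a discovery listener on 10.0.0.2:8009; the commands are lifted from the xtrace below, the loop is only for brevity:)

scripts/rpc.py nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery
for ip in 127.0.0.2 127.0.0.3 127.0.0.4; do
    scripts/rpc.py nvmf_discovery_add_referral -t tcp -a "$ip" -s 4430
done
scripts/rpc.py nvmf_discovery_get_referrals | jq -r '.[].address.traddr' | sort   # RPC view of the referrals
nvme discover -t tcp -a 10.0.0.2 -s 8009 -o json \
    | jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' | sort   # wire view; the two listings should match
for ip in 127.0.0.2 127.0.0.3 127.0.0.4; do
    scripts/rpc.py nvmf_discovery_remove_referral -t tcp -a "$ip" -s 4430
done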
00:09:05.458 15:50:34 nvmf_tcp.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:09:05.458 15:50:34 nvmf_tcp.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:09:05.458 15:50:34 nvmf_tcp.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:09:05.458 15:50:34 nvmf_tcp.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:09:05.458 15:50:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:05.458 15:50:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:05.458 15:50:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:05.458 15:50:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:05.458 15:50:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:05.458 15:50:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:05.458 15:50:34 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:05.458 15:50:34 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:05.458 15:50:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:05.458 15:50:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:05.458 15:50:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@285 -- # xtrace_disable 00:09:05.458 15:50:34 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:10.722 15:50:38 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:10.722 15:50:38 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@291 -- # pci_devs=() 00:09:10.722 15:50:38 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:10.722 15:50:38 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:10.722 15:50:38 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:10.722 15:50:38 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:10.722 15:50:38 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:10.722 15:50:38 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@295 -- # net_devs=() 00:09:10.722 15:50:38 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:10.722 15:50:38 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@296 -- # e810=() 00:09:10.722 15:50:38 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@296 -- # local -ga e810 00:09:10.722 15:50:38 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@297 -- # x722=() 00:09:10.722 15:50:38 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@297 -- # local -ga x722 00:09:10.722 15:50:38 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@298 -- # mlx=() 00:09:10.722 15:50:38 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@298 -- # local -ga mlx 00:09:10.722 15:50:38 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:10.722 15:50:38 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:10.722 15:50:38 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:10.722 15:50:38 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:10.722 15:50:38 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:10.722 15:50:38 
nvmf_tcp.nvmf_referrals -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:10.722 15:50:38 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:10.722 15:50:38 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:10.722 15:50:38 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:10.722 15:50:38 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:10.722 15:50:38 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:10.722 15:50:38 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:10.722 15:50:38 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:10.722 15:50:38 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:10.722 15:50:38 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:10.722 15:50:38 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:10.722 15:50:38 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:10.722 15:50:38 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:10.722 15:50:38 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:09:10.722 Found 0000:86:00.0 (0x8086 - 0x159b) 00:09:10.722 15:50:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:10.722 15:50:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:10.723 15:50:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:10.723 15:50:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:10.723 15:50:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:10.723 15:50:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:10.723 15:50:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:09:10.723 Found 0000:86:00.1 (0x8086 - 0x159b) 00:09:10.723 15:50:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:10.723 15:50:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:10.723 15:50:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:10.723 15:50:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:10.723 15:50:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:10.723 15:50:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:10.723 15:50:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:10.723 15:50:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:10.723 15:50:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:10.723 15:50:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:10.723 15:50:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:10.723 15:50:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:10.723 15:50:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:10.723 15:50:39 
nvmf_tcp.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:10.723 15:50:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:10.723 15:50:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:09:10.723 Found net devices under 0000:86:00.0: cvl_0_0 00:09:10.723 15:50:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:10.723 15:50:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:10.723 15:50:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:10.723 15:50:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:10.723 15:50:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:10.723 15:50:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:10.723 15:50:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:10.723 15:50:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:10.723 15:50:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:09:10.723 Found net devices under 0000:86:00.1: cvl_0_1 00:09:10.723 15:50:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:10.723 15:50:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:10.723 15:50:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # is_hw=yes 00:09:10.723 15:50:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:10.723 15:50:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:09:10.723 15:50:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:09:10.723 15:50:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:10.723 15:50:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:10.723 15:50:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:10.723 15:50:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:10.723 15:50:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:10.723 15:50:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:10.723 15:50:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:10.723 15:50:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:10.723 15:50:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:10.723 15:50:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:10.723 15:50:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:10.723 15:50:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:10.723 15:50:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:10.723 15:50:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:10.723 15:50:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:10.723 15:50:39 
nvmf_tcp.nvmf_referrals -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:10.723 15:50:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:10.723 15:50:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:10.723 15:50:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:10.723 15:50:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:10.723 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:10.723 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.228 ms 00:09:10.723 00:09:10.723 --- 10.0.0.2 ping statistics --- 00:09:10.723 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:10.723 rtt min/avg/max/mdev = 0.228/0.228/0.228/0.000 ms 00:09:10.723 15:50:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:10.723 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:10.723 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.271 ms 00:09:10.723 00:09:10.723 --- 10.0.0.1 ping statistics --- 00:09:10.723 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:10.723 rtt min/avg/max/mdev = 0.271/0.271/0.271/0.000 ms 00:09:10.723 15:50:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:10.723 15:50:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@422 -- # return 0 00:09:10.723 15:50:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:10.723 15:50:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:10.723 15:50:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:10.723 15:50:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:10.723 15:50:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:10.723 15:50:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:10.723 15:50:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:10.723 15:50:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:09:10.723 15:50:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:10.723 15:50:39 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:10.723 15:50:39 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:10.723 15:50:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@481 -- # nvmfpid=3625097 00:09:10.723 15:50:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@482 -- # waitforlisten 3625097 00:09:10.723 15:50:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:10.723 15:50:39 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@829 -- # '[' -z 3625097 ']' 00:09:10.723 15:50:39 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:10.723 15:50:39 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:10.723 15:50:39 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:09:10.723 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:10.723 15:50:39 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:10.723 15:50:39 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:10.723 [2024-07-15 15:50:39.346036] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 00:09:10.723 [2024-07-15 15:50:39.346082] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:10.723 EAL: No free 2048 kB hugepages reported on node 1 00:09:10.723 [2024-07-15 15:50:39.404474] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:10.723 [2024-07-15 15:50:39.486249] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:10.723 [2024-07-15 15:50:39.486283] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:10.723 [2024-07-15 15:50:39.486290] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:10.723 [2024-07-15 15:50:39.486296] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:10.723 [2024-07-15 15:50:39.486301] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:10.723 [2024-07-15 15:50:39.486341] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:10.723 [2024-07-15 15:50:39.486436] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:10.723 [2024-07-15 15:50:39.486522] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:09:10.723 [2024-07-15 15:50:39.486523] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:11.289 15:50:40 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:11.289 15:50:40 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@862 -- # return 0 00:09:11.289 15:50:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:11.289 15:50:40 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:11.289 15:50:40 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:11.289 15:50:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:11.289 15:50:40 nvmf_tcp.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:11.289 15:50:40 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:11.289 15:50:40 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:11.289 [2024-07-15 15:50:40.214367] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:11.289 15:50:40 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:11.289 15:50:40 nvmf_tcp.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:09:11.289 15:50:40 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:11.289 15:50:40 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:11.547 [2024-07-15 15:50:40.227778] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 
10.0.0.2 port 8009 *** 00:09:11.547 15:50:40 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:11.547 15:50:40 nvmf_tcp.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:09:11.547 15:50:40 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:11.547 15:50:40 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:11.547 15:50:40 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:11.547 15:50:40 nvmf_tcp.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:09:11.547 15:50:40 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:11.547 15:50:40 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:11.547 15:50:40 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:11.547 15:50:40 nvmf_tcp.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:09:11.547 15:50:40 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:11.547 15:50:40 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:11.547 15:50:40 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:11.547 15:50:40 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:09:11.547 15:50:40 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:09:11.547 15:50:40 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:11.547 15:50:40 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:11.547 15:50:40 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:11.547 15:50:40 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:09:11.547 15:50:40 nvmf_tcp.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:09:11.547 15:50:40 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:09:11.547 15:50:40 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:09:11.547 15:50:40 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:09:11.547 15:50:40 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:11.547 15:50:40 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:09:11.547 15:50:40 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:11.547 15:50:40 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:11.547 15:50:40 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:09:11.547 15:50:40 nvmf_tcp.nvmf_referrals -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:09:11.547 15:50:40 nvmf_tcp.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:09:11.548 15:50:40 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:09:11.548 15:50:40 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:09:11.548 15:50:40 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 
--hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:09:11.548 15:50:40 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:09:11.548 15:50:40 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:09:11.805 15:50:40 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:09:11.805 15:50:40 nvmf_tcp.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:09:11.805 15:50:40 nvmf_tcp.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:09:11.805 15:50:40 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:11.805 15:50:40 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:11.805 15:50:40 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:11.805 15:50:40 nvmf_tcp.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:09:11.805 15:50:40 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:11.805 15:50:40 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:11.805 15:50:40 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:11.805 15:50:40 nvmf_tcp.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:09:11.805 15:50:40 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:11.805 15:50:40 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:11.805 15:50:40 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:11.805 15:50:40 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:09:11.805 15:50:40 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:09:11.805 15:50:40 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:11.805 15:50:40 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:11.805 15:50:40 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:11.805 15:50:40 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:09:11.805 15:50:40 nvmf_tcp.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:09:11.805 15:50:40 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:09:11.805 15:50:40 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:09:11.805 15:50:40 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:09:11.805 15:50:40 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:09:11.805 15:50:40 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:09:11.805 15:50:40 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:09:11.805 15:50:40 nvmf_tcp.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:09:11.805 15:50:40 nvmf_tcp.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 
127.0.0.2 -s 4430 -n discovery 00:09:11.805 15:50:40 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:11.805 15:50:40 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:11.805 15:50:40 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:11.805 15:50:40 nvmf_tcp.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:09:11.805 15:50:40 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:11.805 15:50:40 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:11.805 15:50:40 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:11.805 15:50:40 nvmf_tcp.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:09:11.805 15:50:40 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:09:11.806 15:50:40 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:09:11.806 15:50:40 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:09:11.806 15:50:40 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:11.806 15:50:40 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:11.806 15:50:40 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:09:11.806 15:50:40 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:12.064 15:50:40 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:09:12.064 15:50:40 nvmf_tcp.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:09:12.064 15:50:40 nvmf_tcp.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:09:12.064 15:50:40 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:09:12.064 15:50:40 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:09:12.064 15:50:40 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:09:12.064 15:50:40 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:09:12.064 15:50:40 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:09:12.064 15:50:40 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:09:12.064 15:50:40 nvmf_tcp.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:09:12.064 15:50:40 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:09:12.064 15:50:40 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:09:12.064 15:50:40 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:09:12.064 15:50:40 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:09:12.064 15:50:40 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:09:12.064 15:50:40 
nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:09:12.064 15:50:40 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:09:12.064 15:50:40 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:09:12.064 15:50:40 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:09:12.064 15:50:40 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:09:12.064 15:50:40 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:09:12.322 15:50:41 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:09:12.322 15:50:41 nvmf_tcp.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:09:12.322 15:50:41 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:12.322 15:50:41 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:12.322 15:50:41 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:12.322 15:50:41 nvmf_tcp.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:09:12.322 15:50:41 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:09:12.322 15:50:41 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:09:12.322 15:50:41 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:09:12.322 15:50:41 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:12.322 15:50:41 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:12.322 15:50:41 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:09:12.322 15:50:41 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:12.322 15:50:41 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:09:12.322 15:50:41 nvmf_tcp.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:09:12.322 15:50:41 nvmf_tcp.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme 00:09:12.322 15:50:41 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:09:12.322 15:50:41 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:09:12.322 15:50:41 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:09:12.322 15:50:41 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:09:12.322 15:50:41 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:09:12.322 15:50:41 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:09:12.322 15:50:41 nvmf_tcp.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:09:12.322 15:50:41 
nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:09:12.322 15:50:41 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:09:12.322 15:50:41 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:09:12.322 15:50:41 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:09:12.322 15:50:41 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:09:12.580 15:50:41 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:09:12.580 15:50:41 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:09:12.580 15:50:41 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:09:12.580 15:50:41 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:09:12.580 15:50:41 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:09:12.580 15:50:41 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:09:12.580 15:50:41 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:09:12.580 15:50:41 nvmf_tcp.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:09:12.580 15:50:41 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:12.580 15:50:41 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:12.580 15:50:41 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:12.580 15:50:41 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:09:12.580 15:50:41 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:09:12.580 15:50:41 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:12.580 15:50:41 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:12.580 15:50:41 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:12.580 15:50:41 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:09:12.580 15:50:41 nvmf_tcp.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:09:12.580 15:50:41 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:09:12.580 15:50:41 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:09:12.580 15:50:41 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:09:12.580 15:50:41 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:09:12.580 15:50:41 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:09:12.838 
15:50:41 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:09:12.838 15:50:41 nvmf_tcp.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:09:12.838 15:50:41 nvmf_tcp.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:09:12.838 15:50:41 nvmf_tcp.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:09:12.838 15:50:41 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:12.838 15:50:41 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@117 -- # sync 00:09:12.838 15:50:41 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:12.838 15:50:41 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@120 -- # set +e 00:09:12.838 15:50:41 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:12.838 15:50:41 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:12.838 rmmod nvme_tcp 00:09:12.838 rmmod nvme_fabrics 00:09:12.838 rmmod nvme_keyring 00:09:12.838 15:50:41 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:12.838 15:50:41 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@124 -- # set -e 00:09:12.838 15:50:41 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@125 -- # return 0 00:09:12.838 15:50:41 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@489 -- # '[' -n 3625097 ']' 00:09:12.838 15:50:41 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@490 -- # killprocess 3625097 00:09:12.838 15:50:41 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@948 -- # '[' -z 3625097 ']' 00:09:12.838 15:50:41 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@952 -- # kill -0 3625097 00:09:12.838 15:50:41 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@953 -- # uname 00:09:12.838 15:50:41 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:12.838 15:50:41 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3625097 00:09:12.838 15:50:41 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:09:12.838 15:50:41 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:09:12.838 15:50:41 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3625097' 00:09:12.838 killing process with pid 3625097 00:09:12.838 15:50:41 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@967 -- # kill 3625097 00:09:12.838 15:50:41 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@972 -- # wait 3625097 00:09:13.097 15:50:41 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:13.097 15:50:41 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:13.097 15:50:41 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:13.097 15:50:41 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:13.097 15:50:41 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:13.097 15:50:41 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:13.097 15:50:41 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:13.097 15:50:41 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:15.631 15:50:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:15.631 00:09:15.631 real 0m10.059s 00:09:15.631 user 0m12.065s 00:09:15.631 sys 0m4.593s 00:09:15.631 15:50:43 
nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:15.631 15:50:43 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:15.631 ************************************ 00:09:15.631 END TEST nvmf_referrals 00:09:15.631 ************************************ 00:09:15.631 15:50:44 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:09:15.631 15:50:44 nvmf_tcp -- nvmf/nvmf.sh@27 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:09:15.631 15:50:44 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:15.631 15:50:44 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:15.631 15:50:44 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:15.631 ************************************ 00:09:15.631 START TEST nvmf_connect_disconnect 00:09:15.631 ************************************ 00:09:15.631 15:50:44 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:09:15.631 * Looking for test storage... 00:09:15.631 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:15.631 15:50:44 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:15.631 15:50:44 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:09:15.631 15:50:44 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:15.631 15:50:44 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:15.631 15:50:44 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:15.631 15:50:44 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:15.631 15:50:44 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:15.631 15:50:44 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:15.631 15:50:44 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:15.631 15:50:44 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:15.631 15:50:44 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:15.631 15:50:44 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:15.631 15:50:44 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:09:15.631 15:50:44 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:09:15.631 15:50:44 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:15.631 15:50:44 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:15.631 15:50:44 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:15.631 15:50:44 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:15.631 15:50:44 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:15.631 15:50:44 
nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:15.631 15:50:44 nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:15.631 15:50:44 nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:15.631 15:50:44 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:15.631 15:50:44 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:15.631 15:50:44 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:15.631 15:50:44 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:09:15.631 15:50:44 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:15.631 15:50:44 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@47 -- # : 0 00:09:15.631 15:50:44 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:15.631 15:50:44 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:15.631 15:50:44 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:15.631 15:50:44 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:15.631 15:50:44 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:15.631 15:50:44 nvmf_tcp.nvmf_connect_disconnect -- 
nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:15.631 15:50:44 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:15.631 15:50:44 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:15.631 15:50:44 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:15.631 15:50:44 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:15.631 15:50:44 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:09:15.631 15:50:44 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:15.631 15:50:44 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:15.631 15:50:44 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:15.631 15:50:44 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:15.631 15:50:44 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:15.631 15:50:44 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:15.631 15:50:44 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:15.631 15:50:44 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:15.631 15:50:44 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:15.631 15:50:44 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:15.631 15:50:44 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:09:15.631 15:50:44 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:09:20.898 15:50:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:20.898 15:50:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # pci_devs=() 00:09:20.898 15:50:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:20.898 15:50:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:20.898 15:50:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:20.898 15:50:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:20.898 15:50:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:20.898 15:50:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:09:20.898 15:50:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:20.898 15:50:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # e810=() 00:09:20.898 15:50:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:09:20.898 15:50:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # x722=() 00:09:20.898 15:50:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # local -ga x722 00:09:20.898 15:50:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:09:20.898 15:50:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:09:20.898 15:50:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:20.898 15:50:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:20.898 15:50:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:20.898 15:50:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:20.898 15:50:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:20.898 15:50:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:20.898 15:50:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:20.898 15:50:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:20.898 15:50:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:20.898 15:50:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:20.898 15:50:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:20.898 15:50:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:20.898 15:50:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:20.898 15:50:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:20.898 15:50:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:20.898 15:50:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:20.898 15:50:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:20.898 15:50:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:20.898 15:50:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:09:20.898 Found 0000:86:00.0 (0x8086 - 0x159b) 00:09:20.898 15:50:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:20.898 15:50:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:20.898 15:50:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:20.898 15:50:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:20.898 15:50:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:20.898 15:50:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:20.898 15:50:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:09:20.898 Found 0000:86:00.1 (0x8086 - 0x159b) 00:09:20.898 15:50:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:20.898 15:50:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:20.898 15:50:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:20.898 15:50:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:20.898 15:50:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:20.898 15:50:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:20.898 15:50:49 
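The device-ID matching above selected the two Intel E810 (0x159b) functions; the loop below resolves each one to its kernel network interfaces through sysfs, which is where the "Found net devices under ..." lines come from. As a standalone sketch, with the BDFs taken from this run:

# Each PCI function exposes its netdevs under /sys/bus/pci/devices/<bdf>/net/.
for pci in 0000:86:00.0 0000:86:00.1; do
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
    pci_net_devs=("${pci_net_devs[@]##*/}")    # strip the sysfs path, keep interface names
    echo "Found net devices under $pci: ${pci_net_devs[*]}"
done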
nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:20.898 15:50:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:20.898 15:50:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:20.898 15:50:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:20.898 15:50:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:20.898 15:50:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:20.898 15:50:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:20.898 15:50:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:20.898 15:50:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:20.898 15:50:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:09:20.898 Found net devices under 0000:86:00.0: cvl_0_0 00:09:20.898 15:50:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:20.898 15:50:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:20.898 15:50:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:20.898 15:50:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:20.898 15:50:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:20.898 15:50:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:20.898 15:50:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:20.898 15:50:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:20.898 15:50:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:09:20.898 Found net devices under 0000:86:00.1: cvl_0_1 00:09:20.898 15:50:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:20.898 15:50:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:20.898 15:50:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # is_hw=yes 00:09:20.898 15:50:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:20.898 15:50:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:09:20.898 15:50:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:09:20.898 15:50:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:20.898 15:50:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:20.898 15:50:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:20.898 15:50:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:20.898 15:50:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:20.898 15:50:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:20.898 15:50:49 nvmf_tcp.nvmf_connect_disconnect -- 
nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:20.898 15:50:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:20.898 15:50:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:20.898 15:50:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:20.898 15:50:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:20.898 15:50:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:20.898 15:50:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:20.898 15:50:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:20.898 15:50:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:20.898 15:50:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:20.898 15:50:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:20.898 15:50:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:20.898 15:50:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:20.898 15:50:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:20.898 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:20.898 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.203 ms 00:09:20.898 00:09:20.898 --- 10.0.0.2 ping statistics --- 00:09:20.898 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:20.898 rtt min/avg/max/mdev = 0.203/0.203/0.203/0.000 ms 00:09:20.898 15:50:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:20.898 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:20.898 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.169 ms 00:09:20.898 00:09:20.898 --- 10.0.0.1 ping statistics --- 00:09:20.898 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:20.899 rtt min/avg/max/mdev = 0.169/0.169/0.169/0.000 ms 00:09:20.899 15:50:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:20.899 15:50:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # return 0 00:09:20.899 15:50:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:20.899 15:50:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:20.899 15:50:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:20.899 15:50:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:20.899 15:50:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:20.899 15:50:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:20.899 15:50:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:20.899 15:50:49 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:09:20.899 15:50:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:20.899 15:50:49 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:20.899 15:50:49 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:09:20.899 15:50:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@481 -- # nvmfpid=3628977 00:09:20.899 15:50:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # waitforlisten 3628977 00:09:20.899 15:50:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:20.899 15:50:49 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@829 -- # '[' -z 3628977 ']' 00:09:20.899 15:50:49 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:20.899 15:50:49 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:20.899 15:50:49 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:20.899 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:20.899 15:50:49 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:20.899 15:50:49 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:09:20.899 [2024-07-15 15:50:49.626832] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 
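The nvmf_tcp_init sequence above builds the test topology: one port stays in the root namespace as the initiator (cvl_0_1, 10.0.0.1), its sibling moves into the cvl_0_0_ns_spdk namespace as the target (cvl_0_0, 10.0.0.2), an iptables rule admits NVMe/TCP traffic on port 4420, and a ping in each direction proves reachability. Condensed into one runnable sketch (root required; interface names are from this run):

ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port lives in the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator port stays in the root ns
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                   # root ns -> namespace
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # namespace -> root ns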
00:09:20.899 [2024-07-15 15:50:49.626882] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:20.899 EAL: No free 2048 kB hugepages reported on node 1 00:09:20.899 [2024-07-15 15:50:49.686275] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:20.899 [2024-07-15 15:50:49.767762] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:20.899 [2024-07-15 15:50:49.767798] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:20.899 [2024-07-15 15:50:49.767805] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:20.899 [2024-07-15 15:50:49.767812] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:20.899 [2024-07-15 15:50:49.767818] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:20.899 [2024-07-15 15:50:49.767863] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:20.899 [2024-07-15 15:50:49.767956] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:20.899 [2024-07-15 15:50:49.768017] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:09:20.899 [2024-07-15 15:50:49.768019] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:21.831 15:50:50 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:21.831 15:50:50 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@862 -- # return 0 00:09:21.831 15:50:50 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:21.831 15:50:50 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:21.831 15:50:50 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:09:21.831 15:50:50 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:21.831 15:50:50 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:09:21.831 15:50:50 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:21.831 15:50:50 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:09:21.831 [2024-07-15 15:50:50.479276] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:21.831 15:50:50 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:21.831 15:50:50 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:09:21.831 15:50:50 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:21.831 15:50:50 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:09:21.831 15:50:50 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:21.831 15:50:50 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:09:21.831 15:50:50 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:21.831 15:50:50 
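The rpc_cmd calls starting above and continuing below assemble the target that the loop then connects to and disconnects from num_iterations=5 times. Gathered into one hedged sketch, with scripts/rpc.py standing in for the test's rpc_cmd wrapper (the RPC names and arguments are exactly those in the trace; rpc.py talks to the default /var/tmp/spdk.sock, which stays reachable even though the app runs inside the netns):

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o -u 8192 -c 0
$rpc bdev_malloc_create 64 512                       # returns the bdev name, Malloc0
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

# The test also passes --hostnqn/--hostid on connect; omitted here for brevity.
for i in $(seq 1 5); do
    nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1    # prints "... disconnected 1 controller(s)"
done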
nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:21.831 15:50:50 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:09:21.831 15:50:50 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:21.831 15:50:50 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:21.831 15:50:50 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:21.831 15:50:50 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:09:21.831 15:50:50 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:21.831 15:50:50 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:21.831 15:50:50 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:21.831 15:50:50 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:09:21.831 [2024-07-15 15:50:50.531184] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:21.831 15:50:50 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:21.831 15:50:50 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 0 -eq 1 ']' 00:09:21.831 15:50:50 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@31 -- # num_iterations=5 00:09:21.831 15:50:50 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:09:25.102 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:28.375 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:31.653 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:34.931 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:38.210 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:38.211 15:51:06 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:09:38.211 15:51:06 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:09:38.211 15:51:06 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:38.211 15:51:06 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # sync 00:09:38.211 15:51:06 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:38.211 15:51:06 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@120 -- # set +e 00:09:38.211 15:51:06 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:38.211 15:51:06 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:38.211 rmmod nvme_tcp 00:09:38.211 rmmod nvme_fabrics 00:09:38.211 rmmod nvme_keyring 00:09:38.211 15:51:06 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:38.211 15:51:06 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set -e 00:09:38.211 15:51:06 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # return 0 00:09:38.211 15:51:06 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@489 -- # '[' -n 3628977 ']' 00:09:38.211 15:51:06 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@490 -- # killprocess 3628977 00:09:38.211 15:51:06 nvmf_tcp.nvmf_connect_disconnect -- 
common/autotest_common.sh@948 -- # '[' -z 3628977 ']' 00:09:38.211 15:51:06 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@952 -- # kill -0 3628977 00:09:38.211 15:51:06 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@953 -- # uname 00:09:38.211 15:51:06 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:38.211 15:51:06 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3628977 00:09:38.211 15:51:06 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:09:38.211 15:51:06 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:09:38.211 15:51:06 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3628977' 00:09:38.211 killing process with pid 3628977 00:09:38.211 15:51:06 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@967 -- # kill 3628977 00:09:38.211 15:51:06 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@972 -- # wait 3628977 00:09:38.211 15:51:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:38.211 15:51:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:38.211 15:51:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:38.211 15:51:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:38.211 15:51:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:38.211 15:51:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:38.211 15:51:07 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:38.211 15:51:07 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:40.829 15:51:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:40.829 00:09:40.829 real 0m25.104s 00:09:40.829 user 1m10.422s 00:09:40.829 sys 0m5.204s 00:09:40.830 15:51:09 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:40.830 15:51:09 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:09:40.830 ************************************ 00:09:40.830 END TEST nvmf_connect_disconnect 00:09:40.830 ************************************ 00:09:40.830 15:51:09 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:09:40.830 15:51:09 nvmf_tcp -- nvmf/nvmf.sh@28 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:09:40.830 15:51:09 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:40.830 15:51:09 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:40.830 15:51:09 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:40.830 ************************************ 00:09:40.830 START TEST nvmf_multitarget 00:09:40.830 ************************************ 00:09:40.830 15:51:09 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:09:40.830 * Looking for test storage... 
00:09:40.830 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:40.830 15:51:09 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:40.830 15:51:09 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:09:40.830 15:51:09 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:40.830 15:51:09 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:40.830 15:51:09 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:40.830 15:51:09 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:40.830 15:51:09 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:40.830 15:51:09 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:40.830 15:51:09 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:40.830 15:51:09 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:40.830 15:51:09 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:40.830 15:51:09 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:40.830 15:51:09 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:09:40.830 15:51:09 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:09:40.830 15:51:09 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:40.830 15:51:09 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:40.830 15:51:09 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:40.830 15:51:09 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:40.830 15:51:09 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:40.830 15:51:09 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:40.830 15:51:09 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:40.830 15:51:09 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:40.830 15:51:09 nvmf_tcp.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:40.830 15:51:09 nvmf_tcp.nvmf_multitarget -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:40.830 15:51:09 nvmf_tcp.nvmf_multitarget -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:40.830 15:51:09 nvmf_tcp.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:09:40.830 15:51:09 nvmf_tcp.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:40.830 15:51:09 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@47 -- # : 0 00:09:40.830 15:51:09 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:40.830 15:51:09 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:40.830 15:51:09 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:40.830 15:51:09 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:40.830 15:51:09 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:40.830 15:51:09 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:40.830 15:51:09 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:40.830 15:51:09 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:40.830 15:51:09 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:09:40.830 15:51:09 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:09:40.830 15:51:09 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:40.830 15:51:09 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:40.830 15:51:09 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:40.830 15:51:09 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:40.830 15:51:09 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:40.830 15:51:09 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 
00:09:40.830 15:51:09 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:40.830 15:51:09 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:40.830 15:51:09 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:40.830 15:51:09 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:40.830 15:51:09 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@285 -- # xtrace_disable 00:09:40.830 15:51:09 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:09:46.101 15:51:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:46.101 15:51:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@291 -- # pci_devs=() 00:09:46.101 15:51:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:46.101 15:51:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:46.101 15:51:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:46.101 15:51:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:46.101 15:51:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:46.101 15:51:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@295 -- # net_devs=() 00:09:46.101 15:51:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:46.101 15:51:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@296 -- # e810=() 00:09:46.102 15:51:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@296 -- # local -ga e810 00:09:46.102 15:51:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@297 -- # x722=() 00:09:46.102 15:51:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@297 -- # local -ga x722 00:09:46.102 15:51:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@298 -- # mlx=() 00:09:46.102 15:51:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@298 -- # local -ga mlx 00:09:46.102 15:51:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:46.102 15:51:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:46.102 15:51:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:46.102 15:51:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:46.102 15:51:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:46.102 15:51:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:46.102 15:51:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:46.102 15:51:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:46.102 15:51:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:46.102 15:51:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:46.102 15:51:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:46.102 15:51:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:46.102 15:51:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:46.102 15:51:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@327 -- # [[ 
e810 == mlx5 ]] 00:09:46.102 15:51:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:46.102 15:51:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:46.102 15:51:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:46.102 15:51:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:46.102 15:51:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:09:46.102 Found 0000:86:00.0 (0x8086 - 0x159b) 00:09:46.102 15:51:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:46.102 15:51:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:46.102 15:51:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:46.102 15:51:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:46.102 15:51:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:46.102 15:51:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:46.102 15:51:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:09:46.102 Found 0000:86:00.1 (0x8086 - 0x159b) 00:09:46.102 15:51:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:46.102 15:51:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:46.102 15:51:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:46.102 15:51:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:46.102 15:51:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:46.102 15:51:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:46.102 15:51:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:46.102 15:51:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:46.102 15:51:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:46.102 15:51:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:46.102 15:51:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:46.102 15:51:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:46.102 15:51:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:46.102 15:51:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:46.102 15:51:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:46.102 15:51:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:09:46.102 Found net devices under 0000:86:00.0: cvl_0_0 00:09:46.102 15:51:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:46.102 15:51:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:46.102 15:51:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:46.102 15:51:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:46.102 15:51:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 
00:09:46.102 15:51:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:46.102 15:51:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:46.102 15:51:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:46.102 15:51:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:09:46.102 Found net devices under 0000:86:00.1: cvl_0_1 00:09:46.102 15:51:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:46.102 15:51:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:46.102 15:51:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # is_hw=yes 00:09:46.102 15:51:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:46.102 15:51:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:09:46.102 15:51:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:09:46.102 15:51:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:46.102 15:51:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:46.102 15:51:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:46.102 15:51:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:46.102 15:51:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:46.102 15:51:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:46.102 15:51:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:46.102 15:51:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:46.102 15:51:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:46.102 15:51:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:46.102 15:51:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:46.102 15:51:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:46.102 15:51:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:46.102 15:51:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:46.102 15:51:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:46.102 15:51:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:46.102 15:51:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:46.102 15:51:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:46.102 15:51:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:46.102 15:51:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:46.102 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:46.102 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.180 ms 00:09:46.102 00:09:46.102 --- 10.0.0.2 ping statistics --- 00:09:46.102 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:46.102 rtt min/avg/max/mdev = 0.180/0.180/0.180/0.000 ms 00:09:46.102 15:51:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:46.102 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:46.102 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.208 ms 00:09:46.102 00:09:46.102 --- 10.0.0.1 ping statistics --- 00:09:46.102 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:46.102 rtt min/avg/max/mdev = 0.208/0.208/0.208/0.000 ms 00:09:46.102 15:51:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:46.102 15:51:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@422 -- # return 0 00:09:46.102 15:51:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:46.102 15:51:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:46.102 15:51:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:46.102 15:51:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:46.102 15:51:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:46.102 15:51:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:46.102 15:51:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:46.102 15:51:14 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:09:46.102 15:51:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:46.102 15:51:14 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:46.102 15:51:14 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:09:46.102 15:51:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@481 -- # nvmfpid=3635904 00:09:46.102 15:51:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@482 -- # waitforlisten 3635904 00:09:46.102 15:51:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:46.102 15:51:14 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@829 -- # '[' -z 3635904 ']' 00:09:46.102 15:51:14 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:46.102 15:51:14 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:46.102 15:51:14 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:46.102 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:46.102 15:51:14 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:46.102 15:51:14 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:09:46.102 [2024-07-15 15:51:14.815995] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 
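nvmfappstart above launches nvmf_tgt inside the target namespace, records nvmfpid, and waitforlisten then blocks until the app's RPC socket answers. A minimal stand-in for that startup handshake (the real waitforlisten in autotest_common.sh is more thorough; the probe and the roughly 10-second budget below are assumptions):

ip netns exec cvl_0_0_ns_spdk \
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
    -i 0 -e 0xFFFF -m 0xF &
nvmfpid=$!

for ((i = 0; i < 100; i++)); do
    kill -0 "$nvmfpid" 2>/dev/null || { echo "nvmf_tgt exited early" >&2; exit 1; }
    [[ -S /var/tmp/spdk.sock ]] && break    # unix sockets stay visible across netns
    sleep 0.1
done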
00:09:46.102 [2024-07-15 15:51:14.816038] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:46.102 EAL: No free 2048 kB hugepages reported on node 1 00:09:46.102 [2024-07-15 15:51:14.872404] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:46.102 [2024-07-15 15:51:14.955641] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:46.102 [2024-07-15 15:51:14.955677] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:46.102 [2024-07-15 15:51:14.955684] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:46.102 [2024-07-15 15:51:14.955691] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:46.102 [2024-07-15 15:51:14.955705] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:46.102 [2024-07-15 15:51:14.955742] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:46.102 [2024-07-15 15:51:14.955853] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:46.102 [2024-07-15 15:51:14.955937] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:09:46.102 [2024-07-15 15:51:14.955938] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:47.039 15:51:15 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:47.039 15:51:15 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@862 -- # return 0 00:09:47.039 15:51:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:47.039 15:51:15 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:47.039 15:51:15 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:09:47.039 15:51:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:47.039 15:51:15 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:09:47.039 15:51:15 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:09:47.039 15:51:15 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:09:47.039 15:51:15 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:09:47.039 15:51:15 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:09:47.039 "nvmf_tgt_1" 00:09:47.039 15:51:15 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:09:47.039 "nvmf_tgt_2" 00:09:47.298 15:51:15 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:09:47.298 15:51:15 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:09:47.298 15:51:16 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 
'!=' 3 ']' 00:09:47.298 15:51:16 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:09:47.298 true 00:09:47.298 15:51:16 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:09:47.556 true 00:09:47.556 15:51:16 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:09:47.556 15:51:16 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:09:47.556 15:51:16 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:09:47.556 15:51:16 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:09:47.556 15:51:16 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:09:47.556 15:51:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:47.556 15:51:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@117 -- # sync 00:09:47.556 15:51:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:47.556 15:51:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@120 -- # set +e 00:09:47.556 15:51:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:47.556 15:51:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:47.556 rmmod nvme_tcp 00:09:47.556 rmmod nvme_fabrics 00:09:47.556 rmmod nvme_keyring 00:09:47.556 15:51:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:47.556 15:51:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@124 -- # set -e 00:09:47.556 15:51:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@125 -- # return 0 00:09:47.556 15:51:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@489 -- # '[' -n 3635904 ']' 00:09:47.556 15:51:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@490 -- # killprocess 3635904 00:09:47.556 15:51:16 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@948 -- # '[' -z 3635904 ']' 00:09:47.556 15:51:16 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@952 -- # kill -0 3635904 00:09:47.556 15:51:16 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@953 -- # uname 00:09:47.556 15:51:16 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:47.556 15:51:16 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3635904 00:09:47.816 15:51:16 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:09:47.816 15:51:16 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:09:47.816 15:51:16 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3635904' 00:09:47.816 killing process with pid 3635904 00:09:47.816 15:51:16 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@967 -- # kill 3635904 00:09:47.816 15:51:16 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@972 -- # wait 3635904 00:09:47.816 15:51:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:47.816 15:51:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:47.816 15:51:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:47.816 15:51:16 nvmf_tcp.nvmf_multitarget -- 
nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:47.816 15:51:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:47.816 15:51:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:47.816 15:51:16 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:47.816 15:51:16 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:50.346 15:51:18 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:50.346 00:09:50.346 real 0m9.543s 00:09:50.346 user 0m9.244s 00:09:50.346 sys 0m4.526s 00:09:50.346 15:51:18 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:50.346 15:51:18 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:09:50.346 ************************************ 00:09:50.346 END TEST nvmf_multitarget 00:09:50.346 ************************************ 00:09:50.346 15:51:18 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:09:50.346 15:51:18 nvmf_tcp -- nvmf/nvmf.sh@29 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:09:50.346 15:51:18 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:50.346 15:51:18 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:50.346 15:51:18 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:50.346 ************************************ 00:09:50.346 START TEST nvmf_rpc 00:09:50.346 ************************************ 00:09:50.346 15:51:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:09:50.346 * Looking for test storage... 
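For reference, the nvmf_multitarget run that just finished boils down to the RPC sequence below (a sketch; $rpc stands for test/nvmf/target/multitarget_rpc.py from the SPDK tree, and the expected counts and the -s 32 max-subsystems value are taken from the log).

rpc=./test/nvmf/target/multitarget_rpc.py
[ "$($rpc nvmf_get_targets | jq length)" -eq 1 ]   # only the default target exists
$rpc nvmf_create_target -n nvmf_tgt_1 -s 32        # prints the target name on success
$rpc nvmf_create_target -n nvmf_tgt_2 -s 32
[ "$($rpc nvmf_get_targets | jq length)" -eq 3 ]   # default target plus the two new ones
$rpc nvmf_delete_target -n nvmf_tgt_1              # prints "true" on success
$rpc nvmf_delete_target -n nvmf_tgt_2
[ "$($rpc nvmf_get_targets | jq length)" -eq 1 ]   # back to the default target only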
00:09:50.346 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:50.346 15:51:18 nvmf_tcp.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:50.346 15:51:18 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:09:50.346 15:51:18 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:50.346 15:51:18 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:50.346 15:51:18 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:50.346 15:51:18 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:50.346 15:51:18 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:50.346 15:51:18 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:50.346 15:51:18 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:50.346 15:51:18 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:50.346 15:51:18 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:50.346 15:51:18 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:50.346 15:51:18 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:09:50.346 15:51:18 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:09:50.346 15:51:18 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:50.346 15:51:18 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:50.346 15:51:18 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:50.346 15:51:18 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:50.346 15:51:18 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:50.346 15:51:18 nvmf_tcp.nvmf_rpc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:50.346 15:51:18 nvmf_tcp.nvmf_rpc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:50.346 15:51:18 nvmf_tcp.nvmf_rpc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:50.346 15:51:18 nvmf_tcp.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:50.346 15:51:18 nvmf_tcp.nvmf_rpc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:50.346 15:51:18 nvmf_tcp.nvmf_rpc -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:50.346 15:51:18 nvmf_tcp.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:09:50.346 15:51:18 nvmf_tcp.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:50.346 15:51:18 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@47 -- # : 0 00:09:50.346 15:51:18 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:50.346 15:51:18 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:50.346 15:51:18 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:50.346 15:51:18 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:50.346 15:51:18 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:50.346 15:51:18 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:50.346 15:51:18 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:50.346 15:51:18 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:50.346 15:51:18 nvmf_tcp.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:09:50.346 15:51:18 nvmf_tcp.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:09:50.346 15:51:18 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:50.346 15:51:18 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:50.346 15:51:18 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:50.346 15:51:18 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:50.346 15:51:18 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:50.346 15:51:18 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:50.346 15:51:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:50.346 15:51:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:50.346 15:51:18 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:50.346 15:51:18 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:50.346 15:51:18 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@285 -- # xtrace_disable 00:09:50.346 15:51:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:55.649 15:51:23 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:55.649 15:51:23 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@291 -- # pci_devs=() 00:09:55.649 15:51:23 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@291 -- # local -a pci_devs 
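For reference, a minimal sketch of how the common.sh sourced above derives the host identity reused by the later nvme connect calls; the parameter expansion is illustrative, while the NQN/UUID values match the log.

NVME_HOSTNQN=$(nvme gen-hostnqn)   # e.g. nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562
NVME_HOSTID=${NVME_HOSTNQN##*:}    # bare UUID, passed back as --hostid
NVME_HOST=(--hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID")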
00:09:55.649 15:51:23 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:55.649 15:51:23 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:55.649 15:51:23 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:55.649 15:51:23 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:55.649 15:51:23 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@295 -- # net_devs=() 00:09:55.649 15:51:23 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:55.649 15:51:23 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@296 -- # e810=() 00:09:55.649 15:51:23 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@296 -- # local -ga e810 00:09:55.649 15:51:23 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@297 -- # x722=() 00:09:55.649 15:51:23 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@297 -- # local -ga x722 00:09:55.649 15:51:23 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@298 -- # mlx=() 00:09:55.649 15:51:23 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@298 -- # local -ga mlx 00:09:55.649 15:51:23 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:55.649 15:51:23 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:55.649 15:51:23 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:55.649 15:51:23 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:55.649 15:51:23 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:55.649 15:51:23 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:55.649 15:51:23 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:55.649 15:51:23 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:55.649 15:51:23 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:55.649 15:51:23 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:55.649 15:51:23 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:55.649 15:51:23 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:55.649 15:51:23 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:55.649 15:51:23 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:55.649 15:51:23 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:55.649 15:51:23 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:55.649 15:51:23 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:55.649 15:51:23 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:55.649 15:51:23 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:09:55.649 Found 0000:86:00.0 (0x8086 - 0x159b) 00:09:55.649 15:51:23 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:55.649 15:51:23 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:55.649 15:51:23 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:55.649 15:51:23 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:55.649 15:51:23 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:55.649 15:51:23 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:55.649 15:51:23 nvmf_tcp.nvmf_rpc 
-- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:09:55.649 Found 0000:86:00.1 (0x8086 - 0x159b) 00:09:55.649 15:51:23 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:55.649 15:51:23 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:55.649 15:51:23 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:55.649 15:51:23 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:55.649 15:51:23 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:55.649 15:51:23 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:55.649 15:51:23 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:55.649 15:51:23 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:55.649 15:51:23 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:55.649 15:51:23 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:55.649 15:51:23 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:55.649 15:51:23 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:55.649 15:51:23 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:55.649 15:51:23 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:55.649 15:51:23 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:55.649 15:51:23 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:09:55.649 Found net devices under 0000:86:00.0: cvl_0_0 00:09:55.649 15:51:23 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:55.649 15:51:23 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:55.649 15:51:23 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:55.649 15:51:23 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:55.649 15:51:23 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:55.649 15:51:23 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:55.649 15:51:23 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:55.649 15:51:23 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:55.649 15:51:23 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:09:55.649 Found net devices under 0000:86:00.1: cvl_0_1 00:09:55.649 15:51:23 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:55.649 15:51:23 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:55.649 15:51:23 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # is_hw=yes 00:09:55.649 15:51:23 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:55.649 15:51:23 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:09:55.649 15:51:23 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:09:55.649 15:51:23 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:55.649 15:51:23 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:55.649 15:51:23 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:55.649 15:51:23 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:55.649 15:51:23 nvmf_tcp.nvmf_rpc -- 
nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:55.649 15:51:23 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:55.649 15:51:23 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:55.649 15:51:23 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:55.649 15:51:23 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:55.649 15:51:23 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:55.649 15:51:23 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:55.649 15:51:23 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:55.649 15:51:23 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:55.649 15:51:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:55.649 15:51:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:55.649 15:51:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:55.649 15:51:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:55.649 15:51:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:55.649 15:51:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:55.649 15:51:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:55.649 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:55.649 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.278 ms 00:09:55.649 00:09:55.649 --- 10.0.0.2 ping statistics --- 00:09:55.649 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:55.649 rtt min/avg/max/mdev = 0.278/0.278/0.278/0.000 ms 00:09:55.649 15:51:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:55.649 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:55.650 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.235 ms 00:09:55.650 00:09:55.650 --- 10.0.0.1 ping statistics --- 00:09:55.650 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:55.650 rtt min/avg/max/mdev = 0.235/0.235/0.235/0.000 ms 00:09:55.650 15:51:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:55.650 15:51:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@422 -- # return 0 00:09:55.650 15:51:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:55.650 15:51:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:55.650 15:51:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:55.650 15:51:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:55.650 15:51:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:55.650 15:51:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:55.650 15:51:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:55.650 15:51:24 nvmf_tcp.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:09:55.650 15:51:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:55.650 15:51:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:55.650 15:51:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:55.650 15:51:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@481 -- # nvmfpid=3639671 00:09:55.650 15:51:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@482 -- # waitforlisten 3639671 00:09:55.650 15:51:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:55.650 15:51:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@829 -- # '[' -z 3639671 ']' 00:09:55.650 15:51:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:55.650 15:51:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:55.650 15:51:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:55.650 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:55.650 15:51:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:55.650 15:51:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:55.650 [2024-07-15 15:51:24.284121] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 00:09:55.650 [2024-07-15 15:51:24.284168] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:55.650 EAL: No free 2048 kB hugepages reported on node 1 00:09:55.650 [2024-07-15 15:51:24.341163] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:55.650 [2024-07-15 15:51:24.421502] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:55.650 [2024-07-15 15:51:24.421538] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
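For reference, the interface/namespace plumbing traced above, collected in one place as a sketch; cvl_0_0 and cvl_0_1 are the two E810 ports discovered earlier, and each command below appears in the log.

ip -4 addr flush cvl_0_0; ip -4 addr flush cvl_0_1   # start from clean addresses
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target-side port moves into the netns
ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator-side port stays in the root ns
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP traffic
ping -c 1 10.0.0.2                                   # root ns -> namespace
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # namespace -> root ns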
00:09:55.650 [2024-07-15 15:51:24.421545] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:55.650 [2024-07-15 15:51:24.421554] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:55.650 [2024-07-15 15:51:24.421559] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:55.650 [2024-07-15 15:51:24.421601] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:55.650 [2024-07-15 15:51:24.421696] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:55.650 [2024-07-15 15:51:24.421780] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:09:55.650 [2024-07-15 15:51:24.421781] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:56.215 15:51:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:56.215 15:51:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@862 -- # return 0 00:09:56.215 15:51:25 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:56.215 15:51:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:56.215 15:51:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:56.215 15:51:25 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:56.215 15:51:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:09:56.215 15:51:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:56.215 15:51:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:56.474 15:51:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:56.474 15:51:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:09:56.474 "tick_rate": 2300000000, 00:09:56.474 "poll_groups": [ 00:09:56.474 { 00:09:56.474 "name": "nvmf_tgt_poll_group_000", 00:09:56.474 "admin_qpairs": 0, 00:09:56.474 "io_qpairs": 0, 00:09:56.474 "current_admin_qpairs": 0, 00:09:56.474 "current_io_qpairs": 0, 00:09:56.474 "pending_bdev_io": 0, 00:09:56.474 "completed_nvme_io": 0, 00:09:56.474 "transports": [] 00:09:56.474 }, 00:09:56.474 { 00:09:56.474 "name": "nvmf_tgt_poll_group_001", 00:09:56.474 "admin_qpairs": 0, 00:09:56.474 "io_qpairs": 0, 00:09:56.474 "current_admin_qpairs": 0, 00:09:56.474 "current_io_qpairs": 0, 00:09:56.474 "pending_bdev_io": 0, 00:09:56.474 "completed_nvme_io": 0, 00:09:56.474 "transports": [] 00:09:56.474 }, 00:09:56.474 { 00:09:56.474 "name": "nvmf_tgt_poll_group_002", 00:09:56.474 "admin_qpairs": 0, 00:09:56.474 "io_qpairs": 0, 00:09:56.474 "current_admin_qpairs": 0, 00:09:56.474 "current_io_qpairs": 0, 00:09:56.474 "pending_bdev_io": 0, 00:09:56.475 "completed_nvme_io": 0, 00:09:56.475 "transports": [] 00:09:56.475 }, 00:09:56.475 { 00:09:56.475 "name": "nvmf_tgt_poll_group_003", 00:09:56.475 "admin_qpairs": 0, 00:09:56.475 "io_qpairs": 0, 00:09:56.475 "current_admin_qpairs": 0, 00:09:56.475 "current_io_qpairs": 0, 00:09:56.475 "pending_bdev_io": 0, 00:09:56.475 "completed_nvme_io": 0, 00:09:56.475 "transports": [] 00:09:56.475 } 00:09:56.475 ] 00:09:56.475 }' 00:09:56.475 15:51:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:09:56.475 15:51:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:09:56.475 15:51:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:09:56.475 15:51:25 nvmf_tcp.nvmf_rpc -- 
target/rpc.sh@15 -- # wc -l 00:09:56.475 15:51:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:09:56.475 15:51:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:09:56.475 15:51:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:09:56.475 15:51:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:56.475 15:51:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:56.475 15:51:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:56.475 [2024-07-15 15:51:25.248555] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:56.475 15:51:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:56.475 15:51:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:09:56.475 15:51:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:56.475 15:51:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:56.475 15:51:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:56.475 15:51:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:09:56.475 "tick_rate": 2300000000, 00:09:56.475 "poll_groups": [ 00:09:56.475 { 00:09:56.475 "name": "nvmf_tgt_poll_group_000", 00:09:56.475 "admin_qpairs": 0, 00:09:56.475 "io_qpairs": 0, 00:09:56.475 "current_admin_qpairs": 0, 00:09:56.475 "current_io_qpairs": 0, 00:09:56.475 "pending_bdev_io": 0, 00:09:56.475 "completed_nvme_io": 0, 00:09:56.475 "transports": [ 00:09:56.475 { 00:09:56.475 "trtype": "TCP" 00:09:56.475 } 00:09:56.475 ] 00:09:56.475 }, 00:09:56.475 { 00:09:56.475 "name": "nvmf_tgt_poll_group_001", 00:09:56.475 "admin_qpairs": 0, 00:09:56.475 "io_qpairs": 0, 00:09:56.475 "current_admin_qpairs": 0, 00:09:56.475 "current_io_qpairs": 0, 00:09:56.475 "pending_bdev_io": 0, 00:09:56.475 "completed_nvme_io": 0, 00:09:56.475 "transports": [ 00:09:56.475 { 00:09:56.475 "trtype": "TCP" 00:09:56.475 } 00:09:56.475 ] 00:09:56.475 }, 00:09:56.475 { 00:09:56.475 "name": "nvmf_tgt_poll_group_002", 00:09:56.475 "admin_qpairs": 0, 00:09:56.475 "io_qpairs": 0, 00:09:56.475 "current_admin_qpairs": 0, 00:09:56.475 "current_io_qpairs": 0, 00:09:56.475 "pending_bdev_io": 0, 00:09:56.475 "completed_nvme_io": 0, 00:09:56.475 "transports": [ 00:09:56.475 { 00:09:56.475 "trtype": "TCP" 00:09:56.475 } 00:09:56.475 ] 00:09:56.475 }, 00:09:56.475 { 00:09:56.475 "name": "nvmf_tgt_poll_group_003", 00:09:56.475 "admin_qpairs": 0, 00:09:56.475 "io_qpairs": 0, 00:09:56.475 "current_admin_qpairs": 0, 00:09:56.475 "current_io_qpairs": 0, 00:09:56.475 "pending_bdev_io": 0, 00:09:56.475 "completed_nvme_io": 0, 00:09:56.475 "transports": [ 00:09:56.475 { 00:09:56.475 "trtype": "TCP" 00:09:56.475 } 00:09:56.475 ] 00:09:56.475 } 00:09:56.475 ] 00:09:56.475 }' 00:09:56.475 15:51:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:09:56.475 15:51:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:09:56.475 15:51:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:09:56.475 15:51:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:09:56.475 15:51:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:09:56.475 15:51:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:09:56.475 15:51:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 
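For reference, the jcount/jsum helpers being traced here amount to roughly the following jq/awk one-liners (a sketch; feeding the $stats JSON captured above via a here-string is illustrative).

jcount() { jq "$1" <<<"$stats" | wc -l; }                       # count filter matches
jsum()   { jq "$1" <<<"$stats" | awk '{s+=$1} END {print s}'; } # sum numeric matches
(( $(jsum '.poll_groups[].admin_qpairs') == 0 ))   # a fresh target has no admin qpairs
(( $(jsum '.poll_groups[].io_qpairs') == 0 ))      # ...and no I/O qpairs yet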
00:09:56.475 15:51:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:09:56.475 15:51:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:09:56.475 15:51:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:09:56.475 15:51:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:09:56.475 15:51:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:09:56.475 15:51:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:09:56.475 15:51:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:09:56.475 15:51:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:56.475 15:51:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:56.475 Malloc1 00:09:56.475 15:51:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:56.475 15:51:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:56.475 15:51:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:56.475 15:51:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:56.475 15:51:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:56.475 15:51:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:56.475 15:51:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:56.475 15:51:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:56.475 15:51:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:56.475 15:51:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:09:56.475 15:51:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:56.475 15:51:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:56.735 15:51:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:56.735 15:51:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:56.735 15:51:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:56.735 15:51:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:56.735 [2024-07-15 15:51:25.420608] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:56.735 15:51:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:56.735 15:51:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -a 10.0.0.2 -s 4420 00:09:56.735 15:51:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@648 -- # local es=0 00:09:56.735 15:51:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -a 10.0.0.2 -s 4420 00:09:56.735 15:51:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@636 
-- # local arg=nvme 00:09:56.735 15:51:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:56.735 15:51:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # type -t nvme 00:09:56.735 15:51:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:56.735 15:51:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # type -P nvme 00:09:56.735 15:51:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:56.735 15:51:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # arg=/usr/sbin/nvme 00:09:56.735 15:51:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # [[ -x /usr/sbin/nvme ]] 00:09:56.735 15:51:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -a 10.0.0.2 -s 4420 00:09:56.735 [2024-07-15 15:51:25.445151] ctrlr.c: 822:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562' 00:09:56.735 Failed to write to /dev/nvme-fabrics: Input/output error 00:09:56.735 could not add new controller: failed to write to nvme-fabrics device 00:09:56.735 15:51:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # es=1 00:09:56.735 15:51:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:09:56.735 15:51:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:09:56.735 15:51:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:09:56.735 15:51:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:09:56.735 15:51:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:56.735 15:51:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:56.735 15:51:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:56.735 15:51:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:57.672 15:51:26 nvmf_tcp.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:09:57.672 15:51:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:09:57.672 15:51:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:09:57.672 15:51:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:09:57.672 15:51:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:10:00.205 15:51:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:10:00.205 15:51:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:10:00.205 15:51:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:10:00.205 15:51:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:10:00.205 15:51:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:10:00.206 15:51:28 
nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:10:00.206 15:51:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:00.206 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:00.206 15:51:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:00.206 15:51:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:10:00.206 15:51:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:10:00.206 15:51:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:00.206 15:51:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:10:00.206 15:51:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:00.206 15:51:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:10:00.206 15:51:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:10:00.206 15:51:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:00.206 15:51:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:00.206 15:51:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:00.206 15:51:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:00.206 15:51:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@648 -- # local es=0 00:10:00.206 15:51:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:00.206 15:51:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@636 -- # local arg=nvme 00:10:00.206 15:51:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:00.206 15:51:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # type -t nvme 00:10:00.206 15:51:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:00.206 15:51:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # type -P nvme 00:10:00.206 15:51:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:00.206 15:51:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # arg=/usr/sbin/nvme 00:10:00.206 15:51:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # [[ -x /usr/sbin/nvme ]] 00:10:00.206 15:51:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:00.206 [2024-07-15 15:51:28.747554] ctrlr.c: 822:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562' 00:10:00.206 Failed to write to /dev/nvme-fabrics: Input/output error 00:10:00.206 could not add new controller: failed to write to nvme-fabrics device 00:10:00.206 15:51:28 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@651 -- # es=1 00:10:00.206 15:51:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:10:00.206 15:51:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:10:00.206 15:51:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:10:00.206 15:51:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:10:00.206 15:51:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:00.206 15:51:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:00.206 15:51:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:00.206 15:51:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:01.193 15:51:29 nvmf_tcp.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:10:01.193 15:51:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:10:01.193 15:51:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:10:01.193 15:51:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:10:01.193 15:51:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:10:03.095 15:51:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:10:03.095 15:51:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:10:03.095 15:51:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:10:03.095 15:51:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:10:03.095 15:51:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:10:03.095 15:51:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:10:03.095 15:51:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:03.354 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:03.354 15:51:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:03.354 15:51:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:10:03.354 15:51:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:10:03.354 15:51:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:03.354 15:51:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:10:03.354 15:51:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:03.354 15:51:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:10:03.354 15:51:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:03.354 15:51:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:03.354 15:51:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:03.354 15:51:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:03.354 15:51:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:10:03.354 15:51:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:10:03.354 15:51:32 
nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:10:03.354 15:51:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:03.354 15:51:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:03.354 15:51:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:03.354 15:51:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:03.354 15:51:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:03.354 15:51:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:03.354 [2024-07-15 15:51:32.231508] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:03.354 15:51:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:03.354 15:51:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:10:03.354 15:51:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:03.354 15:51:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:03.354 15:51:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:03.354 15:51:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:10:03.354 15:51:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:03.354 15:51:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:03.354 15:51:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:03.354 15:51:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:04.731 15:51:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:10:04.731 15:51:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:10:04.731 15:51:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:10:04.731 15:51:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:10:04.731 15:51:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:10:06.633 15:51:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:10:06.633 15:51:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:10:06.633 15:51:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:10:06.633 15:51:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:10:06.633 15:51:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:10:06.633 15:51:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:10:06.633 15:51:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:06.633 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:06.633 15:51:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:06.633 15:51:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:10:06.633 15:51:35 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:10:06.633 15:51:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:06.633 15:51:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:10:06.633 15:51:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:06.633 15:51:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:10:06.633 15:51:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:06.633 15:51:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:06.633 15:51:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:06.633 15:51:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:06.633 15:51:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:06.633 15:51:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:06.633 15:51:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:06.633 15:51:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:06.633 15:51:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:10:06.633 15:51:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:10:06.633 15:51:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:06.633 15:51:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:06.633 15:51:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:06.633 15:51:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:06.633 15:51:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:06.633 15:51:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:06.633 [2024-07-15 15:51:35.558912] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:06.633 15:51:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:06.633 15:51:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:10:06.633 15:51:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:06.633 15:51:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:06.891 15:51:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:06.891 15:51:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:10:06.891 15:51:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:06.891 15:51:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:06.891 15:51:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:06.891 15:51:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:08.264 15:51:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:10:08.264 15:51:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 
-- # local i=0 00:10:08.264 15:51:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:10:08.264 15:51:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:10:08.264 15:51:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:10:10.163 15:51:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:10:10.163 15:51:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:10:10.163 15:51:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:10:10.163 15:51:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:10:10.163 15:51:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:10:10.163 15:51:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:10:10.163 15:51:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:10.163 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:10.163 15:51:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:10.163 15:51:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:10:10.163 15:51:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:10:10.163 15:51:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:10.163 15:51:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:10:10.163 15:51:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:10.163 15:51:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:10:10.163 15:51:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:10.163 15:51:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:10.163 15:51:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:10.163 15:51:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:10.163 15:51:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:10.163 15:51:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:10.163 15:51:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:10.163 15:51:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:10.163 15:51:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:10:10.163 15:51:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:10:10.163 15:51:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:10.163 15:51:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:10.163 15:51:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:10.163 15:51:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:10.163 15:51:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:10.163 15:51:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:10.163 [2024-07-15 15:51:38.907453] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 
10.0.0.2 port 4420 *** 00:10:10.163 15:51:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:10.163 15:51:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:10:10.163 15:51:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:10.163 15:51:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:10.163 15:51:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:10.163 15:51:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:10:10.163 15:51:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:10.163 15:51:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:10.163 15:51:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:10.163 15:51:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:11.534 15:51:40 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:10:11.534 15:51:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:10:11.534 15:51:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:10:11.534 15:51:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:10:11.534 15:51:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:10:13.432 15:51:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:10:13.432 15:51:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:10:13.432 15:51:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:10:13.432 15:51:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:10:13.432 15:51:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:10:13.432 15:51:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:10:13.432 15:51:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:13.432 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:13.432 15:51:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:13.432 15:51:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:10:13.432 15:51:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:10:13.432 15:51:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:13.432 15:51:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:10:13.432 15:51:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:13.432 15:51:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:10:13.432 15:51:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:13.432 15:51:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:13.432 15:51:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:13.432 15:51:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 
0 == 0 ]] 00:10:13.432 15:51:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:13.432 15:51:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:13.432 15:51:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:13.432 15:51:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:13.432 15:51:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:10:13.432 15:51:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:10:13.432 15:51:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:13.432 15:51:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:13.432 15:51:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:13.432 15:51:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:13.432 15:51:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:13.432 15:51:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:13.432 [2024-07-15 15:51:42.252840] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:13.432 15:51:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:13.432 15:51:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:10:13.432 15:51:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:13.432 15:51:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:13.432 15:51:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:13.432 15:51:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:10:13.432 15:51:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:13.432 15:51:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:13.432 15:51:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:13.432 15:51:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:14.808 15:51:43 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:10:14.808 15:51:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:10:14.808 15:51:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:10:14.808 15:51:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:10:14.808 15:51:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:10:16.714 15:51:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:10:16.714 15:51:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:10:16.714 15:51:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:10:16.714 15:51:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:10:16.714 15:51:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:10:16.714 
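The waitforserial / waitforserial_disconnect pairs repeated through this stretch poll lsblk until a block device whose SERIAL column matches the subsystem serial appears (or, after nvme disconnect, stops matching), sleeping 2 seconds between attempts and giving up after 16 tries. A minimal sketch of that polling idiom, assuming only lsblk and grep (the function name is illustrative; the real helpers live in common/autotest_common.sh):

    # Poll until $2 (default 1) block devices report serial $1; mirrors the
    # "(( i++ <= 15 ))" / "sleep 2" / "grep -c SPDKISFASTANDAWESOME" pattern in the trace.
    wait_for_serial() {
        local serial=$1 expected=${2:-1} i=0 found=0
        while (( i++ <= 15 )); do
            found=$(lsblk -l -o NAME,SERIAL | grep -c "$serial" || true)
            (( found == expected )) && return 0
            sleep 2
        done
        return 1
    }
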
15:51:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:10:16.714 15:51:45 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:16.714 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:16.714 15:51:45 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:16.714 15:51:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:10:16.714 15:51:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:10:16.714 15:51:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:16.714 15:51:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:10:16.714 15:51:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:16.714 15:51:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:10:16.714 15:51:45 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:16.714 15:51:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:16.714 15:51:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:16.714 15:51:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:16.714 15:51:45 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:16.714 15:51:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:16.714 15:51:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:16.714 15:51:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:16.714 15:51:45 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:10:16.714 15:51:45 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:10:16.714 15:51:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:16.714 15:51:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:16.714 15:51:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:16.714 15:51:45 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:16.714 15:51:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:16.714 15:51:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:16.714 [2024-07-15 15:51:45.573325] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:16.714 15:51:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:16.714 15:51:45 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:10:16.714 15:51:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:16.714 15:51:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:16.714 15:51:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:16.714 15:51:45 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:10:16.714 15:51:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:16.714 15:51:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:16.714 15:51:45 
nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:16.714 15:51:45 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:18.093 15:51:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:10:18.093 15:51:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:10:18.093 15:51:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:10:18.093 15:51:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:10:18.093 15:51:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:10:19.999 15:51:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:10:19.999 15:51:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:10:19.999 15:51:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:10:19.999 15:51:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:10:19.999 15:51:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:10:19.999 15:51:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:10:19.999 15:51:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:19.999 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:19.999 15:51:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:19.999 15:51:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:10:19.999 15:51:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:10:19.999 15:51:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:19.999 15:51:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:10:19.999 15:51:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:19.999 15:51:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:10:19.999 15:51:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:19.999 15:51:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:19.999 15:51:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:19.999 15:51:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:19.999 15:51:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:19.999 15:51:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:19.999 15:51:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:19.999 15:51:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:19.999 15:51:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:10:19.999 15:51:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:10:19.999 15:51:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:10:19.999 15:51:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:20.000 15:51:48 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:10:20.000 15:51:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:20.000 15:51:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:20.000 15:51:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:20.000 15:51:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:20.000 [2024-07-15 15:51:48.925216] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:20.000 15:51:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:20.000 15:51:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:20.000 15:51:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:20.000 15:51:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:20.259 15:51:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:20.259 15:51:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:10:20.259 15:51:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:20.259 15:51:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:20.260 15:51:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:20.260 15:51:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:20.260 15:51:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:20.260 15:51:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:20.260 15:51:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:20.260 15:51:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:20.260 15:51:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:20.260 15:51:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:20.260 15:51:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:20.260 15:51:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:10:20.260 15:51:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:10:20.260 15:51:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:20.260 15:51:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:20.260 15:51:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:20.260 15:51:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:20.260 15:51:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:20.260 15:51:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:20.260 [2024-07-15 15:51:48.973305] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:20.260 15:51:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:20.260 15:51:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:20.260 15:51:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:10:20.260 15:51:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:20.260 15:51:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:20.260 15:51:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:10:20.260 15:51:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:20.260 15:51:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:20.260 15:51:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:20.260 15:51:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:20.260 15:51:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:20.260 15:51:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:20.260 15:51:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:20.260 15:51:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:20.260 15:51:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:20.260 15:51:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:20.260 15:51:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:20.260 15:51:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:10:20.260 15:51:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:10:20.260 15:51:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:20.260 15:51:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:20.260 15:51:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:20.260 15:51:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:20.260 15:51:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:20.260 15:51:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:20.260 [2024-07-15 15:51:49.025456] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:20.260 15:51:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:20.260 15:51:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:20.260 15:51:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:20.260 15:51:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:20.260 15:51:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:20.260 15:51:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:10:20.260 15:51:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:20.260 15:51:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:20.260 15:51:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:20.260 15:51:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:20.260 15:51:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:20.260 15:51:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 
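Each pass of the seq 1 5 loop running here (target/rpc.sh@99-107 in the trace) exercises the subsystem lifecycle RPCs back to back without ever connecting a host: create the subsystem, add a TCP listener, attach the Malloc1 namespace, open it to any host, then strip the namespace and delete the subsystem. Condensed into direct rpc.py invocations, with every verb, NQN, and address copied from the log (rpc_cmd in the trace dispatches to the same RPC surface):

    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    scripts/rpc.py nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
    scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
    scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
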
00:10:20.260 15:51:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:20.260 15:51:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:20.260 15:51:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:20.260 15:51:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:20.260 15:51:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:20.260 15:51:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:10:20.260 15:51:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:10:20.260 15:51:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:20.260 15:51:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:20.260 15:51:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:20.260 15:51:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:20.260 15:51:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:20.260 15:51:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:20.260 [2024-07-15 15:51:49.073625] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:20.260 15:51:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:20.260 15:51:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:20.260 15:51:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:20.260 15:51:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:20.260 15:51:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:20.260 15:51:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:10:20.260 15:51:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:20.260 15:51:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:20.260 15:51:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:20.260 15:51:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:20.260 15:51:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:20.260 15:51:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:20.260 15:51:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:20.260 15:51:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:20.260 15:51:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:20.260 15:51:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:20.260 15:51:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:20.260 15:51:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:10:20.260 15:51:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:10:20.260 15:51:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:20.260 15:51:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 
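Once that loop drains, the trace below captures per-poll-group counters with nvmf_get_stats and validates them with jsum, which sums one jq path across all poll groups. Equivalent shell, with the filter strings taken verbatim from target/rpc.sh@112-113 (the trace feeds the captured $stats variable rather than re-querying):

    stats=$(scripts/rpc.py nvmf_get_stats)
    echo "$stats" | jq '.poll_groups[].admin_qpairs' | awk '{s+=$1} END {print s}'  # 2+2+1+2 = 7
    echo "$stats" | jq '.poll_groups[].io_qpairs'    | awk '{s+=$1} END {print s}'  # 4*168 = 672

Both sums only need to be positive to satisfy the (( 7 > 0 )) and (( 672 > 0 )) checks that follow.
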
00:10:20.260 15:51:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:20.260 15:51:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:20.260 15:51:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:20.260 15:51:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:20.260 [2024-07-15 15:51:49.121779] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:20.260 15:51:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:20.260 15:51:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:20.260 15:51:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:20.260 15:51:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:20.260 15:51:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:20.260 15:51:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:10:20.260 15:51:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:20.260 15:51:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:20.260 15:51:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:20.260 15:51:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:20.260 15:51:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:20.260 15:51:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:20.260 15:51:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:20.260 15:51:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:20.260 15:51:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:20.260 15:51:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:20.260 15:51:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:20.260 15:51:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:10:20.260 15:51:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:20.260 15:51:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:20.260 15:51:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:20.260 15:51:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:10:20.260 "tick_rate": 2300000000, 00:10:20.260 "poll_groups": [ 00:10:20.260 { 00:10:20.260 "name": "nvmf_tgt_poll_group_000", 00:10:20.260 "admin_qpairs": 2, 00:10:20.260 "io_qpairs": 168, 00:10:20.260 "current_admin_qpairs": 0, 00:10:20.260 "current_io_qpairs": 0, 00:10:20.260 "pending_bdev_io": 0, 00:10:20.260 "completed_nvme_io": 316, 00:10:20.260 "transports": [ 00:10:20.260 { 00:10:20.260 "trtype": "TCP" 00:10:20.260 } 00:10:20.260 ] 00:10:20.260 }, 00:10:20.260 { 00:10:20.260 "name": "nvmf_tgt_poll_group_001", 00:10:20.260 "admin_qpairs": 2, 00:10:20.260 "io_qpairs": 168, 00:10:20.260 "current_admin_qpairs": 0, 00:10:20.260 "current_io_qpairs": 0, 00:10:20.260 "pending_bdev_io": 0, 00:10:20.260 "completed_nvme_io": 220, 00:10:20.260 "transports": [ 00:10:20.260 { 00:10:20.260 "trtype": "TCP" 00:10:20.260 } 00:10:20.260 ] 00:10:20.260 }, 00:10:20.260 { 
00:10:20.260 "name": "nvmf_tgt_poll_group_002", 00:10:20.260 "admin_qpairs": 1, 00:10:20.260 "io_qpairs": 168, 00:10:20.260 "current_admin_qpairs": 0, 00:10:20.260 "current_io_qpairs": 0, 00:10:20.260 "pending_bdev_io": 0, 00:10:20.261 "completed_nvme_io": 267, 00:10:20.261 "transports": [ 00:10:20.261 { 00:10:20.261 "trtype": "TCP" 00:10:20.261 } 00:10:20.261 ] 00:10:20.261 }, 00:10:20.261 { 00:10:20.261 "name": "nvmf_tgt_poll_group_003", 00:10:20.261 "admin_qpairs": 2, 00:10:20.261 "io_qpairs": 168, 00:10:20.261 "current_admin_qpairs": 0, 00:10:20.261 "current_io_qpairs": 0, 00:10:20.261 "pending_bdev_io": 0, 00:10:20.261 "completed_nvme_io": 219, 00:10:20.261 "transports": [ 00:10:20.261 { 00:10:20.261 "trtype": "TCP" 00:10:20.261 } 00:10:20.261 ] 00:10:20.261 } 00:10:20.261 ] 00:10:20.261 }' 00:10:20.261 15:51:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:10:20.261 15:51:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:10:20.261 15:51:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:10:20.261 15:51:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:10:20.520 15:51:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:10:20.520 15:51:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:10:20.520 15:51:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:10:20.520 15:51:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:10:20.520 15:51:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:10:20.520 15:51:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@113 -- # (( 672 > 0 )) 00:10:20.520 15:51:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:10:20.520 15:51:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:10:20.520 15:51:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:10:20.520 15:51:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:20.520 15:51:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@117 -- # sync 00:10:20.520 15:51:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:20.520 15:51:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@120 -- # set +e 00:10:20.520 15:51:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:20.520 15:51:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:20.520 rmmod nvme_tcp 00:10:20.520 rmmod nvme_fabrics 00:10:20.520 rmmod nvme_keyring 00:10:20.520 15:51:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:20.520 15:51:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@124 -- # set -e 00:10:20.520 15:51:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@125 -- # return 0 00:10:20.520 15:51:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@489 -- # '[' -n 3639671 ']' 00:10:20.520 15:51:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@490 -- # killprocess 3639671 00:10:20.520 15:51:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@948 -- # '[' -z 3639671 ']' 00:10:20.520 15:51:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@952 -- # kill -0 3639671 00:10:20.520 15:51:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@953 -- # uname 00:10:20.520 15:51:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:20.520 15:51:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3639671 00:10:20.520 15:51:49 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@954 -- # process_name=reactor_0 00:10:20.520 15:51:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:10:20.520 15:51:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3639671' 00:10:20.520 killing process with pid 3639671 00:10:20.520 15:51:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@967 -- # kill 3639671 00:10:20.520 15:51:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@972 -- # wait 3639671 00:10:20.780 15:51:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:20.780 15:51:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:20.780 15:51:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:20.780 15:51:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:20.780 15:51:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:20.780 15:51:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:20.780 15:51:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:20.780 15:51:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:23.316 15:51:51 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:10:23.316 00:10:23.316 real 0m32.789s 00:10:23.316 user 1m41.785s 00:10:23.316 sys 0m5.778s 00:10:23.316 15:51:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:23.316 15:51:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:23.316 ************************************ 00:10:23.316 END TEST nvmf_rpc 00:10:23.316 ************************************ 00:10:23.316 15:51:51 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:10:23.316 15:51:51 nvmf_tcp -- nvmf/nvmf.sh@30 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:10:23.316 15:51:51 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:10:23.316 15:51:51 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:23.316 15:51:51 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:23.316 ************************************ 00:10:23.316 START TEST nvmf_invalid 00:10:23.316 ************************************ 00:10:23.316 15:51:51 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:10:23.316 * Looking for test storage... 
00:10:23.316 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:23.316 15:51:51 nvmf_tcp.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:23.316 15:51:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:10:23.316 15:51:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:23.316 15:51:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:23.316 15:51:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:23.316 15:51:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:23.316 15:51:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:23.316 15:51:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:23.316 15:51:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:23.316 15:51:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:23.316 15:51:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:23.316 15:51:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:23.316 15:51:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:10:23.316 15:51:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:10:23.316 15:51:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:23.316 15:51:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:23.316 15:51:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:23.316 15:51:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:23.316 15:51:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:23.316 15:51:51 nvmf_tcp.nvmf_invalid -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:23.317 15:51:51 nvmf_tcp.nvmf_invalid -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:23.317 15:51:51 nvmf_tcp.nvmf_invalid -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:23.317 15:51:51 nvmf_tcp.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:23.317 15:51:51 nvmf_tcp.nvmf_invalid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:23.317 15:51:51 nvmf_tcp.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:23.317 15:51:51 nvmf_tcp.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:10:23.317 15:51:51 nvmf_tcp.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:23.317 15:51:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@47 -- # : 0 00:10:23.317 15:51:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:23.317 15:51:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:23.317 15:51:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:23.317 15:51:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:23.317 15:51:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:23.317 15:51:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:23.317 15:51:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:23.317 15:51:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:23.317 15:51:51 nvmf_tcp.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:10:23.317 15:51:51 nvmf_tcp.nvmf_invalid -- target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:23.317 15:51:51 nvmf_tcp.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:10:23.317 15:51:51 nvmf_tcp.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:10:23.317 15:51:51 nvmf_tcp.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:10:23.317 15:51:51 nvmf_tcp.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:10:23.317 15:51:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:23.317 15:51:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:23.317 15:51:51 nvmf_tcp.nvmf_invalid 
-- nvmf/common.sh@448 -- # prepare_net_devs 00:10:23.317 15:51:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:23.317 15:51:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:23.317 15:51:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:23.317 15:51:51 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:23.317 15:51:51 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:23.317 15:51:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:10:23.317 15:51:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:10:23.317 15:51:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@285 -- # xtrace_disable 00:10:23.317 15:51:51 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:10:28.614 15:51:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:28.614 15:51:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@291 -- # pci_devs=() 00:10:28.614 15:51:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@291 -- # local -a pci_devs 00:10:28.614 15:51:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:28.614 15:51:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:28.614 15:51:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:28.614 15:51:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:28.614 15:51:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@295 -- # net_devs=() 00:10:28.614 15:51:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:28.614 15:51:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@296 -- # e810=() 00:10:28.614 15:51:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@296 -- # local -ga e810 00:10:28.614 15:51:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@297 -- # x722=() 00:10:28.614 15:51:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@297 -- # local -ga x722 00:10:28.614 15:51:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@298 -- # mlx=() 00:10:28.614 15:51:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@298 -- # local -ga mlx 00:10:28.614 15:51:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:28.614 15:51:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:28.614 15:51:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:28.614 15:51:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:28.614 15:51:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:28.614 15:51:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:28.614 15:51:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:28.614 15:51:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:28.614 15:51:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:28.614 15:51:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:28.614 15:51:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:28.614 15:51:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@320 -- # 
pci_devs+=("${e810[@]}") 00:10:28.614 15:51:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:10:28.614 15:51:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:10:28.614 15:51:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:10:28.614 15:51:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:10:28.614 15:51:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:28.614 15:51:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:28.615 15:51:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:10:28.615 Found 0000:86:00.0 (0x8086 - 0x159b) 00:10:28.615 15:51:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:28.615 15:51:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:28.615 15:51:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:28.615 15:51:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:28.615 15:51:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:28.615 15:51:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:28.615 15:51:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:10:28.615 Found 0000:86:00.1 (0x8086 - 0x159b) 00:10:28.615 15:51:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:28.615 15:51:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:28.615 15:51:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:28.615 15:51:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:28.615 15:51:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:28.615 15:51:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:28.615 15:51:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:10:28.615 15:51:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:10:28.615 15:51:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:28.615 15:51:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:28.615 15:51:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:28.615 15:51:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:28.615 15:51:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:28.615 15:51:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:28.615 15:51:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:28.615 15:51:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:10:28.615 Found net devices under 0000:86:00.0: cvl_0_0 00:10:28.615 15:51:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:28.615 15:51:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:28.615 15:51:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:28.615 15:51:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:28.615 15:51:57 nvmf_tcp.nvmf_invalid -- 
nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:28.615 15:51:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:28.615 15:51:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:28.615 15:51:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:28.615 15:51:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:10:28.615 Found net devices under 0000:86:00.1: cvl_0_1 00:10:28.615 15:51:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:28.615 15:51:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:10:28.615 15:51:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # is_hw=yes 00:10:28.615 15:51:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:10:28.615 15:51:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:10:28.615 15:51:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:10:28.615 15:51:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:28.615 15:51:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:28.615 15:51:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:28.615 15:51:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:10:28.615 15:51:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:28.615 15:51:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:28.615 15:51:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:10:28.615 15:51:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:28.615 15:51:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:28.615 15:51:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:10:28.615 15:51:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:10:28.615 15:51:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:10:28.615 15:51:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:28.615 15:51:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:28.615 15:51:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:28.615 15:51:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:10:28.615 15:51:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:28.615 15:51:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:28.615 15:51:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:28.615 15:51:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:10:28.615 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:10:28.615 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.164 ms 00:10:28.615 00:10:28.615 --- 10.0.0.2 ping statistics --- 00:10:28.615 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:28.615 rtt min/avg/max/mdev = 0.164/0.164/0.164/0.000 ms 00:10:28.615 15:51:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:28.615 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:28.615 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.208 ms 00:10:28.615 00:10:28.615 --- 10.0.0.1 ping statistics --- 00:10:28.615 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:28.615 rtt min/avg/max/mdev = 0.208/0.208/0.208/0.000 ms 00:10:28.615 15:51:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:28.615 15:51:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@422 -- # return 0 00:10:28.615 15:51:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:28.615 15:51:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:28.615 15:51:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:28.615 15:51:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:28.615 15:51:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:28.615 15:51:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:28.615 15:51:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:28.615 15:51:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:10:28.615 15:51:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:28.615 15:51:57 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@722 -- # xtrace_disable 00:10:28.615 15:51:57 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:10:28.615 15:51:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@481 -- # nvmfpid=3647501 00:10:28.615 15:51:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@482 -- # waitforlisten 3647501 00:10:28.615 15:51:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:28.615 15:51:57 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@829 -- # '[' -z 3647501 ']' 00:10:28.615 15:51:57 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:28.615 15:51:57 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:28.615 15:51:57 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:28.615 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:28.615 15:51:57 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:28.615 15:51:57 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:10:28.615 [2024-07-15 15:51:57.458098] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 
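From here the nvmf_invalid cases are pure negative testing: target/invalid.sh hands nvmf_create_subsystem deliberately bad input (an unknown target name, then a serial number and a model number each carrying a non-printable 0x1f byte) and pattern-matches the JSON-RPC error text that comes back. The first case, condensed from the trace below; the 2>&1 redirect is an assumption about where rpc.py emits the error, and the echo is illustrative:

    # expect failure: no target named "foobar" exists on this app instance
    out=$(scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode3786 2>&1) || true
    [[ $out == *"Unable to find target"* ]] && echo 'negative test passed'
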
00:10:28.615 [2024-07-15 15:51:57.458141] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:28.615 EAL: No free 2048 kB hugepages reported on node 1 00:10:28.615 [2024-07-15 15:51:57.515984] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:28.875 [2024-07-15 15:51:57.590733] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:28.875 [2024-07-15 15:51:57.590776] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:28.875 [2024-07-15 15:51:57.590782] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:28.875 [2024-07-15 15:51:57.590789] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:28.875 [2024-07-15 15:51:57.590794] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:28.875 [2024-07-15 15:51:57.590839] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:28.875 [2024-07-15 15:51:57.590935] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:10:28.875 [2024-07-15 15:51:57.591025] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:10:28.875 [2024-07-15 15:51:57.591027] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:29.443 15:51:58 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:29.443 15:51:58 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@862 -- # return 0 00:10:29.443 15:51:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:29.443 15:51:58 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@728 -- # xtrace_disable 00:10:29.443 15:51:58 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:10:29.443 15:51:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:29.443 15:51:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:10:29.443 15:51:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode3786 00:10:29.701 [2024-07-15 15:51:58.457632] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:10:29.701 15:51:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@40 -- # out='request: 00:10:29.701 { 00:10:29.701 "nqn": "nqn.2016-06.io.spdk:cnode3786", 00:10:29.701 "tgt_name": "foobar", 00:10:29.701 "method": "nvmf_create_subsystem", 00:10:29.701 "req_id": 1 00:10:29.701 } 00:10:29.701 Got JSON-RPC error response 00:10:29.701 response: 00:10:29.701 { 00:10:29.701 "code": -32603, 00:10:29.701 "message": "Unable to find target foobar" 00:10:29.701 }' 00:10:29.701 15:51:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:10:29.701 { 00:10:29.701 "nqn": "nqn.2016-06.io.spdk:cnode3786", 00:10:29.701 "tgt_name": "foobar", 00:10:29.701 "method": "nvmf_create_subsystem", 00:10:29.701 "req_id": 1 00:10:29.701 } 00:10:29.701 Got JSON-RPC error response 00:10:29.701 response: 00:10:29.701 { 00:10:29.701 "code": -32603, 00:10:29.701 "message": "Unable to find target foobar" 00:10:29.701 } 
== *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:10:29.701 15:51:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:10:29.701 15:51:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode30933 00:10:29.960 [2024-07-15 15:51:58.638282] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode30933: invalid serial number 'SPDKISFASTANDAWESOME' 00:10:29.960 15:51:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:10:29.960 { 00:10:29.960 "nqn": "nqn.2016-06.io.spdk:cnode30933", 00:10:29.960 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:10:29.960 "method": "nvmf_create_subsystem", 00:10:29.960 "req_id": 1 00:10:29.960 } 00:10:29.960 Got JSON-RPC error response 00:10:29.960 response: 00:10:29.960 { 00:10:29.960 "code": -32602, 00:10:29.960 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:10:29.960 }' 00:10:29.960 15:51:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@46 -- # [[ request: 00:10:29.960 { 00:10:29.960 "nqn": "nqn.2016-06.io.spdk:cnode30933", 00:10:29.960 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:10:29.960 "method": "nvmf_create_subsystem", 00:10:29.960 "req_id": 1 00:10:29.960 } 00:10:29.960 Got JSON-RPC error response 00:10:29.960 response: 00:10:29.960 { 00:10:29.960 "code": -32602, 00:10:29.960 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:10:29.960 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:10:29.960 15:51:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:10:29.960 15:51:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode23725 00:10:29.960 [2024-07-15 15:51:58.818837] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode23725: invalid model number 'SPDK_Controller' 00:10:29.960 15:51:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:10:29.960 { 00:10:29.960 "nqn": "nqn.2016-06.io.spdk:cnode23725", 00:10:29.960 "model_number": "SPDK_Controller\u001f", 00:10:29.960 "method": "nvmf_create_subsystem", 00:10:29.960 "req_id": 1 00:10:29.960 } 00:10:29.960 Got JSON-RPC error response 00:10:29.960 response: 00:10:29.960 { 00:10:29.960 "code": -32602, 00:10:29.960 "message": "Invalid MN SPDK_Controller\u001f" 00:10:29.960 }' 00:10:29.960 15:51:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:10:29.960 { 00:10:29.960 "nqn": "nqn.2016-06.io.spdk:cnode23725", 00:10:29.960 "model_number": "SPDK_Controller\u001f", 00:10:29.960 "method": "nvmf_create_subsystem", 00:10:29.960 "req_id": 1 00:10:29.960 } 00:10:29.960 Got JSON-RPC error response 00:10:29.960 response: 00:10:29.960 { 00:10:29.960 "code": -32602, 00:10:29.960 "message": "Invalid MN SPDK_Controller\u001f" 00:10:29.960 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:10:29.960 15:51:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:10:29.960 15:51:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll 00:10:29.960 15:51:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' 
'86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:10:29.960 15:51:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:10:29.960 15:51:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:10:29.960 15:51:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:10:29.960 15:51:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:29.960 15:51:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 95 00:10:29.960 15:51:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5f' 00:10:29.960 15:51:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=_ 00:10:29.960 15:51:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:29.960 15:51:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:29.960 15:51:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 68 00:10:29.960 15:51:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x44' 00:10:29.960 15:51:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=D 00:10:29.960 15:51:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:29.960 15:51:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:29.960 15:51:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 114 00:10:29.960 15:51:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x72' 00:10:29.960 15:51:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=r 00:10:29.960 15:51:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:29.960 15:51:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:29.960 15:51:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 91 00:10:29.960 15:51:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5b' 00:10:29.960 15:51:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='[' 00:10:29.960 15:51:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:29.960 15:51:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:29.960 15:51:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 35 00:10:29.960 15:51:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x23' 00:10:29.960 15:51:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='#' 00:10:29.960 15:51:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:29.960 15:51:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:29.960 15:51:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 54 00:10:29.960 15:51:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x36' 00:10:29.960 15:51:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=6 00:10:29.960 15:51:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:29.960 15:51:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:30.218 15:51:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 44 00:10:30.218 15:51:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2c' 00:10:30.218 15:51:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=, 00:10:30.218 15:51:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:30.218 15:51:58 
nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:30.218 15:51:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 76 00:10:30.218 15:51:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4c' 00:10:30.218 15:51:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=L 00:10:30.218 15:51:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:30.218 15:51:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:30.218 15:51:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 89 00:10:30.218 15:51:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x59' 00:10:30.218 15:51:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=Y 00:10:30.218 15:51:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:30.218 15:51:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:30.218 15:51:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 109 00:10:30.218 15:51:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6d' 00:10:30.218 15:51:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=m 00:10:30.218 15:51:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:30.218 15:51:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:30.218 15:51:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 87 00:10:30.218 15:51:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x57' 00:10:30.218 15:51:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=W 00:10:30.218 15:51:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:30.218 15:51:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:30.218 15:51:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 52 00:10:30.218 15:51:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x34' 00:10:30.218 15:51:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=4 00:10:30.218 15:51:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:30.218 15:51:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:30.218 15:51:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 123 00:10:30.218 15:51:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7b' 00:10:30.218 15:51:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='{' 00:10:30.218 15:51:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:30.218 15:51:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:30.218 15:51:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 92 00:10:30.218 15:51:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5c' 00:10:30.218 15:51:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='\' 00:10:30.218 15:51:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:30.218 15:51:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:30.218 15:51:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 119 00:10:30.218 15:51:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x77' 00:10:30.218 15:51:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=w 00:10:30.218 15:51:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:30.218 15:51:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:30.218 15:51:58 
nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 59 00:10:30.218 15:51:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3b' 00:10:30.218 15:51:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=';' 00:10:30.218 15:51:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:30.218 15:51:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:30.218 15:51:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 51 00:10:30.218 15:51:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x33' 00:10:30.218 15:51:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=3 00:10:30.218 15:51:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:30.219 15:51:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:30.219 15:51:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 95 00:10:30.219 15:51:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5f' 00:10:30.219 15:51:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=_ 00:10:30.219 15:51:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:30.219 15:51:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:30.219 15:51:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 42 00:10:30.219 15:51:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2a' 00:10:30.219 15:51:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='*' 00:10:30.219 15:51:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:30.219 15:51:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:30.219 15:51:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 72 00:10:30.219 15:51:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x48' 00:10:30.219 15:51:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=H 00:10:30.219 15:51:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:30.219 15:51:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:30.219 15:51:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 85 00:10:30.219 15:51:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x55' 00:10:30.219 15:51:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=U 00:10:30.219 15:51:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:30.219 15:51:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:30.219 15:51:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@28 -- # [[ _ == \- ]] 00:10:30.219 15:51:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@31 -- # echo '_Dr[#6,LYmW4{\w;3_*HU' 00:10:30.219 15:51:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s '_Dr[#6,LYmW4{\w;3_*HU' nqn.2016-06.io.spdk:cnode12193 00:10:30.219 [2024-07-15 15:51:59.139919] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode12193: invalid serial number '_Dr[#6,LYmW4{\w;3_*HU' 00:10:30.478 15:51:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # out='request: 00:10:30.478 { 00:10:30.478 "nqn": "nqn.2016-06.io.spdk:cnode12193", 00:10:30.478 "serial_number": "_Dr[#6,LYmW4{\\w;3_*HU", 00:10:30.478 "method": "nvmf_create_subsystem", 00:10:30.478 "req_id": 1 00:10:30.478 } 00:10:30.478 Got JSON-RPC error response 00:10:30.478 response: 00:10:30.478 { 00:10:30.478 
"code": -32602, 00:10:30.478 "message": "Invalid SN _Dr[#6,LYmW4{\\w;3_*HU" 00:10:30.478 }' 00:10:30.478 15:51:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@55 -- # [[ request: 00:10:30.478 { 00:10:30.478 "nqn": "nqn.2016-06.io.spdk:cnode12193", 00:10:30.478 "serial_number": "_Dr[#6,LYmW4{\\w;3_*HU", 00:10:30.478 "method": "nvmf_create_subsystem", 00:10:30.478 "req_id": 1 00:10:30.478 } 00:10:30.478 Got JSON-RPC error response 00:10:30.478 response: 00:10:30.478 { 00:10:30.478 "code": -32602, 00:10:30.478 "message": "Invalid SN _Dr[#6,LYmW4{\\w;3_*HU" 00:10:30.478 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:10:30.478 15:51:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:10:30.478 15:51:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:10:30.478 15:51:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:10:30.478 15:51:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:10:30.478 15:51:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:10:30.478 15:51:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:10:30.478 15:51:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:30.478 15:51:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 112 00:10:30.478 15:51:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x70' 00:10:30.478 15:51:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=p 00:10:30.478 15:51:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:30.478 15:51:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:30.478 15:51:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 36 00:10:30.478 15:51:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x24' 00:10:30.478 15:51:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='$' 00:10:30.478 15:51:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:30.478 15:51:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:30.478 15:51:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 63 00:10:30.478 15:51:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3f' 00:10:30.478 15:51:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='?' 
00:10:30.478 15:51:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:30.478 15:51:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:30.478 15:51:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 94 00:10:30.478 15:51:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5e' 00:10:30.478 15:51:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='^' 00:10:30.478 15:51:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:30.478 15:51:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:30.478 15:51:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 65 00:10:30.478 15:51:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x41' 00:10:30.478 15:51:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=A 00:10:30.478 15:51:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:30.478 15:51:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:30.478 15:51:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 63 00:10:30.478 15:51:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3f' 00:10:30.478 15:51:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='?' 00:10:30.478 15:51:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:30.478 15:51:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:30.478 15:51:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 57 00:10:30.478 15:51:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x39' 00:10:30.478 15:51:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=9 00:10:30.478 15:51:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:30.478 15:51:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:30.478 15:51:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 49 00:10:30.478 15:51:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x31' 00:10:30.478 15:51:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=1 00:10:30.478 15:51:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:30.478 15:51:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:30.478 15:51:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 81 00:10:30.478 15:51:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x51' 00:10:30.478 15:51:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=Q 00:10:30.478 15:51:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:30.478 15:51:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:30.478 15:51:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 44 00:10:30.478 15:51:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2c' 00:10:30.478 15:51:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=, 00:10:30.478 15:51:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:30.478 15:51:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:30.478 15:51:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 115 00:10:30.478 15:51:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x73' 00:10:30.478 15:51:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=s 00:10:30.478 15:51:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 
00:10:30.478 15:51:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:30.478 15:51:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 67 00:10:30.478 15:51:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x43' 00:10:30.478 15:51:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=C 00:10:30.478 15:51:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:30.478 15:51:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:30.478 15:51:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 94 00:10:30.478 15:51:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5e' 00:10:30.478 15:51:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='^' 00:10:30.478 15:51:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:30.478 15:51:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:30.478 15:51:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 48 00:10:30.479 15:51:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x30' 00:10:30.479 15:51:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=0 00:10:30.479 15:51:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:30.479 15:51:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:30.479 15:51:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 47 00:10:30.479 15:51:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2f' 00:10:30.479 15:51:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=/ 00:10:30.479 15:51:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:30.479 15:51:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:30.479 15:51:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 106 00:10:30.479 15:51:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6a' 00:10:30.479 15:51:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=j 00:10:30.479 15:51:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:30.479 15:51:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:30.479 15:51:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 95 00:10:30.479 15:51:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5f' 00:10:30.479 15:51:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=_ 00:10:30.479 15:51:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:30.479 15:51:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:30.479 15:51:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 105 00:10:30.479 15:51:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x69' 00:10:30.479 15:51:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=i 00:10:30.479 15:51:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:30.479 15:51:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:30.479 15:51:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 50 00:10:30.479 15:51:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x32' 00:10:30.479 15:51:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=2 00:10:30.479 15:51:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:30.479 15:51:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 
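A note on the assertion style visible throughout this test: the runs of backslash-escaped characters in the trace (for example *\I\n\v\a\l\i\d\ \S\N* a few entries up) are simply how bash xtrace prints a quoted glob pattern, and the script is performing an ordinary [[ substring match against the captured JSON-RPC error text. A hedged sketch of that idiom, with the serial-number variable and NQN as placeholders and the non-zero-exit handling assumed:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    # rpc.py is expected to fail here; tolerate its non-zero exit (assumption)
    out=$("$rpc" nvmf_create_subsystem -s "$bad_sn" nqn.2016-06.io.spdk:cnode1 2>&1) || true
    [[ $out == *"Invalid SN"* ]]   # xtrace renders this pattern as *\I\n\v\a\l\i\d\ \S\N*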
00:10:30.479 15:51:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 45 00:10:30.479 15:51:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2d' 00:10:30.479 15:51:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=- 00:10:30.479 15:51:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:30.479 15:51:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:30.479 15:51:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 111 00:10:30.479 15:51:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6f' 00:10:30.479 15:51:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=o 00:10:30.479 15:51:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:30.479 15:51:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:30.479 15:51:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 70 00:10:30.479 15:51:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x46' 00:10:30.479 15:51:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=F 00:10:30.479 15:51:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:30.479 15:51:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:30.479 15:51:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 78 00:10:30.479 15:51:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4e' 00:10:30.479 15:51:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=N 00:10:30.479 15:51:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:30.479 15:51:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:30.479 15:51:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 95 00:10:30.479 15:51:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5f' 00:10:30.479 15:51:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=_ 00:10:30.479 15:51:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:30.479 15:51:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:30.479 15:51:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 75 00:10:30.479 15:51:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4b' 00:10:30.479 15:51:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=K 00:10:30.479 15:51:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:30.479 15:51:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:30.479 15:51:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 112 00:10:30.479 15:51:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x70' 00:10:30.479 15:51:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=p 00:10:30.479 15:51:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:30.479 15:51:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:30.479 15:51:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 40 00:10:30.479 15:51:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x28' 00:10:30.479 15:51:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='(' 00:10:30.479 15:51:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:30.479 15:51:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:30.479 15:51:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 38 
00:10:30.479 15:51:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x26' 00:10:30.479 15:51:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='&' 00:10:30.479 15:51:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:30.479 15:51:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:30.479 15:51:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 58 00:10:30.479 15:51:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3a' 00:10:30.479 15:51:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=: 00:10:30.479 15:51:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:30.479 15:51:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:30.479 15:51:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 32 00:10:30.479 15:51:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x20' 00:10:30.479 15:51:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=' ' 00:10:30.479 15:51:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:30.479 15:51:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:30.479 15:51:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 43 00:10:30.479 15:51:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2b' 00:10:30.479 15:51:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=+ 00:10:30.479 15:51:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:30.479 15:51:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:30.479 15:51:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 34 00:10:30.479 15:51:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x22' 00:10:30.479 15:51:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='"' 00:10:30.479 15:51:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:30.479 15:51:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:30.479 15:51:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 40 00:10:30.479 15:51:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x28' 00:10:30.479 15:51:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='(' 00:10:30.479 15:51:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:30.479 15:51:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:30.479 15:51:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 40 00:10:30.479 15:51:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x28' 00:10:30.479 15:51:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='(' 00:10:30.479 15:51:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:30.479 15:51:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:30.479 15:51:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 32 00:10:30.479 15:51:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x20' 00:10:30.479 15:51:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=' ' 00:10:30.479 15:51:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:30.479 15:51:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:30.479 15:51:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 52 00:10:30.479 15:51:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e 
'\x34' 00:10:30.479 15:51:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=4 00:10:30.479 15:51:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:30.479 15:51:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:30.479 15:51:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 70 00:10:30.479 15:51:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x46' 00:10:30.479 15:51:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=F 00:10:30.479 15:51:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:30.479 15:51:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:30.479 15:51:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 119 00:10:30.479 15:51:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x77' 00:10:30.479 15:51:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=w 00:10:30.479 15:51:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:30.479 15:51:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:30.479 15:51:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 65 00:10:30.738 15:51:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x41' 00:10:30.738 15:51:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=A 00:10:30.738 15:51:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:30.738 15:51:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:30.738 15:51:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 108 00:10:30.738 15:51:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6c' 00:10:30.738 15:51:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=l 00:10:30.738 15:51:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:30.738 15:51:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:30.738 15:51:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 71 00:10:30.738 15:51:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x47' 00:10:30.738 15:51:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=G 00:10:30.738 15:51:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:30.738 15:51:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:30.738 15:51:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@28 -- # [[ p == \- ]] 00:10:30.738 15:51:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@31 -- # echo 'p$?^A?91Q,sC^0/j_i2-oFN_Kp(&: +"(( 4FwAlG' 00:10:30.738 15:51:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d 'p$?^A?91Q,sC^0/j_i2-oFN_Kp(&: +"(( 4FwAlG' nqn.2016-06.io.spdk:cnode15553 00:10:30.738 [2024-07-15 15:51:59.589427] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode15553: invalid model number 'p$?^A?91Q,sC^0/j_i2-oFN_Kp(&: +"(( 4FwAlG' 00:10:30.738 15:51:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@58 -- # out='request: 00:10:30.738 { 00:10:30.738 "nqn": "nqn.2016-06.io.spdk:cnode15553", 00:10:30.738 "model_number": "p$?^A?91Q,sC^0/j_i2-oFN_Kp(&: +\"(( 4FwAlG", 00:10:30.738 "method": "nvmf_create_subsystem", 00:10:30.738 "req_id": 1 00:10:30.738 } 00:10:30.738 Got JSON-RPC error response 00:10:30.738 response: 00:10:30.738 { 00:10:30.738 "code": -32602, 00:10:30.738 "message": "Invalid MN 
p$?^A?91Q,sC^0/j_i2-oFN_Kp(&: +\"(( 4FwAlG" 00:10:30.738 }' 00:10:30.738 15:51:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@59 -- # [[ request: 00:10:30.738 { 00:10:30.738 "nqn": "nqn.2016-06.io.spdk:cnode15553", 00:10:30.738 "model_number": "p$?^A?91Q,sC^0/j_i2-oFN_Kp(&: +\"(( 4FwAlG", 00:10:30.738 "method": "nvmf_create_subsystem", 00:10:30.738 "req_id": 1 00:10:30.738 } 00:10:30.738 Got JSON-RPC error response 00:10:30.738 response: 00:10:30.738 { 00:10:30.738 "code": -32602, 00:10:30.738 "message": "Invalid MN p$?^A?91Q,sC^0/j_i2-oFN_Kp(&: +\"(( 4FwAlG" 00:10:30.738 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:10:30.738 15:51:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp 00:10:30.997 [2024-07-15 15:51:59.778142] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:30.997 15:51:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:10:31.255 15:51:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]] 00:10:31.255 15:51:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@67 -- # echo '' 00:10:31.255 15:51:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@67 -- # head -n 1 00:10:31.255 15:51:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@67 -- # IP= 00:10:31.255 15:51:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421 00:10:31.255 [2024-07-15 15:52:00.147411] nvmf_rpc.c: 809:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:10:31.255 15:52:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@69 -- # out='request: 00:10:31.255 { 00:10:31.255 "nqn": "nqn.2016-06.io.spdk:cnode", 00:10:31.255 "listen_address": { 00:10:31.255 "trtype": "tcp", 00:10:31.255 "traddr": "", 00:10:31.255 "trsvcid": "4421" 00:10:31.255 }, 00:10:31.255 "method": "nvmf_subsystem_remove_listener", 00:10:31.255 "req_id": 1 00:10:31.255 } 00:10:31.255 Got JSON-RPC error response 00:10:31.255 response: 00:10:31.255 { 00:10:31.255 "code": -32602, 00:10:31.255 "message": "Invalid parameters" 00:10:31.255 }' 00:10:31.255 15:52:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@70 -- # [[ request: 00:10:31.255 { 00:10:31.255 "nqn": "nqn.2016-06.io.spdk:cnode", 00:10:31.255 "listen_address": { 00:10:31.255 "trtype": "tcp", 00:10:31.255 "traddr": "", 00:10:31.255 "trsvcid": "4421" 00:10:31.255 }, 00:10:31.255 "method": "nvmf_subsystem_remove_listener", 00:10:31.255 "req_id": 1 00:10:31.255 } 00:10:31.255 Got JSON-RPC error response 00:10:31.255 response: 00:10:31.255 { 00:10:31.255 "code": -32602, 00:10:31.255 "message": "Invalid parameters" 00:10:31.255 } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:10:31.255 15:52:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5726 -i 0 00:10:31.514 [2024-07-15 15:52:00.319947] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode5726: invalid cntlid range [0-65519] 00:10:31.514 15:52:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@73 -- # out='request: 00:10:31.514 { 00:10:31.514 "nqn": "nqn.2016-06.io.spdk:cnode5726", 00:10:31.514 "min_cntlid": 0, 00:10:31.514 "method": "nvmf_create_subsystem", 00:10:31.514 "req_id": 1 
00:10:31.514 } 00:10:31.514 Got JSON-RPC error response 00:10:31.514 response: 00:10:31.514 { 00:10:31.514 "code": -32602, 00:10:31.514 "message": "Invalid cntlid range [0-65519]" 00:10:31.514 }' 00:10:31.514 15:52:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@74 -- # [[ request: 00:10:31.514 { 00:10:31.514 "nqn": "nqn.2016-06.io.spdk:cnode5726", 00:10:31.514 "min_cntlid": 0, 00:10:31.514 "method": "nvmf_create_subsystem", 00:10:31.514 "req_id": 1 00:10:31.514 } 00:10:31.514 Got JSON-RPC error response 00:10:31.514 response: 00:10:31.514 { 00:10:31.514 "code": -32602, 00:10:31.514 "message": "Invalid cntlid range [0-65519]" 00:10:31.514 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:10:31.514 15:52:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@75 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode25907 -i 65520 00:10:31.772 [2024-07-15 15:52:00.500535] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode25907: invalid cntlid range [65520-65519] 00:10:31.772 15:52:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@75 -- # out='request: 00:10:31.772 { 00:10:31.772 "nqn": "nqn.2016-06.io.spdk:cnode25907", 00:10:31.772 "min_cntlid": 65520, 00:10:31.772 "method": "nvmf_create_subsystem", 00:10:31.772 "req_id": 1 00:10:31.772 } 00:10:31.772 Got JSON-RPC error response 00:10:31.772 response: 00:10:31.772 { 00:10:31.772 "code": -32602, 00:10:31.773 "message": "Invalid cntlid range [65520-65519]" 00:10:31.773 }' 00:10:31.773 15:52:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@76 -- # [[ request: 00:10:31.773 { 00:10:31.773 "nqn": "nqn.2016-06.io.spdk:cnode25907", 00:10:31.773 "min_cntlid": 65520, 00:10:31.773 "method": "nvmf_create_subsystem", 00:10:31.773 "req_id": 1 00:10:31.773 } 00:10:31.773 Got JSON-RPC error response 00:10:31.773 response: 00:10:31.773 { 00:10:31.773 "code": -32602, 00:10:31.773 "message": "Invalid cntlid range [65520-65519]" 00:10:31.773 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:10:31.773 15:52:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode11782 -I 0 00:10:31.773 [2024-07-15 15:52:00.685147] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode11782: invalid cntlid range [1-0] 00:10:32.031 15:52:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@77 -- # out='request: 00:10:32.031 { 00:10:32.031 "nqn": "nqn.2016-06.io.spdk:cnode11782", 00:10:32.031 "max_cntlid": 0, 00:10:32.031 "method": "nvmf_create_subsystem", 00:10:32.031 "req_id": 1 00:10:32.031 } 00:10:32.031 Got JSON-RPC error response 00:10:32.031 response: 00:10:32.031 { 00:10:32.031 "code": -32602, 00:10:32.031 "message": "Invalid cntlid range [1-0]" 00:10:32.031 }' 00:10:32.031 15:52:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@78 -- # [[ request: 00:10:32.031 { 00:10:32.031 "nqn": "nqn.2016-06.io.spdk:cnode11782", 00:10:32.031 "max_cntlid": 0, 00:10:32.031 "method": "nvmf_create_subsystem", 00:10:32.031 "req_id": 1 00:10:32.031 } 00:10:32.031 Got JSON-RPC error response 00:10:32.031 response: 00:10:32.031 { 00:10:32.031 "code": -32602, 00:10:32.031 "message": "Invalid cntlid range [1-0]" 00:10:32.031 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:10:32.032 15:52:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9344 -I 65520 
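For context on this block of probes: NVMe controller IDs are valid in the range [1, 65519], and each call here pushes exactly one bound out of range (min 0, min 65520, max 0, max 65520, and finally min 6 with max 5) to confirm the target rejects it with a -32602 "Invalid cntlid range" error. A condensed sketch of the same probe sequence, with the subsystem NQN as a placeholder and the non-zero-exit handling assumed:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    for args in "-i 0" "-i 65520" "-I 0" "-I 65520" "-i 6 -I 5"; do
        # $args is intentionally unquoted so "-i 6 -I 5" splits into separate flags
        out=$("$rpc" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 $args 2>&1) || true
        [[ $out == *"Invalid cntlid range"* ]]   # assert the expected rejection
    done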
00:10:32.032 [2024-07-15 15:52:00.877822] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode9344: invalid cntlid range [1-65520] 00:10:32.032 15:52:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@79 -- # out='request: 00:10:32.032 { 00:10:32.032 "nqn": "nqn.2016-06.io.spdk:cnode9344", 00:10:32.032 "max_cntlid": 65520, 00:10:32.032 "method": "nvmf_create_subsystem", 00:10:32.032 "req_id": 1 00:10:32.032 } 00:10:32.032 Got JSON-RPC error response 00:10:32.032 response: 00:10:32.032 { 00:10:32.032 "code": -32602, 00:10:32.032 "message": "Invalid cntlid range [1-65520]" 00:10:32.032 }' 00:10:32.032 15:52:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@80 -- # [[ request: 00:10:32.032 { 00:10:32.032 "nqn": "nqn.2016-06.io.spdk:cnode9344", 00:10:32.032 "max_cntlid": 65520, 00:10:32.032 "method": "nvmf_create_subsystem", 00:10:32.032 "req_id": 1 00:10:32.032 } 00:10:32.032 Got JSON-RPC error response 00:10:32.032 response: 00:10:32.032 { 00:10:32.032 "code": -32602, 00:10:32.032 "message": "Invalid cntlid range [1-65520]" 00:10:32.032 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:10:32.032 15:52:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode18180 -i 6 -I 5 00:10:32.290 [2024-07-15 15:52:01.070484] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode18180: invalid cntlid range [6-5] 00:10:32.290 15:52:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@83 -- # out='request: 00:10:32.290 { 00:10:32.290 "nqn": "nqn.2016-06.io.spdk:cnode18180", 00:10:32.290 "min_cntlid": 6, 00:10:32.290 "max_cntlid": 5, 00:10:32.290 "method": "nvmf_create_subsystem", 00:10:32.290 "req_id": 1 00:10:32.290 } 00:10:32.290 Got JSON-RPC error response 00:10:32.290 response: 00:10:32.290 { 00:10:32.290 "code": -32602, 00:10:32.290 "message": "Invalid cntlid range [6-5]" 00:10:32.290 }' 00:10:32.290 15:52:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@84 -- # [[ request: 00:10:32.290 { 00:10:32.290 "nqn": "nqn.2016-06.io.spdk:cnode18180", 00:10:32.290 "min_cntlid": 6, 00:10:32.290 "max_cntlid": 5, 00:10:32.290 "method": "nvmf_create_subsystem", 00:10:32.290 "req_id": 1 00:10:32.290 } 00:10:32.290 Got JSON-RPC error response 00:10:32.290 response: 00:10:32.290 { 00:10:32.290 "code": -32602, 00:10:32.290 "message": "Invalid cntlid range [6-5]" 00:10:32.290 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:10:32.290 15:52:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:10:32.290 15:52:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@87 -- # out='request: 00:10:32.290 { 00:10:32.290 "name": "foobar", 00:10:32.290 "method": "nvmf_delete_target", 00:10:32.290 "req_id": 1 00:10:32.290 } 00:10:32.290 Got JSON-RPC error response 00:10:32.290 response: 00:10:32.290 { 00:10:32.290 "code": -32602, 00:10:32.290 "message": "The specified target doesn'\''t exist, cannot delete it." 00:10:32.290 }' 00:10:32.290 15:52:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@88 -- # [[ request: 00:10:32.290 { 00:10:32.290 "name": "foobar", 00:10:32.290 "method": "nvmf_delete_target", 00:10:32.290 "req_id": 1 00:10:32.290 } 00:10:32.290 Got JSON-RPC error response 00:10:32.290 response: 00:10:32.290 { 00:10:32.290 "code": -32602, 00:10:32.290 "message": "The specified target doesn't exist, cannot delete it." 
00:10:32.290 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:10:32.290 15:52:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:10:32.290 15:52:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@91 -- # nvmftestfini 00:10:32.290 15:52:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:32.290 15:52:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@117 -- # sync 00:10:32.290 15:52:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:32.290 15:52:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@120 -- # set +e 00:10:32.290 15:52:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:32.290 15:52:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:32.290 rmmod nvme_tcp 00:10:32.548 rmmod nvme_fabrics 00:10:32.548 rmmod nvme_keyring 00:10:32.548 15:52:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:32.548 15:52:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@124 -- # set -e 00:10:32.548 15:52:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@125 -- # return 0 00:10:32.548 15:52:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@489 -- # '[' -n 3647501 ']' 00:10:32.548 15:52:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@490 -- # killprocess 3647501 00:10:32.548 15:52:01 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@948 -- # '[' -z 3647501 ']' 00:10:32.548 15:52:01 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@952 -- # kill -0 3647501 00:10:32.548 15:52:01 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@953 -- # uname 00:10:32.548 15:52:01 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:32.548 15:52:01 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3647501 00:10:32.548 15:52:01 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:10:32.548 15:52:01 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:10:32.548 15:52:01 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3647501' 00:10:32.548 killing process with pid 3647501 00:10:32.548 15:52:01 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@967 -- # kill 3647501 00:10:32.548 15:52:01 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@972 -- # wait 3647501 00:10:32.806 15:52:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:32.806 15:52:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:32.806 15:52:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:32.806 15:52:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:32.806 15:52:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:32.806 15:52:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:32.806 15:52:01 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:32.806 15:52:01 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:34.709 15:52:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:10:34.709 00:10:34.709 real 0m11.855s 00:10:34.709 user 0m19.465s 00:10:34.709 sys 0m5.081s 00:10:34.709 15:52:03 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:34.709 15:52:03 
nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:10:34.709 ************************************ 00:10:34.709 END TEST nvmf_invalid 00:10:34.709 ************************************ 00:10:34.709 15:52:03 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:10:34.709 15:52:03 nvmf_tcp -- nvmf/nvmf.sh@31 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:10:34.709 15:52:03 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:10:34.709 15:52:03 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:34.709 15:52:03 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:34.709 ************************************ 00:10:34.709 START TEST nvmf_abort 00:10:34.709 ************************************ 00:10:34.709 15:52:03 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:10:34.969 * Looking for test storage... 00:10:34.969 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:34.969 15:52:03 nvmf_tcp.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:34.969 15:52:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:10:34.969 15:52:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:34.969 15:52:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:34.969 15:52:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:34.969 15:52:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:34.969 15:52:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:34.969 15:52:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:34.969 15:52:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:34.969 15:52:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:34.969 15:52:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:34.969 15:52:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:34.969 15:52:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:10:34.969 15:52:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:10:34.969 15:52:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:34.969 15:52:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:34.969 15:52:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:34.969 15:52:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:34.969 15:52:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:34.969 15:52:03 nvmf_tcp.nvmf_abort -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:34.969 15:52:03 nvmf_tcp.nvmf_abort -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:34.969 15:52:03 nvmf_tcp.nvmf_abort -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:34.969 15:52:03 nvmf_tcp.nvmf_abort -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:34.969 15:52:03 nvmf_tcp.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:34.969 15:52:03 nvmf_tcp.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:34.969 15:52:03 nvmf_tcp.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:10:34.969 15:52:03 nvmf_tcp.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:34.969 15:52:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@47 -- # : 0 00:10:34.969 15:52:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:34.969 15:52:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:34.969 15:52:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:34.969 15:52:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:34.969 15:52:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:34.969 15:52:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:34.969 15:52:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:34.969 15:52:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:34.969 15:52:03 nvmf_tcp.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:34.969 15:52:03 nvmf_tcp.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:10:34.969 15:52:03 nvmf_tcp.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:10:34.969 15:52:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:34.969 15:52:03 
nvmf_tcp.nvmf_abort -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:34.969 15:52:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:34.969 15:52:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:34.969 15:52:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:34.969 15:52:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:34.969 15:52:03 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:34.969 15:52:03 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:34.969 15:52:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:10:34.969 15:52:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:10:34.969 15:52:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@285 -- # xtrace_disable 00:10:34.969 15:52:03 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:40.245 15:52:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:40.245 15:52:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@291 -- # pci_devs=() 00:10:40.245 15:52:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@291 -- # local -a pci_devs 00:10:40.245 15:52:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:40.245 15:52:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:40.245 15:52:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:40.245 15:52:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:40.245 15:52:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@295 -- # net_devs=() 00:10:40.245 15:52:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:40.245 15:52:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@296 -- # e810=() 00:10:40.245 15:52:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@296 -- # local -ga e810 00:10:40.245 15:52:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@297 -- # x722=() 00:10:40.245 15:52:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@297 -- # local -ga x722 00:10:40.245 15:52:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@298 -- # mlx=() 00:10:40.245 15:52:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@298 -- # local -ga mlx 00:10:40.245 15:52:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:40.245 15:52:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:40.245 15:52:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:40.245 15:52:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:40.245 15:52:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:40.245 15:52:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:40.245 15:52:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:40.245 15:52:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:40.245 15:52:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:40.245 15:52:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:40.245 15:52:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:40.245 
15:52:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:40.245 15:52:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:10:40.245 15:52:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:10:40.245 15:52:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:10:40.245 15:52:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:10:40.245 15:52:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:40.245 15:52:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:40.245 15:52:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:10:40.245 Found 0000:86:00.0 (0x8086 - 0x159b) 00:10:40.245 15:52:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:40.245 15:52:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:40.245 15:52:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:40.245 15:52:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:40.245 15:52:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:40.245 15:52:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:40.245 15:52:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:10:40.245 Found 0000:86:00.1 (0x8086 - 0x159b) 00:10:40.245 15:52:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:40.245 15:52:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:40.245 15:52:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:40.245 15:52:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:40.245 15:52:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:40.245 15:52:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:40.245 15:52:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:10:40.245 15:52:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:10:40.245 15:52:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:40.245 15:52:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:40.245 15:52:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:40.245 15:52:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:40.245 15:52:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:40.245 15:52:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:40.245 15:52:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:40.245 15:52:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:10:40.245 Found net devices under 0000:86:00.0: cvl_0_0 00:10:40.245 15:52:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:40.245 15:52:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:40.245 15:52:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:40.245 15:52:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:40.245 15:52:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@389 -- 
# for net_dev in "${!pci_net_devs[@]}" 00:10:40.245 15:52:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:40.245 15:52:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:40.245 15:52:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:40.245 15:52:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:10:40.245 Found net devices under 0000:86:00.1: cvl_0_1 00:10:40.245 15:52:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:40.245 15:52:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:10:40.245 15:52:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # is_hw=yes 00:10:40.246 15:52:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:10:40.246 15:52:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:10:40.246 15:52:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:10:40.246 15:52:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:40.246 15:52:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:40.246 15:52:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:40.246 15:52:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:10:40.246 15:52:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:40.246 15:52:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:40.246 15:52:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:10:40.246 15:52:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:40.246 15:52:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:40.246 15:52:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:10:40.246 15:52:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:10:40.246 15:52:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:10:40.246 15:52:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:40.246 15:52:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:40.246 15:52:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:40.246 15:52:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:10:40.246 15:52:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:40.246 15:52:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:40.246 15:52:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:40.246 15:52:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:10:40.246 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:10:40.246 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.180 ms 00:10:40.246 00:10:40.246 --- 10.0.0.2 ping statistics --- 00:10:40.246 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:40.246 rtt min/avg/max/mdev = 0.180/0.180/0.180/0.000 ms 00:10:40.246 15:52:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:40.246 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:40.246 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.219 ms 00:10:40.246 00:10:40.246 --- 10.0.0.1 ping statistics --- 00:10:40.246 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:40.246 rtt min/avg/max/mdev = 0.219/0.219/0.219/0.000 ms 00:10:40.246 15:52:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:40.246 15:52:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@422 -- # return 0 00:10:40.246 15:52:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:40.246 15:52:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:40.246 15:52:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:40.246 15:52:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:40.246 15:52:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:40.246 15:52:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:40.246 15:52:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:40.246 15:52:09 nvmf_tcp.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:10:40.246 15:52:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:40.246 15:52:09 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@722 -- # xtrace_disable 00:10:40.246 15:52:09 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:40.246 15:52:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@481 -- # nvmfpid=3651718 00:10:40.246 15:52:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@482 -- # waitforlisten 3651718 00:10:40.246 15:52:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:10:40.246 15:52:09 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@829 -- # '[' -z 3651718 ']' 00:10:40.246 15:52:09 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:40.246 15:52:09 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:40.246 15:52:09 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:40.246 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:40.246 15:52:09 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:40.246 15:52:09 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:40.505 [2024-07-15 15:52:09.202583] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 
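Note for readers of this trace: the nvmf_tcp_init sequence above reduces to the wiring below. This is a condensed sketch of the commands visible in the trace, not the verbatim nvmf/common.sh source; cvl_0_0 and cvl_0_1 are the two ice ports found under 0000:86:00.0/.1, and one of them is moved into a fresh network namespace so the target (10.0.0.2) and the initiator (10.0.0.1) traverse the physical link instead of loopback.

ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                          # target port lives in the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator side, host namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target side
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT       # open the NVMe/TCP port
ping -c 1 10.0.0.2                                                 # initiator -> target sanity check
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                   # target -> initiator sanity check

The two ping checks above are exactly the ones whose output appears in this trace; both must succeed before the harness proceeds to start the target.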
00:10:40.505 [2024-07-15 15:52:09.202627] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:40.505 EAL: No free 2048 kB hugepages reported on node 1 00:10:40.505 [2024-07-15 15:52:09.261829] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:40.505 [2024-07-15 15:52:09.343531] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:40.505 [2024-07-15 15:52:09.343567] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:40.505 [2024-07-15 15:52:09.343574] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:40.506 [2024-07-15 15:52:09.343580] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:40.506 [2024-07-15 15:52:09.343586] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:40.506 [2024-07-15 15:52:09.343629] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:10:40.506 [2024-07-15 15:52:09.343715] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:10:40.506 [2024-07-15 15:52:09.343716] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:41.074 15:52:09 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:41.075 15:52:09 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@862 -- # return 0 00:10:41.075 15:52:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:41.075 15:52:09 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@728 -- # xtrace_disable 00:10:41.075 15:52:09 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:41.333 15:52:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:41.333 15:52:10 nvmf_tcp.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:10:41.333 15:52:10 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:41.333 15:52:10 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:41.333 [2024-07-15 15:52:10.040541] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:41.333 15:52:10 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:41.333 15:52:10 nvmf_tcp.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:10:41.333 15:52:10 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:41.333 15:52:10 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:41.333 Malloc0 00:10:41.333 15:52:10 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:41.333 15:52:10 nvmf_tcp.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:10:41.333 15:52:10 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:41.333 15:52:10 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:41.333 Delay0 00:10:41.333 15:52:10 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:41.333 15:52:10 nvmf_tcp.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 
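Note: rpc_cmd in this trace is the autotest helper that forwards each command to scripts/rpc.py over /var/tmp/spdk.sock, so the provisioning target/abort.sh performs (the last three calls are traced just below) is roughly equivalent to this sketch, with the rpc.py path shortened:

rpc.py nvmf_create_transport -t tcp -o -u 8192 -a 256
rpc.py bdev_malloc_create 64 4096 -b Malloc0        # MALLOC_BDEV_SIZE=64 (MiB), MALLOC_BLOCK_SIZE=4096
rpc.py bdev_delay_create -b Malloc0 -d Delay0 \
    -r 1000000 -t 1000000 -w 1000000 -n 1000000     # ~1 s avg/p99 read and write latency (values in us),
                                                    # so I/O lingers in flight long enough to be aborted
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420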
00:10:41.333 15:52:10 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:41.333 15:52:10 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:41.333 15:52:10 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:41.333 15:52:10 nvmf_tcp.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:10:41.334 15:52:10 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:41.334 15:52:10 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:41.334 15:52:10 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:41.334 15:52:10 nvmf_tcp.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:10:41.334 15:52:10 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:41.334 15:52:10 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:41.334 [2024-07-15 15:52:10.109690] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:41.334 15:52:10 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:41.334 15:52:10 nvmf_tcp.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:10:41.334 15:52:10 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:41.334 15:52:10 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:41.334 15:52:10 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:41.334 15:52:10 nvmf_tcp.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:10:41.334 EAL: No free 2048 kB hugepages reported on node 1 00:10:41.334 [2024-07-15 15:52:10.221312] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:10:43.887 [2024-07-15 15:52:12.292781] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ed2010 is same with the state(5) to be set 00:10:43.887 Initializing NVMe Controllers 00:10:43.887 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:10:43.887 controller IO queue size 128 less than required 00:10:43.887 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:10:43.887 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:10:43.887 Initialization complete. Launching workers. 
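Note on the summary just below: with queue depth 128 against a bdev that adds roughly a second of latency, nearly every read is still queued when its abort arrives, which is why almost all I/O completes as "failed" (aborted). The abort ledger balances exactly: 42772 success + 60 unsuccess = 42832 aborts submitted, plus 62 aborts that could not be submitted ("unsuccess" is the example app's own spelling).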
00:10:43.887 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 126, failed: 42768 00:10:43.887 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 42832, failed to submit 62 00:10:43.887 success 42772, unsuccess 60, failed 0 00:10:43.887 15:52:12 nvmf_tcp.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:10:43.887 15:52:12 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:43.887 15:52:12 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:43.887 15:52:12 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:43.887 15:52:12 nvmf_tcp.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:10:43.887 15:52:12 nvmf_tcp.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:10:43.887 15:52:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:43.887 15:52:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@117 -- # sync 00:10:43.887 15:52:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:43.887 15:52:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@120 -- # set +e 00:10:43.887 15:52:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:43.887 15:52:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:43.887 rmmod nvme_tcp 00:10:43.887 rmmod nvme_fabrics 00:10:43.887 rmmod nvme_keyring 00:10:43.887 15:52:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:43.887 15:52:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@124 -- # set -e 00:10:43.887 15:52:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@125 -- # return 0 00:10:43.887 15:52:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@489 -- # '[' -n 3651718 ']' 00:10:43.887 15:52:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@490 -- # killprocess 3651718 00:10:43.887 15:52:12 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@948 -- # '[' -z 3651718 ']' 00:10:43.887 15:52:12 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@952 -- # kill -0 3651718 00:10:43.888 15:52:12 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@953 -- # uname 00:10:43.888 15:52:12 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:43.888 15:52:12 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3651718 00:10:43.888 15:52:12 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:10:43.888 15:52:12 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:10:43.888 15:52:12 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3651718' 00:10:43.888 killing process with pid 3651718 00:10:43.888 15:52:12 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@967 -- # kill 3651718 00:10:43.888 15:52:12 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@972 -- # wait 3651718 00:10:43.888 15:52:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:43.888 15:52:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:43.888 15:52:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:43.888 15:52:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:43.888 15:52:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:43.888 15:52:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:43.888 15:52:12 nvmf_tcp.nvmf_abort -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:43.888 15:52:12 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:45.789 15:52:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:10:45.789 00:10:45.789 real 0m11.066s 00:10:45.789 user 0m13.055s 00:10:45.789 sys 0m4.940s 00:10:45.789 15:52:14 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:45.789 15:52:14 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:45.789 ************************************ 00:10:45.789 END TEST nvmf_abort 00:10:45.789 ************************************ 00:10:45.789 15:52:14 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:10:45.789 15:52:14 nvmf_tcp -- nvmf/nvmf.sh@32 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:10:45.789 15:52:14 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:10:45.789 15:52:14 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:45.789 15:52:14 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:46.049 ************************************ 00:10:46.049 START TEST nvmf_ns_hotplug_stress 00:10:46.049 ************************************ 00:10:46.049 15:52:14 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:10:46.049 * Looking for test storage... 00:10:46.049 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:46.049 15:52:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:46.049 15:52:14 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:10:46.049 15:52:14 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:46.049 15:52:14 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:46.049 15:52:14 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:46.049 15:52:14 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:46.049 15:52:14 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:46.049 15:52:14 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:46.049 15:52:14 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:46.049 15:52:14 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:46.049 15:52:14 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:46.049 15:52:14 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:46.049 15:52:14 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:10:46.049 15:52:14 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:10:46.049 15:52:14 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:46.049 15:52:14 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:46.049 15:52:14 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:46.049 15:52:14 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:46.049 15:52:14 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:46.049 15:52:14 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:46.049 15:52:14 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:46.049 15:52:14 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:46.049 15:52:14 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:46.049 15:52:14 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:46.049 15:52:14 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:46.049 15:52:14 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:10:46.049 15:52:14 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:46.049 15:52:14 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@47 -- # : 0 00:10:46.049 15:52:14 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:46.049 15:52:14 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:46.049 15:52:14 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:46.049 15:52:14 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:46.049 15:52:14 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:46.049 15:52:14 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:46.049 15:52:14 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:46.049 15:52:14 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:46.049 15:52:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:46.049 15:52:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:10:46.049 15:52:14 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:46.049 15:52:14 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:46.049 15:52:14 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:46.049 15:52:14 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:46.049 15:52:14 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:46.049 15:52:14 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:46.049 15:52:14 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:46.049 15:52:14 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:46.049 15:52:14 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:10:46.049 15:52:14 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:10:46.049 15:52:14 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:10:46.050 15:52:14 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:10:51.351 15:52:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:51.351 15:52:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:10:51.351 15:52:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # local -a pci_devs 00:10:51.351 15:52:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:51.351 15:52:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:51.351 15:52:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:51.351 15:52:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:51.351 15:52:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # net_devs=() 00:10:51.351 15:52:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:51.351 15:52:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # e810=() 00:10:51.351 15:52:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # local -ga e810 00:10:51.351 15:52:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # x722=() 00:10:51.351 15:52:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # local -ga x722 00:10:51.351 15:52:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # mlx=() 00:10:51.351 15:52:20 nvmf_tcp.nvmf_ns_hotplug_stress 
-- nvmf/common.sh@298 -- # local -ga mlx 00:10:51.351 15:52:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:51.351 15:52:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:51.351 15:52:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:51.351 15:52:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:51.351 15:52:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:51.351 15:52:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:51.351 15:52:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:51.351 15:52:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:51.351 15:52:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:51.351 15:52:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:51.351 15:52:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:51.351 15:52:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:51.351 15:52:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:10:51.351 15:52:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:10:51.351 15:52:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:10:51.351 15:52:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:10:51.351 15:52:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:51.351 15:52:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:51.351 15:52:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:10:51.351 Found 0000:86:00.0 (0x8086 - 0x159b) 00:10:51.351 15:52:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:51.351 15:52:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:51.351 15:52:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:51.351 15:52:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:51.351 15:52:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:51.351 15:52:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:51.351 15:52:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:10:51.351 Found 0000:86:00.1 (0x8086 - 0x159b) 00:10:51.351 15:52:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:51.351 15:52:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:51.351 15:52:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:51.351 15:52:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:51.351 15:52:20 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:51.351 15:52:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:51.351 15:52:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:10:51.351 15:52:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:10:51.351 15:52:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:51.351 15:52:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:51.351 15:52:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:51.351 15:52:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:51.351 15:52:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:51.351 15:52:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:51.351 15:52:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:51.351 15:52:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:10:51.351 Found net devices under 0000:86:00.0: cvl_0_0 00:10:51.351 15:52:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:51.351 15:52:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:51.351 15:52:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:51.351 15:52:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:51.351 15:52:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:51.351 15:52:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:51.351 15:52:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:51.351 15:52:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:51.351 15:52:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:10:51.351 Found net devices under 0000:86:00.1: cvl_0_1 00:10:51.351 15:52:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:51.351 15:52:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:10:51.351 15:52:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:10:51.351 15:52:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:10:51.351 15:52:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:10:51.351 15:52:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:10:51.351 15:52:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:51.351 15:52:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:51.351 15:52:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:51.351 15:52:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:10:51.351 15:52:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:51.351 15:52:20 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:51.351 15:52:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:10:51.351 15:52:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:51.351 15:52:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:51.351 15:52:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:10:51.351 15:52:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:10:51.351 15:52:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:10:51.351 15:52:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:51.351 15:52:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:51.351 15:52:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:51.351 15:52:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:10:51.351 15:52:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:51.351 15:52:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:51.351 15:52:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:51.611 15:52:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:10:51.611 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:51.611 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.176 ms 00:10:51.611 00:10:51.611 --- 10.0.0.2 ping statistics --- 00:10:51.611 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:51.611 rtt min/avg/max/mdev = 0.176/0.176/0.176/0.000 ms 00:10:51.611 15:52:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:51.611 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:51.611 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.163 ms 00:10:51.611 00:10:51.611 --- 10.0.0.1 ping statistics --- 00:10:51.611 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:51.611 rtt min/avg/max/mdev = 0.163/0.163/0.163/0.000 ms 00:10:51.611 15:52:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:51.611 15:52:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # return 0 00:10:51.611 15:52:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:51.611 15:52:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:51.611 15:52:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:51.611 15:52:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:51.611 15:52:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:51.611 15:52:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:51.611 15:52:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:51.611 15:52:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:10:51.611 15:52:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:51.611 15:52:20 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@722 -- # xtrace_disable 00:10:51.611 15:52:20 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:10:51.611 15:52:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@481 -- # nvmfpid=3655725 00:10:51.611 15:52:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # waitforlisten 3655725 00:10:51.611 15:52:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:10:51.611 15:52:20 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@829 -- # '[' -z 3655725 ']' 00:10:51.611 15:52:20 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:51.611 15:52:20 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:51.611 15:52:20 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:51.611 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:51.611 15:52:20 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:51.611 15:52:20 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:10:51.611 [2024-07-15 15:52:20.391378] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 
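Note: nvmfappstart above backgrounds nvmf_tgt inside the namespace and then polls its RPC socket until it answers. A minimal sketch of that pattern, assuming the stock rpc_get_methods probe that waitforlisten in autotest_common.sh uses, with paths shortened:

ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
nvmfpid=$!
rpc_addr=/var/tmp/spdk.sock max_retries=100
while ! ./scripts/rpc.py -t 1 -s "$rpc_addr" rpc_get_methods &> /dev/null; do
    kill -0 "$nvmfpid" 2> /dev/null || exit 1    # target exited before it ever listened
    (( --max_retries > 0 )) || exit 1            # give up after ~100 probes (max_retries=100, as traced)
    sleep 0.1
done

Only once this probe succeeds does the trace print the "return 0" from waitforlisten and move on to creating the transport.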
00:10:51.611 [2024-07-15 15:52:20.391420] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:51.611 EAL: No free 2048 kB hugepages reported on node 1 00:10:51.611 [2024-07-15 15:52:20.449776] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:51.611 [2024-07-15 15:52:20.523691] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:51.611 [2024-07-15 15:52:20.523734] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:51.611 [2024-07-15 15:52:20.523741] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:51.611 [2024-07-15 15:52:20.523747] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:51.611 [2024-07-15 15:52:20.523752] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:51.611 [2024-07-15 15:52:20.523855] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:10:51.611 [2024-07-15 15:52:20.523942] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:10:51.611 [2024-07-15 15:52:20.523944] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:52.549 15:52:21 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:52.549 15:52:21 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@862 -- # return 0 00:10:52.549 15:52:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:52.549 15:52:21 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@728 -- # xtrace_disable 00:10:52.549 15:52:21 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:10:52.549 15:52:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:52.549 15:52:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:10:52.549 15:52:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:10:52.549 [2024-07-15 15:52:21.389302] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:52.549 15:52:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:10:52.808 15:52:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:53.067 [2024-07-15 15:52:21.758604] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:53.067 15:52:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:10:53.067 15:52:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b 
Malloc0 00:10:53.326 Malloc0 00:10:53.326 15:52:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:10:53.585 Delay0 00:10:53.585 15:52:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:53.844 15:52:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:10:53.844 NULL1 00:10:53.844 15:52:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:10:54.103 15:52:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=3656158 00:10:54.103 15:52:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:10:54.103 15:52:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3656158 00:10:54.103 15:52:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:54.103 EAL: No free 2048 kB hugepages reported on node 1 00:10:54.363 15:52:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:54.363 15:52:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:10:54.363 15:52:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:10:54.622 true 00:10:54.622 15:52:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3656158 00:10:54.622 15:52:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:54.881 15:52:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:55.140 15:52:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:10:55.140 15:52:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:10:55.140 true 00:10:55.140 15:52:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3656158 00:10:55.140 15:52:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:55.398 15:52:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:55.656 15:52:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:10:55.656 15:52:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:10:55.915 true 00:10:55.915 15:52:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3656158 00:10:55.915 15:52:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:55.915 15:52:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:56.174 15:52:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:10:56.174 15:52:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:10:56.432 true 00:10:56.432 15:52:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3656158 00:10:56.432 15:52:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:56.692 15:52:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:56.951 15:52:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:10:56.951 15:52:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:10:56.951 true 00:10:56.951 15:52:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3656158 00:10:56.951 15:52:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:57.210 15:52:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:57.469 15:52:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:10:57.469 15:52:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:10:57.469 true 00:10:57.728 15:52:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3656158 00:10:57.728 15:52:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:57.728 15:52:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:57.987 15:52:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 
00:10:57.987 15:52:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:10:58.246 true 00:10:58.246 15:52:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3656158 00:10:58.246 15:52:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:58.505 15:52:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:58.505 15:52:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:10:58.505 15:52:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:10:58.764 true 00:10:58.764 15:52:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3656158 00:10:58.764 15:52:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:59.023 15:52:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:59.281 15:52:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:10:59.281 15:52:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:10:59.281 true 00:10:59.281 15:52:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3656158 00:10:59.281 15:52:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:59.539 15:52:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:59.797 15:52:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:10:59.797 15:52:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:11:00.055 true 00:11:00.055 15:52:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3656158 00:11:00.055 15:52:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:00.055 15:52:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:00.312 15:52:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:11:00.312 15:52:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_null_resize NULL1 1011 00:11:00.594 true 00:11:00.594 15:52:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3656158 00:11:00.594 15:52:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:00.853 15:52:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:01.111 15:52:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:11:01.111 15:52:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:11:01.111 true 00:11:01.111 15:52:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3656158 00:11:01.111 15:52:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:01.369 15:52:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:01.628 15:52:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:11:01.628 15:52:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:11:01.887 true 00:11:01.887 15:52:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3656158 00:11:01.887 15:52:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:01.887 15:52:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:02.153 15:52:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:11:02.153 15:52:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:11:02.412 true 00:11:02.412 15:52:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3656158 00:11:02.412 15:52:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:02.672 15:52:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:02.672 15:52:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:11:02.672 15:52:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:11:02.931 true 00:11:02.931 15:52:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3656158 
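[Editor's note] The `kill -0 3656158` at sh@44 that gates every iteration delivers no signal at all: signal 0 only asks, via the exit status, whether the process still exists and can be signalled. The hot-plug churn therefore runs for as long as the I/O generator does, with no fixed iteration count. For example:

```bash
# kill -0 sends no signal; its exit status alone is the liveness test.
if kill -0 3656158 2>/dev/null; then
    echo "I/O generator still running; keep hot-plugging"
else
    echo "I/O generator has exited; leave the loop"
fi
```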
00:11:02.931 15:52:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:03.189 15:52:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:03.449 15:52:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:11:03.449 15:52:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:11:03.449 true 00:11:03.449 15:52:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3656158 00:11:03.449 15:52:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:03.708 15:52:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:03.966 15:52:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:11:03.966 15:52:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:11:04.225 true 00:11:04.225 15:52:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3656158 00:11:04.226 15:52:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:04.226 15:52:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:04.485 15:52:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:11:04.485 15:52:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:11:04.743 true 00:11:04.743 15:52:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3656158 00:11:04.743 15:52:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:05.001 15:52:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:05.259 15:52:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:11:05.259 15:52:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:11:05.259 true 00:11:05.259 15:52:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3656158 00:11:05.259 15:52:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:05.518 15:52:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:05.777 15:52:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:11:05.777 15:52:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:11:06.037 true 00:11:06.037 15:52:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3656158 00:11:06.037 15:52:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:06.037 15:52:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:06.297 15:52:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:11:06.297 15:52:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:11:06.556 true 00:11:06.556 15:52:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3656158 00:11:06.556 15:52:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:06.815 15:52:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:06.815 15:52:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:11:06.815 15:52:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:11:07.074 true 00:11:07.074 15:52:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3656158 00:11:07.074 15:52:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:07.332 15:52:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:07.631 15:52:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:11:07.631 15:52:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:11:07.631 true 00:11:07.631 15:52:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3656158 00:11:07.631 15:52:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:07.890 15:52:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:08.149 15:52:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:11:08.149 15:52:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:11:08.408 true 00:11:08.408 15:52:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3656158 00:11:08.408 15:52:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:08.408 15:52:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:08.667 15:52:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:11:08.667 15:52:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:11:08.926 true 00:11:08.926 15:52:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3656158 00:11:08.926 15:52:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:09.185 15:52:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:09.185 15:52:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:11:09.185 15:52:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:11:09.444 true 00:11:09.444 15:52:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3656158 00:11:09.444 15:52:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:09.702 15:52:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:09.961 15:52:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:11:09.961 15:52:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:11:09.961 true 00:11:09.961 15:52:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3656158 00:11:09.961 15:52:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:10.219 15:52:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:10.477 15:52:39 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:11:10.477 15:52:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:11:10.735 true 00:11:10.735 15:52:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3656158 00:11:10.735 15:52:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:10.735 15:52:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:10.992 15:52:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:11:10.992 15:52:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:11:11.249 true 00:11:11.249 15:52:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3656158 00:11:11.249 15:52:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:11.507 15:52:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:11.765 15:52:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1030 00:11:11.765 15:52:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1030 00:11:11.765 true 00:11:11.765 15:52:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3656158 00:11:11.765 15:52:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:12.022 15:52:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:12.280 15:52:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1031 00:11:12.280 15:52:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1031 00:11:12.538 true 00:11:12.538 15:52:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3656158 00:11:12.538 15:52:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:12.538 15:52:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:12.796 15:52:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1032 00:11:12.796 15:52:41 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1032 00:11:13.053 true 00:11:13.054 15:52:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3656158 00:11:13.054 15:52:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:13.312 15:52:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:13.571 15:52:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1033 00:11:13.571 15:52:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1033 00:11:13.571 true 00:11:13.571 15:52:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3656158 00:11:13.571 15:52:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:13.830 15:52:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:14.089 15:52:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1034 00:11:14.089 15:52:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1034 00:11:14.348 true 00:11:14.348 15:52:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3656158 00:11:14.348 15:52:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:14.348 15:52:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:14.607 15:52:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1035 00:11:14.607 15:52:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1035 00:11:14.866 true 00:11:14.866 15:52:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3656158 00:11:14.866 15:52:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:15.125 15:52:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:15.125 15:52:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1036 00:11:15.125 15:52:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1036 00:11:15.382 true 00:11:15.382 
15:52:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3656158 00:11:15.382 15:52:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:15.640 15:52:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:15.898 15:52:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1037 00:11:15.899 15:52:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1037 00:11:15.899 true 00:11:16.157 15:52:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3656158 00:11:16.157 15:52:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:16.157 15:52:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:16.416 15:52:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1038 00:11:16.416 15:52:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1038 00:11:16.674 true 00:11:16.674 15:52:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3656158 00:11:16.674 15:52:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:16.933 15:52:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:16.933 15:52:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1039 00:11:16.933 15:52:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1039 00:11:17.192 true 00:11:17.192 15:52:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3656158 00:11:17.192 15:52:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:17.450 15:52:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:17.707 15:52:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1040 00:11:17.707 15:52:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1040 00:11:17.707 true 00:11:17.707 15:52:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3656158 00:11:17.708 15:52:46 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:17.966 15:52:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:18.225 15:52:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1041 00:11:18.225 15:52:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1041 00:11:18.484 true 00:11:18.484 15:52:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3656158 00:11:18.484 15:52:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:18.484 15:52:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:18.742 15:52:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1042 00:11:18.742 15:52:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1042 00:11:19.002 true 00:11:19.002 15:52:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3656158 00:11:19.002 15:52:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:19.261 15:52:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:19.261 15:52:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1043 00:11:19.261 15:52:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1043 00:11:19.519 true 00:11:19.519 15:52:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3656158 00:11:19.519 15:52:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:19.778 15:52:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:20.036 15:52:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1044 00:11:20.036 15:52:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1044 00:11:20.036 true 00:11:20.294 15:52:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3656158 00:11:20.294 15:52:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 
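[Editor's note] Each `bdev_null_resize NULL1 <n>` grows the null bdev by one unit per iteration; in SPDK's rpc.py the size argument is given in megabytes, consistent with the `bdev_null_create <name> 100 4096` calls later in this log (100 MB bdevs with 4096-byte blocks). A hypothetical follow-up check, not part of the test itself, could confirm the new size:

```bash
# Hypothetical verification step (not in ns_hotplug_stress.sh).
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
$rpc bdev_null_resize NULL1 1046
$rpc bdev_get_bdevs -b NULL1   # num_blocks * block_size should now equal 1046 MB
```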
00:11:20.294 15:52:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:20.553 15:52:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1045 00:11:20.553 15:52:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1045 00:11:20.810 true 00:11:20.810 15:52:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3656158 00:11:20.810 15:52:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:21.069 15:52:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:21.069 15:52:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1046 00:11:21.069 15:52:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1046 00:11:21.328 true 00:11:21.328 15:52:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3656158 00:11:21.328 15:52:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:21.586 15:52:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:21.843 15:52:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1047 00:11:21.843 15:52:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1047 00:11:21.843 true 00:11:22.101 15:52:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3656158 00:11:22.101 15:52:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:22.101 15:52:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:22.359 15:52:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1048 00:11:22.359 15:52:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1048 00:11:22.617 true 00:11:22.618 15:52:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3656158 00:11:22.618 15:52:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:22.875 15:52:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:22.875 15:52:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1049 00:11:22.875 15:52:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1049 00:11:23.133 true 00:11:23.133 15:52:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3656158 00:11:23.133 15:52:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:23.417 15:52:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:23.697 15:52:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1050 00:11:23.697 15:52:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1050 00:11:23.697 true 00:11:23.697 15:52:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3656158 00:11:23.697 15:52:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:23.955 15:52:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:24.214 15:52:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1051 00:11:24.214 15:52:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1051 00:11:24.214 Initializing NVMe Controllers 00:11:24.214 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:11:24.214 Controller SPDK bdev Controller (SPDK00000000000001 ): Skipping inactive NS 1 00:11:24.214 Controller IO queue size 128, less than required. 00:11:24.214 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:11:24.214 WARNING: Some requested NVMe devices were skipped 00:11:24.214 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:11:24.214 Initialization complete. Launching workers. 
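[Editor's note] The banner above is the perf initiator attaching over TCP at 10.0.0.2:4420: it skips NS 1 as inactive (the hot-plug loop happened to have it detached at attach time) and runs its workload against NSID 2 on lcore 0, and the "IO queue size 128, less than required" warning repeats the log's own advice that requests beyond the controller's queue size sit queued in the driver. The summary table that follows is also internally consistent: 128 outstanding IOs at a 4740.26 us average latency predict about 27,000 IOPS by Little's law, matching the reported 27002.00, and 13.18 MiB/s at that rate implies roughly 512-byte IOs (an inference from the numbers, not stated in the log):

```bash
# Sanity-check of the summary row below, using only numbers from the table.
awk 'BEGIN { printf "predicted IOPS : %.0f\n", 128 / (4740.26 / 1e6) }'
awk 'BEGIN { printf "implied IO size: %.0f bytes\n", 13.18 * 1048576 / 27002 }'
```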
00:11:24.214 ========================================================
00:11:24.214 Latency(us)
00:11:24.214 Device Information : IOPS MiB/s Average min max
00:11:24.214 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 27002.00 13.18 4740.26 2078.26 8110.98
00:11:24.214 ========================================================
00:11:24.214 Total : 27002.00 13.18 4740.26 2078.26 8110.98
00:11:24.214
00:11:24.214 true
00:11:24.472 15:52:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3656158
00:11:24.472 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (3656158) - No such process
00:11:24.472 15:52:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 3656158
00:11:24.472 15:52:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:11:24.472 15:52:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:11:24.730 15:52:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8
00:11:24.730 15:52:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=()
00:11:24.730 15:52:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 ))
00:11:24.730 15:52:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:11:24.730 15:52:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096
00:11:24.988 null0
00:11:24.988 15:52:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:11:24.988 15:52:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:11:24.988 15:52:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096
00:11:24.988 null1
00:11:24.988 15:52:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:11:24.988 15:52:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:11:24.988 15:52:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096
00:11:25.246 null2
00:11:25.246 15:52:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:11:25.246 15:52:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:11:25.246 15:52:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096
00:11:25.503 null3
00:11:25.503 15:52:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:11:25.503 15:52:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:11:25.503 15:52:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096
00:11:25.503 null4
00:11:25.503 15:52:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:11:25.503 15:52:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:11:25.503 15:52:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:11:25.761 null5 00:11:25.761 15:52:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:11:25.761 15:52:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:11:25.761 15:52:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:11:26.020 null6 00:11:26.020 15:52:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:11:26.020 15:52:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:11:26.020 15:52:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:11:26.020 null7 00:11:26.279 15:52:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:11:26.279 15:52:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:11:26.279 15:52:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:11:26.279 15:52:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:11:26.279 15:52:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:11:26.279 15:52:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:11:26.280 15:52:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:11:26.280 15:52:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:11:26.280 15:52:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:11:26.280 15:52:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:11:26.280 15:52:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:26.280 15:52:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:11:26.280 15:52:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
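[Editor's note] With the single-namespace phase over (line 44 reports `kill: (3656158) - No such process`, `wait 3656158` reaps the generator, and both namespaces are removed at sh@54-@55), the script sets up its parallel phase: `nthreads=8`, an empty `pids` array, and one 100 MB, 4096-byte-block null bdev per worker, created by the loop traced at sh@58-@60 and just completed above. A reconstructed sketch of that setup:

```bash
# Reconstructed from the trace at ns_hotplug_stress.sh@58-60;
# the real script may differ in detail.
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
nthreads=8
pids=()
for ((i = 0; i < nthreads; i++)); do
    $rpc bdev_null_create "null$i" 100 4096   # name, size in MB, block size
done
```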
00:11:26.280 15:52:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:11:26.280 15:52:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:11:26.280 15:52:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:11:26.280 15:52:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:11:26.280 15:52:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:26.280 15:52:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:11:26.280 15:52:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:11:26.280 15:52:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:11:26.280 15:52:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:11:26.280 15:52:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:11:26.280 15:52:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:11:26.280 15:52:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:11:26.280 15:52:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:11:26.280 15:52:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:26.280 15:52:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:11:26.280 15:52:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:11:26.280 15:52:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:11:26.280 15:52:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:11:26.280 15:52:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:11:26.280 15:52:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:11:26.280 15:52:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:11:26.280 15:52:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:26.280 15:52:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:11:26.280 15:52:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:11:26.280 15:52:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:11:26.280 15:52:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:11:26.280 15:52:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:11:26.280 15:52:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:11:26.280 15:52:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:11:26.280 15:52:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:26.280 15:52:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:11:26.280 15:52:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:11:26.280 15:52:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:11:26.280 15:52:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:11:26.280 15:52:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:11:26.280 15:52:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:11:26.280 15:52:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:11:26.280 15:52:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:26.280 15:52:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:11:26.280 15:52:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:11:26.280 15:52:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:11:26.280 15:52:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:11:26.280 15:52:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:11:26.280 15:52:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:11:26.280 15:52:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:11:26.280 15:52:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
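[Editor's note] The launch loop at sh@62-@64, interleaved through this stretch, backgrounds one `add_remove` worker per null bdev and records each worker's PID; `add_remove` itself (traced at sh@14-@18) hot-adds and hot-removes a fixed NSID ten times. Because eight workers issue RPCs concurrently, their add/remove lines arrive in nondeterministic order below, which is the point of the stress; the `wait` on the recorded PIDs at sh@66 (3661666 3661668 ... in this run, visible just below) ends the phase. A reconstructed sketch:

```bash
# Reconstructed from the traces at ns_hotplug_stress.sh@14-18 and @62-66;
# the real script may differ in detail.
add_remove() {
    local nsid=$1 bdev=$2                                                         # @14
    for ((i = 0; i < 10; i++)); do                                                # @16
        $rpc nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"  # @17
        $rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"          # @18
    done
}
for ((i = 0; i < nthreads; i++)); do   # @62
    add_remove $((i + 1)) "null$i" &   # @63: NSID i+1 backed by null$i
    pids+=($!)                         # @64: remember the worker PID
done
wait "${pids[@]}"                      # @66: eight worker PIDs in this run
```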
00:11:26.280 15:52:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:26.280 15:52:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:11:26.280 15:52:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:11:26.280 15:52:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:11:26.280 15:52:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 3661666 3661668 3661672 3661674 3661677 3661680 3661684 3661687 00:11:26.280 15:52:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:11:26.280 15:52:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:11:26.280 15:52:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:11:26.280 15:52:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:26.280 15:52:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:11:26.280 15:52:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:11:26.280 15:52:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:11:26.280 15:52:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:11:26.280 15:52:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:11:26.280 15:52:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:11:26.280 15:52:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:26.280 15:52:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:11:26.280 15:52:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:26.539 15:52:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:26.539 15:52:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:26.539 15:52:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:11:26.539 15:52:55 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:26.539 15:52:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:26.539 15:52:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:11:26.539 15:52:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:26.539 15:52:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:26.539 15:52:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:11:26.539 15:52:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:26.539 15:52:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:26.539 15:52:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:11:26.539 15:52:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:26.539 15:52:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:26.539 15:52:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:26.539 15:52:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:26.539 15:52:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:11:26.539 15:52:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:11:26.539 15:52:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:26.539 15:52:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:26.539 15:52:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:11:26.539 15:52:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:26.539 15:52:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:26.539 15:52:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:11:26.798 15:52:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:11:26.798 15:52:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:11:26.798 15:52:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 3 00:11:26.798 15:52:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:26.798 15:52:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:11:26.798 15:52:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:11:26.798 15:52:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:26.798 15:52:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:11:26.798 15:52:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:26.798 15:52:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:26.798 15:52:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:26.798 15:52:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:26.798 15:52:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:11:26.798 15:52:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:11:26.798 15:52:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:26.798 15:52:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:26.798 15:52:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:11:26.798 15:52:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:26.798 15:52:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:26.798 15:52:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:11:27.057 15:52:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:27.057 15:52:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:27.057 15:52:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:11:27.057 15:52:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:27.057 15:52:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:27.057 15:52:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:11:27.057 15:52:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:27.057 15:52:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:27.057 15:52:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:11:27.057 15:52:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:27.057 15:52:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:27.057 15:52:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:11:27.057 15:52:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:11:27.057 15:52:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:11:27.057 15:52:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:11:27.057 15:52:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:27.057 15:52:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:11:27.057 15:52:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:27.057 15:52:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:11:27.057 15:52:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:11:27.315 15:52:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:27.315 15:52:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:27.315 15:52:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:11:27.315 15:52:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:27.315 15:52:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:27.315 15:52:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:11:27.315 15:52:56 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:27.315 15:52:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:27.315 15:52:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:27.315 15:52:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:27.315 15:52:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:11:27.315 15:52:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:11:27.315 15:52:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:27.315 15:52:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:27.315 15:52:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:11:27.315 15:52:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:27.315 15:52:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:27.315 15:52:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:11:27.315 15:52:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:27.315 15:52:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:27.315 15:52:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:11:27.315 15:52:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:27.315 15:52:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:27.315 15:52:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:11:27.574 15:52:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:11:27.574 15:52:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:27.574 15:52:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:11:27.574 15:52:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:11:27.574 15:52:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:11:27.574 15:52:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:11:27.574 15:52:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:27.574 15:52:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:11:27.574 15:52:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:27.574 15:52:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:27.574 15:52:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:11:27.574 15:52:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:27.574 15:52:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:27.574 15:52:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:11:27.574 15:52:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:27.574 15:52:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:27.574 15:52:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:11:27.574 15:52:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:27.574 15:52:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:27.574 15:52:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:11:27.574 15:52:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:27.574 15:52:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:27.574 15:52:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:11:27.574 15:52:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:27.574 15:52:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:27.574 15:52:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:11:27.574 15:52:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:27.574 15:52:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:27.574 15:52:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:11:27.831 15:52:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:27.831 15:52:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:27.831 15:52:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:11:27.831 15:52:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:11:27.831 15:52:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:11:27.831 15:52:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:11:27.831 15:52:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:27.831 15:52:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:11:27.831 15:52:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:11:27.831 15:52:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:11:27.831 15:52:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:28.089 15:52:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:28.089 15:52:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:28.089 15:52:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:11:28.089 15:52:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:28.089 15:52:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:28.089 15:52:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:11:28.089 15:52:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:28.089 15:52:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:28.089 15:52:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:11:28.089 15:52:56 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:28.089 15:52:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:28.089 15:52:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:11:28.089 15:52:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:28.089 15:52:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:28.089 15:52:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:11:28.089 15:52:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:28.089 15:52:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:28.089 15:52:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:11:28.089 15:52:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:28.089 15:52:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:28.089 15:52:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:11:28.089 15:52:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:28.089 15:52:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:28.089 15:52:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:11:28.347 15:52:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:11:28.347 15:52:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:28.347 15:52:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:11:28.347 15:52:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:11:28.347 15:52:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:11:28.347 15:52:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:11:28.347 15:52:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:11:28.347 15:52:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:28.347 15:52:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:28.347 15:52:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:28.347 15:52:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:11:28.347 15:52:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:28.347 15:52:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:28.347 15:52:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:11:28.347 15:52:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:28.347 15:52:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:28.347 15:52:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:11:28.347 15:52:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:28.347 15:52:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:28.347 15:52:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:28.347 15:52:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:28.347 15:52:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:11:28.347 15:52:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:11:28.347 15:52:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:28.347 15:52:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:28.347 15:52:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:11:28.347 15:52:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:28.347 15:52:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:28.347 15:52:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:11:28.347 15:52:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:28.347 15:52:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:28.348 15:52:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:11:28.605 15:52:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:11:28.606 15:52:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:28.606 15:52:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:11:28.606 15:52:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:11:28.606 15:52:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:11:28.606 15:52:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:11:28.606 15:52:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:11:28.606 15:52:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:28.864 15:52:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:28.864 15:52:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:28.864 15:52:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:11:28.864 15:52:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:28.864 15:52:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:28.864 15:52:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:11:28.864 15:52:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:28.864 15:52:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:28.864 15:52:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:28.864 15:52:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:28.864 15:52:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:11:28.864 15:52:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:11:28.864 15:52:57 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:28.864 15:52:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:28.864 15:52:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:11:28.864 15:52:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:28.864 15:52:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:28.864 15:52:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:11:28.864 15:52:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:28.864 15:52:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:28.864 15:52:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:11:28.864 15:52:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:28.864 15:52:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:28.864 15:52:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:11:28.864 15:52:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:11:29.123 15:52:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:11:29.123 15:52:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:11:29.123 15:52:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:11:29.123 15:52:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:11:29.123 15:52:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:29.123 15:52:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:29.123 15:52:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:11:29.123 15:52:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:29.123 15:52:57 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:29.123 15:52:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:11:29.123 15:52:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:29.123 15:52:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:29.123 15:52:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:11:29.123 15:52:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:29.123 15:52:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:29.123 15:52:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:11:29.123 15:52:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:29.123 15:52:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:29.123 15:52:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:11:29.123 15:52:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:29.123 15:52:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:29.123 15:52:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:11:29.123 15:52:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:29.123 15:52:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:29.123 15:52:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:11:29.123 15:52:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:29.123 15:52:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:29.123 15:52:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:11:29.123 15:52:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:29.123 15:52:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:29.123 15:52:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:11:29.381 15:52:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:11:29.381 15:52:58 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:11:29.381 15:52:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:11:29.381 15:52:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:11:29.381 15:52:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:11:29.381 15:52:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:29.381 15:52:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:11:29.381 15:52:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:29.639 15:52:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:29.639 15:52:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:29.639 15:52:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:11:29.639 15:52:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:29.639 15:52:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:29.639 15:52:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:11:29.639 15:52:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:29.639 15:52:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:29.639 15:52:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:11:29.639 15:52:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:29.639 15:52:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:29.639 15:52:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:11:29.639 15:52:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:29.639 15:52:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:29.639 15:52:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 
nqn.2016-06.io.spdk:cnode1 null3 00:11:29.639 15:52:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:29.639 15:52:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:29.639 15:52:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:11:29.639 15:52:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:29.639 15:52:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:29.639 15:52:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:11:29.639 15:52:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:29.639 15:52:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:29.639 15:52:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:11:29.897 15:52:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:11:29.898 15:52:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:11:29.898 15:52:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:29.898 15:52:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:29.898 15:52:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:11:29.898 15:52:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:11:29.898 15:52:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:11:29.898 15:52:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:11:29.898 15:52:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:29.898 15:52:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:29.898 15:52:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:29.898 15:52:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:29.898 15:52:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:29.898 15:52:58 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:29.898 15:52:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:29.898 15:52:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:29.898 15:52:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:29.898 15:52:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:29.898 15:52:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:29.898 15:52:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:29.898 15:52:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:29.898 15:52:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:29.898 15:52:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:29.898 15:52:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:29.898 15:52:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:11:29.898 15:52:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:11:29.898 15:52:58 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:29.898 15:52:58 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # sync 00:11:29.898 15:52:58 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:29.898 15:52:58 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@120 -- # set +e 00:11:29.898 15:52:58 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:29.898 15:52:58 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:29.898 rmmod nvme_tcp 00:11:29.898 rmmod nvme_fabrics 00:11:29.898 rmmod nvme_keyring 00:11:30.156 15:52:58 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:30.156 15:52:58 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set -e 00:11:30.156 15:52:58 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # return 0 00:11:30.156 15:52:58 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@489 -- # '[' -n 3655725 ']' 00:11:30.156 15:52:58 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@490 -- # killprocess 3655725 00:11:30.156 15:52:58 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@948 -- # '[' -z 3655725 ']' 00:11:30.156 15:52:58 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@952 -- # kill -0 3655725 00:11:30.156 15:52:58 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@953 -- # uname 00:11:30.156 15:52:58 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:30.156 15:52:58 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3655725 00:11:30.156 15:52:58 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:11:30.156 15:52:58 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:11:30.156 15:52:58 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3655725' 00:11:30.156 killing process with pid 3655725 00:11:30.156 15:52:58 nvmf_tcp.nvmf_ns_hotplug_stress -- 
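The teardown traced above clears the EXIT trap (sh@68), then nvmftestfini's nvmfcleanup helper syncs and retries unloading the kernel initiator modules; the bare rmmod lines are modprobe's verbose output. A sketch of that retry pattern, where the $TEST_TRANSPORT variable name and the break-on-success condition are assumptions (the trace shows the transport test already expanded, as '[' tcp == tcp ']'):

  nvmfcleanup() {
    sync
    if [[ "$TEST_TRANSPORT" == tcp ]]; then    # traced as '[' tcp == tcp ']' (common.sh@119)
      set +e                                   # unloading may fail while references remain
      for i in {1..20}; do
        modprobe -v -r nvme-tcp && modprobe -v -r nvme-fabrics && break
        sleep 1
      done
      set -e
    fi
    return 0
  }

After the modules are gone, killprocess checks that pid 3655725 is still alive and is not a sudo wrapper before killing and reaping it, which is the exchange visible around this point.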
common/autotest_common.sh@967 -- # kill 3655725 00:11:30.156 15:52:58 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # wait 3655725 00:11:30.156 15:52:59 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:30.156 15:52:59 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:30.156 15:52:59 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:30.156 15:52:59 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:30.156 15:52:59 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:30.156 15:52:59 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:30.156 15:52:59 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:30.156 15:52:59 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:32.692 15:53:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:11:32.692 00:11:32.692 real 0m46.410s 00:11:32.692 user 3m16.413s 00:11:32.692 sys 0m16.723s 00:11:32.692 15:53:01 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:32.692 15:53:01 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:11:32.692 ************************************ 00:11:32.692 END TEST nvmf_ns_hotplug_stress 00:11:32.692 ************************************ 00:11:32.692 15:53:01 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:11:32.692 15:53:01 nvmf_tcp -- nvmf/nvmf.sh@33 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:11:32.692 15:53:01 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:11:32.692 15:53:01 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:32.692 15:53:01 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:32.692 ************************************ 00:11:32.692 START TEST nvmf_connect_stress 00:11:32.692 ************************************ 00:11:32.692 15:53:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:11:32.692 * Looking for test storage... 
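The END TEST / START TEST banners bracketing this point are printed by autotest_common.sh's run_test wrapper, which also produces the real/user/sys summary above by timing the test script. A sketch of the wrapper as the trace implies it (banner width and exit-code plumbing are approximations):

  run_test() {
    [ "$#" -le 1 ] && return 1           # traced guard: '[' 3 -le 1 ']'
    local test_name=$1
    shift
    echo "************************************"
    echo "START TEST $test_name"
    echo "************************************"
    time "$@"                            # here: connect_stress.sh --transport=tcp
    local rc=$?
    echo "************************************"
    echo "END TEST $test_name"
    echo "************************************"
    return $rc
  }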
00:11:32.692 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:32.692 15:53:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:32.692 15:53:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:11:32.692 15:53:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:32.692 15:53:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:32.692 15:53:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:32.692 15:53:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:32.692 15:53:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:32.692 15:53:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:32.692 15:53:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:32.692 15:53:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:32.692 15:53:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:32.692 15:53:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:32.692 15:53:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:11:32.692 15:53:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:11:32.692 15:53:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:32.692 15:53:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:32.692 15:53:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:32.692 15:53:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:32.692 15:53:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:32.692 15:53:01 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:32.692 15:53:01 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:32.692 15:53:01 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:32.692 15:53:01 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:32.692 15:53:01 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@3 -- # 
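Ahead of the PATH exports around this point, connect_stress.sh sourced test/nvmf/common.sh, which pins the listener ports (4420/4421/4422), generates a fresh host NQN with nvme-cli, and derives the host ID from its uuid suffix. Equivalent standalone commands; the ${...##*uuid:} extraction is an assumption about how common.sh derives NVME_HOSTID:

  NVMF_PORT=4420
  NVME_HOSTNQN=$(nvme gen-hostnqn)        # e.g. nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-...
  NVME_HOSTID=${NVME_HOSTNQN##*uuid:}     # keep just the uuid portion
  NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")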
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:32.692 15:53:01 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:32.692 15:53:01 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:11:32.692 15:53:01 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:32.692 15:53:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@47 -- # : 0 00:11:32.692 15:53:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:32.692 15:53:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:32.692 15:53:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:32.692 15:53:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:32.692 15:53:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:32.692 15:53:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:32.693 15:53:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:32.693 15:53:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:32.693 15:53:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:11:32.693 15:53:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:32.693 15:53:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:32.693 15:53:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:32.693 15:53:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:32.693 15:53:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:32.693 15:53:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:32.693 15:53:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> 
/dev/null' 00:11:32.693 15:53:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:32.693 15:53:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:11:32.693 15:53:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:11:32.693 15:53:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:11:32.693 15:53:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:37.965 15:53:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:37.965 15:53:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:11:37.965 15:53:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:37.965 15:53:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:37.965 15:53:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:37.965 15:53:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:37.965 15:53:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:37.965 15:53:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@295 -- # net_devs=() 00:11:37.965 15:53:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:37.965 15:53:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@296 -- # e810=() 00:11:37.965 15:53:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@296 -- # local -ga e810 00:11:37.965 15:53:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@297 -- # x722=() 00:11:37.965 15:53:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@297 -- # local -ga x722 00:11:37.965 15:53:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@298 -- # mlx=() 00:11:37.965 15:53:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@298 -- # local -ga mlx 00:11:37.965 15:53:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:37.965 15:53:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:37.965 15:53:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:37.965 15:53:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:37.965 15:53:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:37.965 15:53:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:37.965 15:53:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:37.965 15:53:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:37.965 15:53:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:37.965 15:53:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:37.965 15:53:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:37.965 15:53:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:37.965 15:53:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:37.965 15:53:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@327 -- # [[ e810 == 
mlx5 ]] 00:11:37.965 15:53:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:37.965 15:53:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:37.965 15:53:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:37.965 15:53:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:37.965 15:53:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:11:37.965 Found 0000:86:00.0 (0x8086 - 0x159b) 00:11:37.965 15:53:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:37.965 15:53:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:37.965 15:53:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:37.965 15:53:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:37.965 15:53:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:37.965 15:53:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:37.965 15:53:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:11:37.965 Found 0000:86:00.1 (0x8086 - 0x159b) 00:11:37.965 15:53:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:37.965 15:53:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:37.966 15:53:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:37.966 15:53:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:37.966 15:53:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:37.966 15:53:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:37.966 15:53:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:37.966 15:53:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:37.966 15:53:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:37.966 15:53:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:37.966 15:53:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:37.966 15:53:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:37.966 15:53:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:37.966 15:53:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:37.966 15:53:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:37.966 15:53:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:11:37.966 Found net devices under 0000:86:00.0: cvl_0_0 00:11:37.966 15:53:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:37.966 15:53:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:37.966 15:53:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:37.966 15:53:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:37.966 15:53:06 
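The nvmf/common.sh@289-401 trace through here is gather_supported_nvmf_pci_devs: it builds lists of supported Intel (e810/x722) and Mellanox device IDs, matches the two E810 ports at 0000:86:00.0/.1 (device 0x159b), and resolves each PCI address to its kernel netdev through sysfs. A condensed sketch that substitutes an lspci query for the script's internal pci_bus_cache lookup (that substitution is mine):

  intel=0x8086
  mapfile -t e810 < <(lspci -Dn -d 8086:159b | awk '{print $1}')   # E810 ports, here 0000:86:00.0/.1
  net_devs=()
  for pci in "${e810[@]}"; do
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)               # netdev entries for this PCI function
    pci_net_devs=("${pci_net_devs[@]##*/}")                        # strip the sysfs path, keep ifnames
    net_devs+=("${pci_net_devs[@]}")
  done
  echo "Found net devices: ${net_devs[*]}"                         # cvl_0_0 cvl_0_1 in this run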
nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:37.966 15:53:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:37.966 15:53:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:37.966 15:53:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:37.966 15:53:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:11:37.966 Found net devices under 0000:86:00.1: cvl_0_1 00:11:37.966 15:53:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:37.966 15:53:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:11:37.966 15:53:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:11:37.966 15:53:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:11:37.966 15:53:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:11:37.966 15:53:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:11:37.966 15:53:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:37.966 15:53:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:37.966 15:53:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:37.966 15:53:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:37.966 15:53:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:37.966 15:53:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:37.966 15:53:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:37.966 15:53:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:37.966 15:53:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:37.966 15:53:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:37.966 15:53:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:37.966 15:53:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:37.966 15:53:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:37.966 15:53:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:37.966 15:53:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:37.966 15:53:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:37.966 15:53:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:37.966 15:53:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:37.966 15:53:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:37.966 15:53:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:37.966 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:11:37.966 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.164 ms 00:11:37.966 00:11:37.966 --- 10.0.0.2 ping statistics --- 00:11:37.966 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:37.966 rtt min/avg/max/mdev = 0.164/0.164/0.164/0.000 ms 00:11:37.966 15:53:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:37.966 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:37.966 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.220 ms 00:11:37.966 00:11:37.966 --- 10.0.0.1 ping statistics --- 00:11:37.966 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:37.966 rtt min/avg/max/mdev = 0.220/0.220/0.220/0.000 ms 00:11:37.966 15:53:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:37.966 15:53:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@422 -- # return 0 00:11:37.966 15:53:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:37.966 15:53:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:37.966 15:53:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:37.966 15:53:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:37.966 15:53:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:37.966 15:53:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:37.966 15:53:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:37.966 15:53:06 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:11:37.966 15:53:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:37.966 15:53:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@722 -- # xtrace_disable 00:11:37.966 15:53:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:37.966 15:53:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@481 -- # nvmfpid=3665960 00:11:37.966 15:53:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@482 -- # waitforlisten 3665960 00:11:37.966 15:53:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:11:37.966 15:53:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@829 -- # '[' -z 3665960 ']' 00:11:37.966 15:53:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:37.966 15:53:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:37.966 15:53:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:37.966 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:37.966 15:53:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:37.966 15:53:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:38.225 [2024-07-15 15:53:06.900470] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 
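
For orientation, the nvmf_tcp_init trace above boils down to the bring-up below. This is a sketch for reference, not part of the captured commands beyond what the trace already shows; cvl_0_0/cvl_0_1 are simply what the two E810 ports enumerated as on this runner, and 10.0.0.1/10.0.0.2 are the initiator/target addresses the harness assigns:

# Split-namespace NVMe/TCP topology, as performed by nvmf_tcp_init above
ip -4 addr flush cvl_0_0                                   # start from clean interfaces
ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk                               # target gets its own netns
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                  # move the target port into it
ip addr add 10.0.0.1/24 dev cvl_0_1                        # initiator side, default netns
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # open the NVMe/TCP port
ping -c 1 10.0.0.2                                         # initiator -> target sanity check
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1           # target -> initiator sanity check

The point of the namespace split is that traffic between 10.0.0.1 and 10.0.0.2 has to leave through the physical ports rather than the kernel loopback, so the phy test exercises the real NIC path; the two pings above are the harness confirming that path before starting the target.
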
00:11:38.225 [2024-07-15 15:53:06.900513] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:38.225 EAL: No free 2048 kB hugepages reported on node 1 00:11:38.225 [2024-07-15 15:53:06.957390] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:38.225 [2024-07-15 15:53:07.028220] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:38.225 [2024-07-15 15:53:07.028266] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:38.225 [2024-07-15 15:53:07.028273] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:38.225 [2024-07-15 15:53:07.028279] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:38.225 [2024-07-15 15:53:07.028284] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:38.225 [2024-07-15 15:53:07.028388] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:11:38.225 [2024-07-15 15:53:07.028494] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:11:38.225 [2024-07-15 15:53:07.028495] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:38.792 15:53:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:38.792 15:53:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@862 -- # return 0 00:11:38.792 15:53:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:38.792 15:53:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@728 -- # xtrace_disable 00:11:38.792 15:53:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:39.051 15:53:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:39.051 15:53:07 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:39.051 15:53:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:39.051 15:53:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:39.051 [2024-07-15 15:53:07.756971] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:39.051 15:53:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:39.051 15:53:07 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:11:39.051 15:53:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:39.051 15:53:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:39.051 15:53:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:39.051 15:53:07 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:39.051 15:53:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:39.051 15:53:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:39.051 [2024-07-15 15:53:07.781174] tcp.c: 
981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:39.051 15:53:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:39.051 15:53:07 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:11:39.051 15:53:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:39.051 15:53:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:39.051 NULL1 00:11:39.051 15:53:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:39.051 15:53:07 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=3666203 00:11:39.051 15:53:07 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:11:39.051 15:53:07 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:11:39.052 15:53:07 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:11:39.052 15:53:07 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:11:39.052 15:53:07 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:39.052 15:53:07 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:39.052 15:53:07 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:39.052 15:53:07 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:39.052 15:53:07 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:39.052 15:53:07 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:39.052 15:53:07 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:39.052 15:53:07 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:39.052 15:53:07 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:39.052 15:53:07 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:39.052 15:53:07 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:39.052 15:53:07 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:39.052 EAL: No free 2048 kB hugepages reported on node 1 00:11:39.052 15:53:07 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:39.052 15:53:07 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:39.052 15:53:07 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:39.052 15:53:07 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:39.052 15:53:07 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:39.052 15:53:07 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:39.052 15:53:07 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:39.052 15:53:07 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:39.052 15:53:07 
nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:39.052 15:53:07 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:39.052 15:53:07 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:39.052 15:53:07 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:39.052 15:53:07 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:39.052 15:53:07 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:39.052 15:53:07 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:39.052 15:53:07 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:39.052 15:53:07 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:39.052 15:53:07 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:39.052 15:53:07 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:39.052 15:53:07 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:39.052 15:53:07 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:39.052 15:53:07 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:39.052 15:53:07 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:39.052 15:53:07 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:39.052 15:53:07 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:39.052 15:53:07 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:39.052 15:53:07 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:39.052 15:53:07 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:39.052 15:53:07 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3666203 00:11:39.052 15:53:07 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:39.052 15:53:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:39.052 15:53:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:39.311 15:53:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:39.311 15:53:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3666203 00:11:39.311 15:53:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:39.311 15:53:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:39.311 15:53:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:39.878 15:53:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:39.878 15:53:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3666203 00:11:39.878 15:53:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:39.878 15:53:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:39.878 15:53:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:40.136 15:53:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:40.136 15:53:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # 
kill -0 3666203 00:11:40.136 15:53:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:40.136 15:53:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:40.136 15:53:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:40.395 15:53:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:40.395 15:53:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3666203 00:11:40.396 15:53:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:40.396 15:53:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:40.396 15:53:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:40.655 15:53:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:40.655 15:53:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3666203 00:11:40.655 15:53:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:40.655 15:53:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:40.655 15:53:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:40.914 15:53:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:40.914 15:53:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3666203 00:11:40.914 15:53:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:40.914 15:53:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:40.914 15:53:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:41.481 15:53:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:41.481 15:53:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3666203 00:11:41.481 15:53:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:41.481 15:53:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:41.481 15:53:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:41.740 15:53:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:41.740 15:53:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3666203 00:11:41.740 15:53:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:41.740 15:53:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:41.740 15:53:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:41.999 15:53:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:41.999 15:53:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3666203 00:11:41.999 15:53:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:41.999 15:53:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:41.999 15:53:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:42.257 15:53:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:42.257 15:53:11 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3666203 00:11:42.257 15:53:11 
nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:42.257 15:53:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:42.257 15:53:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:42.516 15:53:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:42.516 15:53:11 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3666203 00:11:42.517 15:53:11 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:42.517 15:53:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:42.517 15:53:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:43.085 15:53:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:43.085 15:53:11 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3666203 00:11:43.085 15:53:11 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:43.085 15:53:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:43.085 15:53:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:43.345 15:53:12 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:43.345 15:53:12 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3666203 00:11:43.345 15:53:12 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:43.345 15:53:12 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:43.345 15:53:12 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:43.604 15:53:12 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:43.604 15:53:12 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3666203 00:11:43.604 15:53:12 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:43.604 15:53:12 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:43.604 15:53:12 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:43.862 15:53:12 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:43.862 15:53:12 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3666203 00:11:43.862 15:53:12 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:43.862 15:53:12 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:43.862 15:53:12 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:44.121 15:53:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:44.121 15:53:13 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3666203 00:11:44.121 15:53:13 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:44.121 15:53:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:44.121 15:53:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:44.688 15:53:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:44.688 15:53:13 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3666203 00:11:44.688 15:53:13 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 
-- # rpc_cmd 00:11:44.688 15:53:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:44.688 15:53:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:44.947 15:53:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:44.947 15:53:13 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3666203 00:11:44.947 15:53:13 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:44.947 15:53:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:44.947 15:53:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:45.205 15:53:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:45.205 15:53:13 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3666203 00:11:45.205 15:53:13 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:45.205 15:53:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:45.205 15:53:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:45.463 15:53:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:45.463 15:53:14 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3666203 00:11:45.463 15:53:14 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:45.463 15:53:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:45.463 15:53:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:45.720 15:53:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:45.720 15:53:14 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3666203 00:11:45.721 15:53:14 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:45.721 15:53:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:45.721 15:53:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:46.312 15:53:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:46.312 15:53:14 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3666203 00:11:46.312 15:53:14 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:46.312 15:53:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:46.312 15:53:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:46.569 15:53:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:46.569 15:53:15 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3666203 00:11:46.569 15:53:15 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:46.569 15:53:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:46.569 15:53:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:46.827 15:53:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:46.827 15:53:15 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3666203 00:11:46.827 15:53:15 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:46.827 15:53:15 
nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:46.827 15:53:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:47.084 15:53:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:47.084 15:53:15 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3666203 00:11:47.084 15:53:15 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:47.084 15:53:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:47.084 15:53:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:47.342 15:53:16 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:47.342 15:53:16 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3666203 00:11:47.342 15:53:16 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:47.342 15:53:16 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:47.342 15:53:16 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:47.908 15:53:16 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:47.908 15:53:16 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3666203 00:11:47.908 15:53:16 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:47.908 15:53:16 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:47.908 15:53:16 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:48.167 15:53:16 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:48.167 15:53:16 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3666203 00:11:48.167 15:53:16 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:48.167 15:53:16 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:48.167 15:53:16 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:48.426 15:53:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:48.426 15:53:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3666203 00:11:48.426 15:53:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:48.426 15:53:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:48.426 15:53:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:48.685 15:53:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:48.685 15:53:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3666203 00:11:48.685 15:53:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:48.685 15:53:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:48.685 15:53:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:49.253 15:53:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:49.253 15:53:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3666203 00:11:49.253 15:53:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:49.253 15:53:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 
-- # xtrace_disable 00:11:49.253 15:53:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:49.253 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:11:49.513 15:53:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:49.513 15:53:18 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3666203 00:11:49.513 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (3666203) - No such process 00:11:49.513 15:53:18 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 3666203 00:11:49.513 15:53:18 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:11:49.513 15:53:18 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:11:49.513 15:53:18 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:11:49.513 15:53:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:49.513 15:53:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@117 -- # sync 00:11:49.513 15:53:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:49.513 15:53:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@120 -- # set +e 00:11:49.513 15:53:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:49.513 15:53:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:49.513 rmmod nvme_tcp 00:11:49.513 rmmod nvme_fabrics 00:11:49.513 rmmod nvme_keyring 00:11:49.513 15:53:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:49.513 15:53:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@124 -- # set -e 00:11:49.513 15:53:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@125 -- # return 0 00:11:49.513 15:53:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@489 -- # '[' -n 3665960 ']' 00:11:49.513 15:53:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@490 -- # killprocess 3665960 00:11:49.513 15:53:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@948 -- # '[' -z 3665960 ']' 00:11:49.513 15:53:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@952 -- # kill -0 3665960 00:11:49.513 15:53:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@953 -- # uname 00:11:49.513 15:53:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:49.513 15:53:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3665960 00:11:49.513 15:53:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:11:49.513 15:53:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:11:49.513 15:53:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3665960' 00:11:49.513 killing process with pid 3665960 00:11:49.513 15:53:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@967 -- # kill 3665960 00:11:49.513 15:53:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@972 -- # wait 3665960 00:11:49.773 15:53:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:49.773 15:53:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:49.773 15:53:18 nvmf_tcp.nvmf_connect_stress -- 
nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:49.773 15:53:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:49.773 15:53:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:49.773 15:53:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:49.773 15:53:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:49.773 15:53:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:51.679 15:53:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:11:51.679 00:11:51.679 real 0m19.332s 00:11:51.679 user 0m41.931s 00:11:51.679 sys 0m7.989s 00:11:51.679 15:53:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:51.679 15:53:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:51.679 ************************************ 00:11:51.679 END TEST nvmf_connect_stress 00:11:51.679 ************************************ 00:11:51.680 15:53:20 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:11:51.680 15:53:20 nvmf_tcp -- nvmf/nvmf.sh@34 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:11:51.680 15:53:20 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:11:51.680 15:53:20 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:51.680 15:53:20 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:51.939 ************************************ 00:11:51.939 START TEST nvmf_fused_ordering 00:11:51.939 ************************************ 00:11:51.939 15:53:20 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:11:51.939 * Looking for test storage... 
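
For reference on the connect_stress run that just ended: rpc_cmd in the trace effectively drives scripts/rpc.py against the default /var/tmp/spdk.sock socket (the waitforlisten line above shows that rpc_addr), and the repeated 'kill -0 3666203' lines are only liveness polls, since kill -0 sends no signal and just tests that the stress process still exists; the final 'No such process' line is how the script notices the run has finished. Expanded, the target bring-up amounts to the following sketch, with paths relative to the spdk checkout and the NQN/serial values as used above:

# Sketch: connect_stress target bring-up, expanded from the rpc_cmd trace
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
# (the script waits for the RPC socket before issuing these)
./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192           # TCP transport, options from NVMF_TRANSPORT_OPTS above
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10   # allow any host, max 10 namespaces
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
./scripts/rpc.py bdev_null_create NULL1 1000 512                   # 1000 MiB null bdev, 512 B blocks
./test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 &

connect_stress then spends roughly ten seconds (-t 10) connecting and disconnecting against cnode1 while the kill -0/rpc_cmd loop above keeps hammering the target with RPCs, which is the stress half of the test; the fused_ordering test that starts below reuses the same namespace plumbing.
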
00:11:51.939 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:51.939 15:53:20 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:51.939 15:53:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:11:51.939 15:53:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:51.939 15:53:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:51.939 15:53:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:51.939 15:53:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:51.939 15:53:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:51.939 15:53:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:51.939 15:53:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:51.939 15:53:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:51.939 15:53:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:51.939 15:53:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:51.939 15:53:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:11:51.939 15:53:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:11:51.939 15:53:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:51.939 15:53:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:51.939 15:53:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:51.939 15:53:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:51.939 15:53:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:51.939 15:53:20 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:51.939 15:53:20 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:51.939 15:53:20 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:51.939 15:53:20 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:51.939 15:53:20 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:51.939 15:53:20 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:51.939 15:53:20 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:11:51.939 15:53:20 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:51.939 15:53:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@47 -- # : 0 00:11:51.939 15:53:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:51.939 15:53:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:51.939 15:53:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:51.939 15:53:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:51.939 15:53:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:51.939 15:53:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:51.939 15:53:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:51.939 15:53:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:51.939 15:53:20 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 00:11:51.939 15:53:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:51.939 15:53:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:51.939 15:53:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:51.939 15:53:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:51.939 15:53:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:51.939 15:53:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:51.939 15:53:20 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> 
/dev/null' 00:11:51.939 15:53:20 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:51.939 15:53:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:11:51.939 15:53:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:11:51.939 15:53:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@285 -- # xtrace_disable 00:11:51.939 15:53:20 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:57.214 15:53:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:57.214 15:53:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@291 -- # pci_devs=() 00:11:57.214 15:53:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:57.214 15:53:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:57.214 15:53:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:57.214 15:53:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:57.214 15:53:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:57.214 15:53:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@295 -- # net_devs=() 00:11:57.214 15:53:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:57.214 15:53:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@296 -- # e810=() 00:11:57.214 15:53:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@296 -- # local -ga e810 00:11:57.214 15:53:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@297 -- # x722=() 00:11:57.214 15:53:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@297 -- # local -ga x722 00:11:57.214 15:53:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@298 -- # mlx=() 00:11:57.214 15:53:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@298 -- # local -ga mlx 00:11:57.214 15:53:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:57.214 15:53:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:57.214 15:53:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:57.214 15:53:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:57.214 15:53:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:57.214 15:53:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:57.214 15:53:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:57.214 15:53:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:57.214 15:53:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:57.214 15:53:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:57.214 15:53:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:57.214 15:53:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:57.214 15:53:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:57.214 15:53:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@327 -- # [[ e810 == 
mlx5 ]] 00:11:57.214 15:53:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:57.214 15:53:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:57.214 15:53:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:57.214 15:53:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:57.214 15:53:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:11:57.214 Found 0000:86:00.0 (0x8086 - 0x159b) 00:11:57.214 15:53:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:57.214 15:53:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:57.214 15:53:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:57.214 15:53:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:57.214 15:53:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:57.214 15:53:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:57.214 15:53:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:11:57.214 Found 0000:86:00.1 (0x8086 - 0x159b) 00:11:57.214 15:53:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:57.214 15:53:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:57.214 15:53:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:57.214 15:53:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:57.214 15:53:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:57.214 15:53:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:57.214 15:53:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:57.214 15:53:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:57.214 15:53:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:57.214 15:53:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:57.214 15:53:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:57.214 15:53:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:57.214 15:53:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:57.214 15:53:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:57.214 15:53:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:57.214 15:53:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:11:57.214 Found net devices under 0000:86:00.0: cvl_0_0 00:11:57.214 15:53:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:57.214 15:53:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:57.214 15:53:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:57.214 15:53:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:57.214 15:53:25 
nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:57.214 15:53:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:57.214 15:53:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:57.214 15:53:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:57.214 15:53:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:11:57.214 Found net devices under 0000:86:00.1: cvl_0_1 00:11:57.214 15:53:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:57.214 15:53:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:11:57.214 15:53:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # is_hw=yes 00:11:57.214 15:53:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:11:57.214 15:53:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:11:57.214 15:53:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:11:57.214 15:53:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:57.214 15:53:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:57.214 15:53:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:57.214 15:53:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:57.214 15:53:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:57.214 15:53:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:57.214 15:53:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:57.214 15:53:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:57.214 15:53:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:57.214 15:53:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:57.214 15:53:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:57.214 15:53:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:57.214 15:53:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:57.214 15:53:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:57.214 15:53:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:57.214 15:53:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:57.214 15:53:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:57.214 15:53:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:57.214 15:53:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:57.214 15:53:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:57.214 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:11:57.214 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.200 ms 00:11:57.214 00:11:57.214 --- 10.0.0.2 ping statistics --- 00:11:57.214 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:57.214 rtt min/avg/max/mdev = 0.200/0.200/0.200/0.000 ms 00:11:57.214 15:53:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:57.214 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:57.214 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.243 ms 00:11:57.214 00:11:57.214 --- 10.0.0.1 ping statistics --- 00:11:57.214 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:57.214 rtt min/avg/max/mdev = 0.243/0.243/0.243/0.000 ms 00:11:57.214 15:53:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:57.214 15:53:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@422 -- # return 0 00:11:57.214 15:53:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:57.214 15:53:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:57.214 15:53:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:57.214 15:53:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:57.214 15:53:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:57.214 15:53:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:57.214 15:53:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:57.214 15:53:25 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:11:57.214 15:53:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:57.214 15:53:25 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@722 -- # xtrace_disable 00:11:57.214 15:53:25 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:57.214 15:53:26 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@481 -- # nvmfpid=3671348 00:11:57.214 15:53:26 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@482 -- # waitforlisten 3671348 00:11:57.214 15:53:26 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@829 -- # '[' -z 3671348 ']' 00:11:57.214 15:53:26 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:57.214 15:53:26 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:57.214 15:53:26 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:57.214 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:57.214 15:53:26 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:11:57.214 15:53:26 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:57.215 15:53:26 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:57.215 [2024-07-15 15:53:26.052043] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 
00:11:57.215 [2024-07-15 15:53:26.052086] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:57.215 EAL: No free 2048 kB hugepages reported on node 1 00:11:57.215 [2024-07-15 15:53:26.110799] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:57.472 [2024-07-15 15:53:26.189753] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:57.472 [2024-07-15 15:53:26.189787] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:57.472 [2024-07-15 15:53:26.189794] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:57.472 [2024-07-15 15:53:26.189800] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:57.472 [2024-07-15 15:53:26.189805] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:57.472 [2024-07-15 15:53:26.189825] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:58.040 15:53:26 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:58.040 15:53:26 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@862 -- # return 0 00:11:58.040 15:53:26 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:58.040 15:53:26 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@728 -- # xtrace_disable 00:11:58.040 15:53:26 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:58.040 15:53:26 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:58.040 15:53:26 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:58.040 15:53:26 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:58.040 15:53:26 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:58.041 [2024-07-15 15:53:26.889668] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:58.041 15:53:26 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:58.041 15:53:26 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:11:58.041 15:53:26 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:58.041 15:53:26 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:58.041 15:53:26 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:58.041 15:53:26 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:58.041 15:53:26 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:58.041 15:53:26 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:58.041 [2024-07-15 15:53:26.905790] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:58.041 15:53:26 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:58.041 15:53:26 
nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:11:58.041 15:53:26 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:58.041 15:53:26 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:58.041 NULL1 00:11:58.041 15:53:26 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:58.041 15:53:26 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:11:58.041 15:53:26 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:58.041 15:53:26 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:58.041 15:53:26 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:58.041 15:53:26 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:11:58.041 15:53:26 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:58.041 15:53:26 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:58.041 15:53:26 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:58.041 15:53:26 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:11:58.041 [2024-07-15 15:53:26.958205] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 00:11:58.041 [2024-07-15 15:53:26.958255] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3671593 ] 00:11:58.300 EAL: No free 2048 kB hugepages reported on node 1 00:11:58.560 Attached to nqn.2016-06.io.spdk:cnode1 00:11:58.560 Namespace ID: 1 size: 1GB 00:11:58.560 fused_ordering(0) 00:11:58.560 fused_ordering(1) 00:11:58.560 fused_ordering(2) 00:11:58.560 fused_ordering(3) 00:11:58.560 fused_ordering(4) 00:11:58.560 fused_ordering(5) 00:11:58.560 fused_ordering(6) 00:11:58.560 fused_ordering(7) 00:11:58.560 fused_ordering(8) 00:11:58.560 fused_ordering(9) 00:11:58.560 fused_ordering(10) 00:11:58.560 fused_ordering(11) 00:11:58.560 fused_ordering(12) 00:11:58.560 fused_ordering(13) 00:11:58.560 fused_ordering(14) 00:11:58.560 fused_ordering(15) 00:11:58.560 fused_ordering(16) 00:11:58.560 fused_ordering(17) 00:11:58.560 fused_ordering(18) 00:11:58.560 fused_ordering(19) 00:11:58.560 fused_ordering(20) 00:11:58.560 fused_ordering(21) 00:11:58.560 fused_ordering(22) 00:11:58.560 fused_ordering(23) 00:11:58.560 fused_ordering(24) 00:11:58.560 fused_ordering(25) 00:11:58.560 fused_ordering(26) 00:11:58.560 fused_ordering(27) 00:11:58.560 fused_ordering(28) 00:11:58.560 fused_ordering(29) 00:11:58.560 fused_ordering(30) 00:11:58.560 fused_ordering(31) 00:11:58.560 fused_ordering(32) 00:11:58.560 fused_ordering(33) 00:11:58.560 fused_ordering(34) 00:11:58.560 fused_ordering(35) 00:11:58.560 fused_ordering(36) 00:11:58.560 fused_ordering(37) 00:11:58.560 fused_ordering(38) 00:11:58.560 fused_ordering(39) 00:11:58.560 fused_ordering(40) 00:11:58.560 fused_ordering(41) 00:11:58.560 fused_ordering(42) 00:11:58.560 fused_ordering(43) 00:11:58.560 
fused_ordering(44) ... fused_ordering(1012) [iterations 44-1012 of the run, identical apart from the incrementing counter, condensed; timestamps advance from 00:11:58.560 to 00:12:00.216]
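These counters come from the fused_ordering tool at test/nvme/fused_ordering, driven against the subsystem assembled earlier in the trace via rpc_cmd, which in the suite wraps SPDK's rpc.py client. For reference, a sketch of the equivalent standalone configuration; every argument value here is taken from the log, and the default RPC socket /var/tmp/spdk.sock is assumed:

  RPC="scripts/rpc.py -s /var/tmp/spdk.sock"
  $RPC nvmf_create_transport -t tcp -o -u 8192                  # TCP transport, 8 KiB IO unit size
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $RPC bdev_null_create NULL1 1000 512                          # 1000 MB null bdev, 512 B blocks
  $RPC bdev_wait_for_examine
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1   # reported as "Namespace ID: 1 size: 1GB"
  test/nvme/fused_ordering/fused_ordering \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'

Each fused_ordering(N) line marks one completed iteration; the run finishes all 1024 iterations (0-1023) before the teardown that follows.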
00:12:00.216 fused_ordering(1013) 00:12:00.216 fused_ordering(1014) 00:12:00.217 fused_ordering(1015) 00:12:00.217 fused_ordering(1016) 00:12:00.217 fused_ordering(1017) 00:12:00.217 fused_ordering(1018) 00:12:00.217 fused_ordering(1019) 00:12:00.217 fused_ordering(1020) 00:12:00.217 fused_ordering(1021) 00:12:00.217 fused_ordering(1022) 00:12:00.217 fused_ordering(1023) 00:12:00.217 15:53:28 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:12:00.217 15:53:28 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:12:00.217 15:53:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:00.217 15:53:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@117 -- # sync 00:12:00.217 15:53:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:00.217 15:53:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@120 -- # set +e 00:12:00.217 15:53:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:00.217 15:53:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:00.217 rmmod nvme_tcp 00:12:00.217 rmmod nvme_fabrics 00:12:00.217 rmmod nvme_keyring 00:12:00.217 15:53:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:00.217 15:53:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set -e 00:12:00.217 15:53:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@125 -- # return 0 00:12:00.217 15:53:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@489 -- # '[' -n 3671348 ']' 00:12:00.217 15:53:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@490 -- # killprocess 3671348 00:12:00.217 15:53:28 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@948 -- # '[' -z 3671348 ']' 00:12:00.217 15:53:28 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@952 -- # kill -0 3671348 00:12:00.217 15:53:28 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@953 -- # uname 00:12:00.217 15:53:28 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:00.217 15:53:28 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3671348 00:12:00.217 15:53:29 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:12:00.217 15:53:29 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:12:00.217 15:53:29 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3671348' 00:12:00.217 killing process with pid 3671348 00:12:00.217 15:53:29 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@967 -- # kill 3671348 00:12:00.217 15:53:29 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@972 -- # wait 3671348 00:12:00.480 15:53:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:00.481 15:53:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:00.481 15:53:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:00.481 15:53:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:00.481 15:53:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:00.481 15:53:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:00.481 15:53:29 nvmf_tcp.nvmf_fused_ordering -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:00.481 15:53:29 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:02.389 15:53:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:02.389 00:12:02.389 real 0m10.648s 00:12:02.389 user 0m5.523s 00:12:02.389 sys 0m5.525s 00:12:02.389 15:53:31 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:02.389 15:53:31 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:02.389 ************************************ 00:12:02.389 END TEST nvmf_fused_ordering 00:12:02.389 ************************************ 00:12:02.389 15:53:31 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:12:02.389 15:53:31 nvmf_tcp -- nvmf/nvmf.sh@35 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:12:02.389 15:53:31 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:12:02.389 15:53:31 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:02.389 15:53:31 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:02.648 ************************************ 00:12:02.648 START TEST nvmf_delete_subsystem 00:12:02.648 ************************************ 00:12:02.648 15:53:31 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:12:02.648 * Looking for test storage... 00:12:02.648 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:02.648 15:53:31 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:02.648 15:53:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:12:02.648 15:53:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:02.648 15:53:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:02.648 15:53:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:02.648 15:53:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:02.648 15:53:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:02.648 15:53:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:02.648 15:53:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:02.648 15:53:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:02.648 15:53:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:02.648 15:53:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:02.648 15:53:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:12:02.648 15:53:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:12:02.648 15:53:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:02.648 15:53:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:02.648 15:53:31 
nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:02.648 15:53:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:02.648 15:53:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:02.648 15:53:31 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:02.648 15:53:31 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:02.648 15:53:31 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:02.648 15:53:31 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:...:$PATH 00:12:02.648 15:53:31 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:...:$PATH 00:12:02.648 15:53:31 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:...:$PATH 00:12:02.648 15:53:31 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:12:02.648 15:53:31 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo $PATH [the /opt/golangci, /opt/protoc and /opt/go toolchain entries are repeated six times in the recorded PATH values; condensed] 00:12:02.648 15:53:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@47 -- # : 0 00:12:02.648 15:53:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:02.648 15:53:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:02.648 15:53:31
nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:02.648 15:53:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:02.648 15:53:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:02.648 15:53:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:02.648 15:53:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:02.648 15:53:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:02.648 15:53:31 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:12:02.648 15:53:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:02.648 15:53:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:02.648 15:53:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:02.648 15:53:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:02.648 15:53:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:02.648 15:53:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:02.648 15:53:31 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:02.648 15:53:31 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:02.648 15:53:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:02.648 15:53:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:02.648 15:53:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@285 -- # xtrace_disable 00:12:02.648 15:53:31 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:12:08.019 15:53:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:08.019 15:53:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # pci_devs=() 00:12:08.019 15:53:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:08.019 15:53:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:08.019 15:53:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:08.019 15:53:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:08.019 15:53:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:08.019 15:53:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # net_devs=() 00:12:08.019 15:53:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:08.019 15:53:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # e810=() 00:12:08.019 15:53:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # local -ga e810 00:12:08.019 15:53:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # x722=() 00:12:08.019 15:53:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # local -ga x722 00:12:08.019 15:53:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # mlx=() 00:12:08.019 15:53:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # local -ga mlx 00:12:08.019 15:53:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:08.019 15:53:36 
nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:08.019 15:53:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:08.019 15:53:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:08.019 15:53:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:08.019 15:53:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:08.019 15:53:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:08.019 15:53:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:08.019 15:53:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:08.019 15:53:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:08.019 15:53:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:08.019 15:53:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:08.019 15:53:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:08.019 15:53:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:12:08.019 15:53:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:08.019 15:53:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:08.019 15:53:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:08.019 15:53:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:08.019 15:53:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:12:08.019 Found 0000:86:00.0 (0x8086 - 0x159b) 00:12:08.019 15:53:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:08.019 15:53:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:08.019 15:53:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:08.019 15:53:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:08.019 15:53:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:08.019 15:53:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:08.019 15:53:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:12:08.019 Found 0000:86:00.1 (0x8086 - 0x159b) 00:12:08.019 15:53:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:08.019 15:53:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:08.019 15:53:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:08.019 15:53:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:08.019 15:53:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:08.019 15:53:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:08.019 15:53:36 
nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:08.019 15:53:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:08.019 15:53:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:08.019 15:53:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:08.019 15:53:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:08.019 15:53:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:08.019 15:53:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:08.019 15:53:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:08.019 15:53:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:08.019 15:53:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:12:08.019 Found net devices under 0000:86:00.0: cvl_0_0 00:12:08.019 15:53:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:08.019 15:53:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:08.019 15:53:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:08.019 15:53:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:08.019 15:53:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:08.019 15:53:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:08.019 15:53:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:08.019 15:53:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:08.019 15:53:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:12:08.019 Found net devices under 0000:86:00.1: cvl_0_1 00:12:08.019 15:53:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:08.019 15:53:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:08.019 15:53:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # is_hw=yes 00:12:08.019 15:53:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:08.019 15:53:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:12:08.019 15:53:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:12:08.019 15:53:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:08.019 15:53:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:08.019 15:53:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:08.019 15:53:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:08.019 15:53:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:08.019 15:53:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:08.019 15:53:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:08.019 15:53:36 
nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:08.019 15:53:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:08.019 15:53:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:08.019 15:53:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:08.020 15:53:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:08.020 15:53:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:08.020 15:53:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:08.020 15:53:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:08.020 15:53:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:08.020 15:53:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:08.020 15:53:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:08.020 15:53:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:08.020 15:53:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:08.020 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:08.020 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.265 ms 00:12:08.020 00:12:08.020 --- 10.0.0.2 ping statistics --- 00:12:08.020 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:08.020 rtt min/avg/max/mdev = 0.265/0.265/0.265/0.000 ms 00:12:08.020 15:53:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:08.020 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:08.020 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.218 ms 00:12:08.020 00:12:08.020 --- 10.0.0.1 ping statistics --- 00:12:08.020 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:08.020 rtt min/avg/max/mdev = 0.218/0.218/0.218/0.000 ms 00:12:08.020 15:53:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:08.020 15:53:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # return 0 00:12:08.020 15:53:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:08.020 15:53:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:08.020 15:53:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:08.020 15:53:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:08.020 15:53:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:08.020 15:53:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:08.020 15:53:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:08.020 15:53:36 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:12:08.020 15:53:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:08.020 15:53:36 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:08.020 15:53:36 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:12:08.020 15:53:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@481 -- # nvmfpid=3675342 00:12:08.020 15:53:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # waitforlisten 3675342 00:12:08.020 15:53:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:12:08.020 15:53:36 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@829 -- # '[' -z 3675342 ']' 00:12:08.020 15:53:36 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:08.020 15:53:36 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:08.020 15:53:36 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:08.020 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:08.020 15:53:36 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:08.020 15:53:36 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:12:08.020 [2024-07-15 15:53:36.775772] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 00:12:08.020 [2024-07-15 15:53:36.775815] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:08.020 EAL: No free 2048 kB hugepages reported on node 1 00:12:08.020 [2024-07-15 15:53:36.832383] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:12:08.020 [2024-07-15 15:53:36.912161] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:12:08.020 [2024-07-15 15:53:36.912192] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:08.020 [2024-07-15 15:53:36.912200] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:08.020 [2024-07-15 15:53:36.912206] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:08.020 [2024-07-15 15:53:36.912211] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:08.020 [2024-07-15 15:53:36.912256] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:08.020 [2024-07-15 15:53:36.912259] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:08.956 15:53:37 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:08.956 15:53:37 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@862 -- # return 0 00:12:08.956 15:53:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:08.956 15:53:37 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:08.956 15:53:37 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:12:08.956 15:53:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:08.956 15:53:37 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:08.956 15:53:37 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:08.956 15:53:37 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:12:08.956 [2024-07-15 15:53:37.616744] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:08.956 15:53:37 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:08.956 15:53:37 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:12:08.956 15:53:37 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:08.956 15:53:37 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:12:08.956 15:53:37 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:08.956 15:53:37 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:08.956 15:53:37 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:08.956 15:53:37 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:12:08.956 [2024-07-15 15:53:37.636911] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:08.956 15:53:37 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:08.956 15:53:37 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:12:08.956 15:53:37 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:08.956 15:53:37 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:12:08.956 NULL1 00:12:08.956 15:53:37 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 
== 0 ]] 00:12:08.956 15:53:37 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:12:08.956 15:53:37 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:08.956 15:53:37 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:12:08.956 Delay0 00:12:08.956 15:53:37 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:08.956 15:53:37 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:08.956 15:53:37 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:08.956 15:53:37 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:12:08.956 15:53:37 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:08.956 15:53:37 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=3675378 00:12:08.956 15:53:37 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:12:08.956 15:53:37 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:12:08.956 EAL: No free 2048 kB hugepages reported on node 1 00:12:08.956 [2024-07-15 15:53:37.717589] subsystem.c:1572:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
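Taken together, the trace above is a compact bring-up recipe: one TCP transport, one subsystem listening on 10.0.0.2:4420, and a null bdev wrapped in a delay bdev so that every I/O takes on the order of a second. Replayed by hand against the /var/tmp/spdk.sock RPC socket named earlier, the same sequence looks roughly like the sketch below; it uses scripts/rpc.py directly rather than the harness's rpc_cmd wrapper, and the comments are interpretations of the flags, not text from the log.

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    # transport, subsystem and listener -- arguments verbatim from the trace above
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    # backing stack: a 1000 MiB null bdev with 512-byte blocks, fronted by a delay
    # bdev injecting 1000000 us (~1 s) average and p99 latency on reads and writes
    $rpc bdev_null_create NULL1 1000 512
    $rpc bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0

The delay bdev is the point of the test: with roughly a second per I/O and a deep queue, there is always a large backlog in flight when the subsystem is deleted out from under the initiator.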
00:12:10.898 15:53:39 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:12:10.898 15:53:39 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable
00:12:10.898 15:53:39 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
[hundreds of interleaved 'Read completed with error (sct=0, sc=8)' / 'Write completed with error (sct=0, sc=8)' completions and repeated 'starting I/O failed: -6' markers, spread through the span below, omitted for readability]
00:12:10.898 [2024-07-15 15:53:39.758437] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f0484000c00 is same with the state(5) to be set
00:12:10.899 [2024-07-15 15:53:39.759120] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80e5c0 is same with the state(5) to be set
00:12:11.830 [2024-07-15 15:53:40.730848] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80fac0 is same with the state(5) to be set
00:12:11.830 [2024-07-15 15:53:40.760251] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f048400d600 is same with the state(5) to be set
00:12:11.830 [2024-07-15 15:53:40.760958] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80e3e0 is same with the state(5) to be set
00:12:11.830 [2024-07-15 15:53:40.761173] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80e7a0 is same with the state(5) to be set
00:12:11.830 [2024-07-15 15:53:40.761341] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80e000 is same with the state(5) to be set
00:12:11.830 Initializing NVMe Controllers
00:12:11.830 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:12:11.830 Controller IO queue size 128, less than required.
00:12:11.830 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:12:11.830 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:12:11.830 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:12:11.830 Initialization complete. Launching workers.
00:12:11.830 ======================================================== 00:12:11.830 Latency(us) 00:12:11.830 Device Information : IOPS MiB/s Average min max 00:12:11.830 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 195.54 0.10 943786.47 925.41 1012394.24 00:12:11.830 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 154.35 0.08 880484.97 378.30 1012316.76 00:12:11.830 ======================================================== 00:12:11.830 Total : 349.89 0.17 915861.98 378.30 1012394.24 00:12:11.830 00:12:11.830 [2024-07-15 15:53:40.762046] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80fac0 (9): Bad file descriptor 00:12:11.830 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:12:12.087 15:53:40 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:12.087 15:53:40 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:12:12.087 15:53:40 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 3675378 00:12:12.087 15:53:40 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:12:12.343 15:53:41 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:12:12.343 15:53:41 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 3675378 00:12:12.343 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (3675378) - No such process 00:12:12.343 15:53:41 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 3675378 00:12:12.343 15:53:41 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@648 -- # local es=0 00:12:12.343 15:53:41 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@650 -- # valid_exec_arg wait 3675378 00:12:12.343 15:53:41 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@636 -- # local arg=wait 00:12:12.343 15:53:41 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:12.343 15:53:41 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # type -t wait 00:12:12.343 15:53:41 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:12.343 15:53:41 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@651 -- # wait 3675378 00:12:12.343 15:53:41 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@651 -- # es=1 00:12:12.343 15:53:41 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:12:12.343 15:53:41 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:12:12.343 15:53:41 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:12:12.600 15:53:41 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:12:12.600 15:53:41 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:12.600 15:53:41 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:12:12.600 15:53:41 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:12.600 15:53:41 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 
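Two details in the wreckage above are worth decoding. First, every failed completion carries (sct=0, sc=8): status code type 0 is the generic command status set, and status 0x08 is 'Command Aborted due to SQ Deletion', which is exactly what in-flight commands should report when nvmf_delete_subsystem tears down the queue pairs beneath them. Second, the 'kill: (3675378) - No such process' line is the harness's poll loop observing that spdk_nvme_perf died once its target vanished; reconstructed from the delete_subsystem.sh@34-@45 markers in the trace, the pattern is roughly the following sketch, not the verbatim script:

    delay=0
    while kill -0 "$perf_pid"; do       # @35: is perf still running?
        sleep 0.5                       # @36
        (( delay++ > 30 )) && return 1  # @38: ~15 s budget before declaring a hang
    done
    NOT wait "$perf_pid"                # @45: reaping perf must report a nonzero exit

NOT is the harness helper visible in the trace: it inverts the exit status, so this step passes only because perf genuinely failed.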
00:12:12.600 15:53:41 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:12.600 15:53:41 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:12:12.600 [2024-07-15 15:53:41.294393] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:12.600 15:53:41 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:12.600 15:53:41 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:12.600 15:53:41 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:12.600 15:53:41 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:12:12.600 15:53:41 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:12.600 15:53:41 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=3676059 00:12:12.600 15:53:41 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:12:12.600 15:53:41 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:12:12.600 15:53:41 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3676059 00:12:12.600 15:53:41 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:12:12.600 EAL: No free 2048 kB hugepages reported on node 1 00:12:12.600 [2024-07-15 15:53:41.360031] subsystem.c:1572:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
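The second pass recreates the same subsystem and namespace and launches another spdk_nvme_perf, this time capped at three seconds. Reflowed for readability, and with flag readings that come from familiarity with the tool rather than from this log, the invocation is:

    # core mask 0xC = lcores 2 and 3 (matching the 'Associating ... lcore 2/3' lines),
    # 3-second run, queue depth 128, 512-byte I/O, random 70/30 read/write mix,
    # and 4 I/O queue pairs per namespace
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf \
        -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
        -t 3 -q 128 -w randrw -M 70 -o 512 -P 4

This run is left to finish on its own. In the latency table that follows, average latency lands around 1,003,600 us with minimums just above 1,000,000 us, i.e. Delay0's injected one-second latency dominates, and each core sustains about 128 IOPS, consistent with a ~1 s service time at the configured queue depth.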
00:12:13.164 15:53:41 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:12:13.164 15:53:41 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3676059
00:12:13.164 15:53:41 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
[the same delay/kill -0 3676059/sleep 0.5 poll repeats at 00:12:13.422, 00:12:13.987, 00:12:14.553, 00:12:15.117 and 00:12:15.682 while the 3-second perf run completes]
00:12:15.682 Initializing NVMe Controllers
00:12:15.682 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:12:15.682 Controller IO queue size 128, less than required.
00:12:15.682 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:12:15.682 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:12:15.682 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:12:15.682 Initialization complete. Launching workers.
00:12:15.682 ======================================================== 00:12:15.682 Latency(us) 00:12:15.682 Device Information : IOPS MiB/s Average min max 00:12:15.682 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1003587.67 1000145.39 1041208.30 00:12:15.682 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1004907.45 1000236.45 1011706.64 00:12:15.682 ======================================================== 00:12:15.682 Total : 256.00 0.12 1004247.56 1000145.39 1041208.30 00:12:15.682 00:12:15.943 15:53:44 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:12:15.943 15:53:44 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3676059 00:12:15.943 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (3676059) - No such process 00:12:15.943 15:53:44 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 3676059 00:12:15.943 15:53:44 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:12:15.943 15:53:44 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:12:15.943 15:53:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:15.943 15:53:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # sync 00:12:15.943 15:53:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:15.943 15:53:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@120 -- # set +e 00:12:15.943 15:53:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:15.943 15:53:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:15.943 rmmod nvme_tcp 00:12:15.943 rmmod nvme_fabrics 00:12:16.201 rmmod nvme_keyring 00:12:16.201 15:53:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:16.201 15:53:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set -e 00:12:16.201 15:53:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # return 0 00:12:16.201 15:53:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@489 -- # '[' -n 3675342 ']' 00:12:16.201 15:53:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@490 -- # killprocess 3675342 00:12:16.201 15:53:44 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@948 -- # '[' -z 3675342 ']' 00:12:16.201 15:53:44 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@952 -- # kill -0 3675342 00:12:16.201 15:53:44 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@953 -- # uname 00:12:16.201 15:53:44 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:16.201 15:53:44 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3675342 00:12:16.201 15:53:44 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:12:16.201 15:53:44 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:12:16.201 15:53:44 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3675342' 00:12:16.202 killing process with pid 3675342 00:12:16.202 15:53:44 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@967 -- # kill 3675342 00:12:16.202 15:53:44 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # wait 
3675342 00:12:16.460 15:53:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:16.460 15:53:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:16.460 15:53:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:16.460 15:53:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:16.460 15:53:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:16.460 15:53:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:16.460 15:53:45 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:16.460 15:53:45 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:18.361 15:53:47 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:18.361 00:12:18.361 real 0m15.863s 00:12:18.361 user 0m30.107s 00:12:18.361 sys 0m4.788s 00:12:18.361 15:53:47 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:18.361 15:53:47 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:12:18.361 ************************************ 00:12:18.361 END TEST nvmf_delete_subsystem 00:12:18.361 ************************************ 00:12:18.361 15:53:47 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:12:18.361 15:53:47 nvmf_tcp -- nvmf/nvmf.sh@36 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:12:18.361 15:53:47 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:12:18.361 15:53:47 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:18.361 15:53:47 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:18.361 ************************************ 00:12:18.361 START TEST nvmf_ns_masking 00:12:18.361 ************************************ 00:12:18.361 15:53:47 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1123 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:12:18.621 * Looking for test storage... 
* Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:12:18.621 15:53:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:12:18.621 15:53:47 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@7 -- # uname -s
00:12:18.621 15:53:47 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:12:18.621 15:53:47 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:12:18.621 15:53:47 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:12:18.621 15:53:47 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:12:18.621 15:53:47 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:12:18.621 15:53:47 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:12:18.621 15:53:47 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:12:18.621 15:53:47 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:12:18.621 15:53:47 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:12:18.621 15:53:47 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:12:18.621 15:53:47 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562
00:12:18.621 15:53:47 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562
00:12:18.621 15:53:47 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:12:18.621 15:53:47 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:12:18.621 15:53:47 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:12:18.621 15:53:47 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:12:18.621 15:53:47 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:12:18.621 15:53:47 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]]
00:12:18.621 15:53:47 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:12:18.621 15:53:47 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
[paths/export.sh@2-@6: PATH is repeatedly prefixed with /opt/golangci/1.54.2/bin, /opt/protoc/21.7/bin and /opt/go/1.21.1/bin, then exported and echoed; the full, heavily duplicated values are omitted here]
00:12:18.621 15:53:47 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@47 -- # : 0
00:12:18.621 15:53:47 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID
00:12:18.621 15:53:47 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@49 -- # build_nvmf_app_args
00:12:18.621 15:53:47 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:12:18.621 15:53:47 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:12:18.621 15:53:47 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:12:18.621 15:53:47 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' -n '' ']'
00:12:18.621 15:53:47 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']'
00:12:18.621 15:53:47 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@51 -- # have_pci_nics=0
00:12:18.621 15:53:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:12:18.621 15:53:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock
00:12:18.621 15:53:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5
00:12:18.621 15:53:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen
00:12:18.621 15:53:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=4fd1147c-50ea-461c-b1d1-69efe15f6a89
00:12:18.621 15:53:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen
00:12:18.621 15:53:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=77466772-8337-44e8-a91f-1ecd6692eca8
00:12:18.621 15:53:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@16 -- #
SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:12:18.621 15:53:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:12:18.621 15:53:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:12:18.621 15:53:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:12:18.621 15:53:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=21b560b0-9483-446a-af1b-d825f68c6c4e 00:12:18.621 15:53:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:12:18.621 15:53:47 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:18.621 15:53:47 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:18.621 15:53:47 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:18.621 15:53:47 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:18.621 15:53:47 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:18.621 15:53:47 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:18.621 15:53:47 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:18.621 15:53:47 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:18.621 15:53:47 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:18.621 15:53:47 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:18.621 15:53:47 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@285 -- # xtrace_disable 00:12:18.621 15:53:47 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:12:23.886 15:53:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:23.886 15:53:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@291 -- # pci_devs=() 00:12:23.886 15:53:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:23.886 15:53:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:23.886 15:53:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:23.886 15:53:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:23.886 15:53:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:23.886 15:53:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@295 -- # net_devs=() 00:12:23.886 15:53:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:23.886 15:53:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@296 -- # e810=() 00:12:23.886 15:53:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@296 -- # local -ga e810 00:12:23.886 15:53:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@297 -- # x722=() 00:12:23.886 15:53:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@297 -- # local -ga x722 00:12:23.886 15:53:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@298 -- # mlx=() 00:12:23.886 15:53:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@298 -- # local -ga mlx 00:12:23.886 15:53:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:23.886 15:53:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:23.886 15:53:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:23.886 15:53:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@306 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:23.886 15:53:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:23.886 15:53:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:23.886 15:53:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:23.886 15:53:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:23.886 15:53:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:23.886 15:53:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:23.886 15:53:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:23.886 15:53:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:23.886 15:53:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:23.886 15:53:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:12:23.886 15:53:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:23.886 15:53:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:23.887 15:53:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:23.887 15:53:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:23.887 15:53:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:12:23.887 Found 0000:86:00.0 (0x8086 - 0x159b) 00:12:23.887 15:53:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:23.887 15:53:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:23.887 15:53:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:23.887 15:53:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:23.887 15:53:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:23.887 15:53:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:23.887 15:53:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:12:23.887 Found 0000:86:00.1 (0x8086 - 0x159b) 00:12:23.887 15:53:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:23.887 15:53:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:23.887 15:53:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:23.887 15:53:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:23.887 15:53:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:23.887 15:53:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:23.887 15:53:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:23.887 15:53:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:23.887 15:53:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:23.887 15:53:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:23.887 15:53:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:23.887 
15:53:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:23.887 15:53:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:23.887 15:53:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:23.887 15:53:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:23.887 15:53:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:12:23.887 Found net devices under 0000:86:00.0: cvl_0_0 00:12:23.887 15:53:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:23.887 15:53:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:23.887 15:53:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:23.887 15:53:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:23.887 15:53:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:23.887 15:53:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:23.887 15:53:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:23.887 15:53:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:23.887 15:53:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:12:23.887 Found net devices under 0000:86:00.1: cvl_0_1 00:12:23.887 15:53:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:23.887 15:53:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:23.887 15:53:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # is_hw=yes 00:12:23.887 15:53:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:23.887 15:53:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:12:23.887 15:53:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:12:23.887 15:53:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:23.887 15:53:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:23.887 15:53:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:23.887 15:53:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:23.887 15:53:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:23.887 15:53:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:23.887 15:53:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:23.887 15:53:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:23.887 15:53:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:23.887 15:53:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:23.887 15:53:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:23.887 15:53:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:23.887 15:53:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:23.887 15:53:52 nvmf_tcp.nvmf_ns_masking 
-- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:12:23.887 15:53:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:12:23.887 15:53:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up
00:12:23.887 15:53:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:12:23.887 15:53:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:12:23.887 15:53:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:12:23.887 15:53:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:12:23.887 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:12:23.887 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.180 ms
00:12:23.887 
00:12:23.887 --- 10.0.0.2 ping statistics ---
00:12:23.887 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:12:23.887 rtt min/avg/max/mdev = 0.180/0.180/0.180/0.000 ms
00:12:23.887 15:53:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:12:23.887 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:12:23.887 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.205 ms
00:12:23.887 
00:12:23.887 --- 10.0.0.1 ping statistics ---
00:12:23.887 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:12:23.887 rtt min/avg/max/mdev = 0.205/0.205/0.205/0.000 ms
00:12:23.887 15:53:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:12:23.887 15:53:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@422 -- # return 0
00:12:23.887 15:53:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:12:23.887 15:53:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:12:23.887 15:53:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:12:23.887 15:53:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:12:23.887 15:53:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:12:23.887 15:53:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:12:23.887 15:53:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:12:23.887 15:53:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart
00:12:23.887 15:53:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:12:23.887 15:53:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@722 -- # xtrace_disable
00:12:23.887 15:53:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x
00:12:23.887 15:53:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@481 -- # nvmfpid=3680052
00:12:23.887 15:53:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@482 -- # waitforlisten 3680052
00:12:23.887 15:53:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@829 -- # '[' -z 3680052 ']'
00:12:23.887 15:53:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:12:23.887 15:53:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@834 -- # local max_retries=100
00:12:23.887 15:53:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:12:23.887 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:23.887 15:53:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:23.887 15:53:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:12:23.887 15:53:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:12:23.887 [2024-07-15 15:53:52.339305] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 00:12:23.887 [2024-07-15 15:53:52.339350] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:23.887 EAL: No free 2048 kB hugepages reported on node 1 00:12:23.887 [2024-07-15 15:53:52.396344] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:23.887 [2024-07-15 15:53:52.474991] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:23.887 [2024-07-15 15:53:52.475024] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:23.887 [2024-07-15 15:53:52.475031] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:23.887 [2024-07-15 15:53:52.475037] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:23.887 [2024-07-15 15:53:52.475042] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:23.887 [2024-07-15 15:53:52.475059] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:24.471 15:53:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:24.471 15:53:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@862 -- # return 0 00:12:24.471 15:53:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:24.471 15:53:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:24.471 15:53:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:12:24.471 15:53:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:24.471 15:53:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:12:24.471 [2024-07-15 15:53:53.318792] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:24.471 15:53:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:12:24.471 15:53:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:12:24.471 15:53:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:12:24.735 Malloc1 00:12:24.735 15:53:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:12:24.993 Malloc2 00:12:24.993 15:53:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s 
SPDKISFASTANDAWESOME 00:12:24.993 15:53:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:12:25.252 15:53:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:25.510 [2024-07-15 15:53:54.200048] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:25.510 15:53:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:12:25.510 15:53:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 21b560b0-9483-446a-af1b-d825f68c6c4e -a 10.0.0.2 -s 4420 -i 4 00:12:25.510 15:53:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 00:12:25.510 15:53:54 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:12:25.510 15:53:54 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:12:25.510 15:53:54 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:12:25.510 15:53:54 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:12:28.045 15:53:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:12:28.045 15:53:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:12:28.045 15:53:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:12:28.045 15:53:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:12:28.045 15:53:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:12:28.045 15:53:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:12:28.045 15:53:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:12:28.045 15:53:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:12:28.045 15:53:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:12:28.045 15:53:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:12:28.045 15:53:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:12:28.045 15:53:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:28.045 15:53:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:12:28.045 [ 0]:0x1 00:12:28.045 15:53:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:28.045 15:53:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:28.045 15:53:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=1c3587854ee24b4b9cdeacf2e64f830a 00:12:28.045 15:53:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 1c3587854ee24b4b9cdeacf2e64f830a != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:28.045 15:53:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 
Malloc2 -n 2 00:12:28.045 15:53:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:12:28.045 15:53:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:28.045 15:53:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:12:28.045 [ 0]:0x1 00:12:28.045 15:53:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:28.045 15:53:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:28.045 15:53:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=1c3587854ee24b4b9cdeacf2e64f830a 00:12:28.045 15:53:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 1c3587854ee24b4b9cdeacf2e64f830a != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:28.045 15:53:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:12:28.045 15:53:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:28.045 15:53:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:12:28.045 [ 1]:0x2 00:12:28.045 15:53:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:12:28.046 15:53:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:28.046 15:53:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=f36c9feea33e4190a3018b13564cabc1 00:12:28.046 15:53:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ f36c9feea33e4190a3018b13564cabc1 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:28.046 15:53:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect 00:12:28.046 15:53:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:28.046 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:28.046 15:53:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:28.304 15:53:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:12:28.304 15:53:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1 00:12:28.304 15:53:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 21b560b0-9483-446a-af1b-d825f68c6c4e -a 10.0.0.2 -s 4420 -i 4 00:12:28.565 15:53:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 00:12:28.565 15:53:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:12:28.565 15:53:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:12:28.565 15:53:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 1 ]] 00:12:28.565 15:53:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_device_counter=1 00:12:28.565 15:53:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:12:30.520 15:53:59 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:12:30.520 15:53:59 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:12:30.520 15:53:59 
nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:12:30.520 15:53:59 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:12:30.520 15:53:59 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:12:30.520 15:53:59 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:12:30.520 15:53:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:12:30.520 15:53:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:12:30.520 15:53:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:12:30.520 15:53:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:12:30.520 15:53:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 00:12:30.520 15:53:59 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:12:30.520 15:53:59 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:12:30.520 15:53:59 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:12:30.520 15:53:59 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:30.520 15:53:59 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:12:30.520 15:53:59 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:30.520 15:53:59 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:12:30.520 15:53:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:12:30.520 15:53:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:30.520 15:53:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:30.520 15:53:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:30.779 15:53:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:12:30.779 15:53:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:30.779 15:53:59 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:12:30.779 15:53:59 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:12:30.779 15:53:59 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:12:30.779 15:53:59 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:12:30.779 15:53:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@85 -- # ns_is_visible 0x2 00:12:30.779 15:53:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:30.779 15:53:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:12:30.779 [ 0]:0x2 00:12:30.779 15:53:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:12:30.779 15:53:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:30.779 15:53:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=f36c9feea33e4190a3018b13564cabc1 00:12:30.779 15:53:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 
f36c9feea33e4190a3018b13564cabc1 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:30.779 15:53:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:12:30.779 15:53:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 00:12:30.779 15:53:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:30.779 15:53:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:12:30.779 [ 0]:0x1 00:12:30.779 15:53:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:30.779 15:53:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:31.038 15:53:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=1c3587854ee24b4b9cdeacf2e64f830a 00:12:31.038 15:53:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 1c3587854ee24b4b9cdeacf2e64f830a != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:31.038 15:53:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 00:12:31.038 15:53:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:31.038 15:53:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:12:31.038 [ 1]:0x2 00:12:31.038 15:53:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:12:31.038 15:53:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:31.038 15:53:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=f36c9feea33e4190a3018b13564cabc1 00:12:31.038 15:53:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ f36c9feea33e4190a3018b13564cabc1 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:31.038 15:53:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:12:31.038 15:53:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1 00:12:31.038 15:53:59 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:12:31.038 15:53:59 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:12:31.038 15:53:59 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:12:31.038 15:53:59 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:31.038 15:53:59 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:12:31.038 15:53:59 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:31.038 15:53:59 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:12:31.038 15:53:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:31.038 15:53:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:12:31.298 15:53:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:31.298 15:53:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:31.298 15:54:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # 
nguid=00000000000000000000000000000000 00:12:31.298 15:54:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:31.298 15:54:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:12:31.298 15:54:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:12:31.298 15:54:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:12:31.298 15:54:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:12:31.298 15:54:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2 00:12:31.298 15:54:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:31.298 15:54:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:12:31.298 [ 0]:0x2 00:12:31.298 15:54:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:12:31.298 15:54:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:31.298 15:54:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=f36c9feea33e4190a3018b13564cabc1 00:12:31.298 15:54:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ f36c9feea33e4190a3018b13564cabc1 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:31.298 15:54:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 00:12:31.298 15:54:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:31.298 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:31.298 15:54:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:12:31.557 15:54:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 00:12:31.557 15:54:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 21b560b0-9483-446a-af1b-d825f68c6c4e -a 10.0.0.2 -s 4420 -i 4 00:12:31.816 15:54:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 00:12:31.816 15:54:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:12:31.816 15:54:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:12:31.816 15:54:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 2 ]] 00:12:31.816 15:54:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_device_counter=2 00:12:31.816 15:54:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:12:33.718 15:54:02 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:12:33.718 15:54:02 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:12:33.718 15:54:02 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:12:33.718 15:54:02 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=2 00:12:33.718 15:54:02 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:12:33.718 15:54:02 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 
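
At this point the namespace-masking behavior has been exercised end to end: a namespace attached with --no-auto-visible stays hidden from the initiator (its NGUID reads back as all zeros through nvme id-ns), nvmf_ns_add_host exposes it to exactly one host NQN, and nvmf_ns_remove_host hides it again. Stripped of the xtrace noise, the target-side flow reduces to three JSON-RPC calls; a minimal sketch reusing this run's subsystem and host NQNs, with rpc.py shortened to its basename:

  # attach Malloc1 as NSID 1, visible to no host by default
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible
  # whitelist a single initiator; only host1 now sees NSID 1
  rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1
  # revoke the grant; NSID 1 disappears from host1 again
  rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1

Note that the visibility checks never reconnect between steps: ns_is_visible simply re-runs nvme list-ns /dev/nvme0 and compares the NGUID reported by nvme id-ns against the all-zero placeholder.
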
00:12:33.718 15:54:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:12:33.718 15:54:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:12:33.718 15:54:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:12:33.718 15:54:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:12:33.718 15:54:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 00:12:33.718 15:54:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:33.718 15:54:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:12:33.718 [ 0]:0x1 00:12:33.718 15:54:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:33.718 15:54:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:33.977 15:54:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=1c3587854ee24b4b9cdeacf2e64f830a 00:12:33.977 15:54:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 1c3587854ee24b4b9cdeacf2e64f830a != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:33.977 15:54:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:12:33.977 15:54:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:33.977 15:54:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:12:33.977 [ 1]:0x2 00:12:33.977 15:54:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:12:33.977 15:54:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:33.977 15:54:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=f36c9feea33e4190a3018b13564cabc1 00:12:33.977 15:54:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ f36c9feea33e4190a3018b13564cabc1 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:33.977 15:54:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:12:33.977 15:54:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:12:33.977 15:54:02 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:12:33.977 15:54:02 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:12:33.977 15:54:02 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:12:33.977 15:54:02 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:33.977 15:54:02 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:12:33.977 15:54:02 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:33.977 15:54:02 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:12:33.977 15:54:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:33.977 15:54:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:12:33.977 15:54:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:33.977 15:54:02 nvmf_tcp.nvmf_ns_masking -- 
target/ns_masking.sh@44 -- # jq -r .nguid 00:12:34.237 15:54:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:12:34.237 15:54:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:34.237 15:54:02 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:12:34.237 15:54:02 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:12:34.237 15:54:02 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:12:34.237 15:54:02 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:12:34.237 15:54:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@108 -- # ns_is_visible 0x2 00:12:34.237 15:54:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:34.237 15:54:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:12:34.237 [ 0]:0x2 00:12:34.237 15:54:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:12:34.237 15:54:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:34.237 15:54:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=f36c9feea33e4190a3018b13564cabc1 00:12:34.237 15:54:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ f36c9feea33e4190a3018b13564cabc1 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:34.237 15:54:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:12:34.237 15:54:02 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:12:34.237 15:54:02 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:12:34.237 15:54:02 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:34.237 15:54:02 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:34.237 15:54:02 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:34.237 15:54:02 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:34.237 15:54:02 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:34.237 15:54:02 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:34.237 15:54:03 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:34.237 15:54:03 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:12:34.237 15:54:03 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:12:34.237 [2024-07-15 15:54:03.161299] nvmf_rpc.c:1798:nvmf_rpc_ns_visible_paused: 
*ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2
00:12:34.237 request:
00:12:34.237 {
00:12:34.237 "nqn": "nqn.2016-06.io.spdk:cnode1",
00:12:34.237 "nsid": 2,
00:12:34.237 "host": "nqn.2016-06.io.spdk:host1",
00:12:34.237 "method": "nvmf_ns_remove_host",
00:12:34.237 "req_id": 1
00:12:34.237 }
00:12:34.237 Got JSON-RPC error response
00:12:34.237 response:
00:12:34.237 {
00:12:34.237 "code": -32602,
00:12:34.237 "message": "Invalid parameters"
00:12:34.237 }
00:12:34.496 15:54:03 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1
00:12:34.496 15:54:03 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 ))
00:12:34.496 15:54:03 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]]
00:12:34.496 15:54:03 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 ))
00:12:34.496 15:54:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1
00:12:34.496 15:54:03 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0
00:12:34.496 15:54:03 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1
00:12:34.496 15:54:03 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible
00:12:34.496 15:54:03 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:12:34.496 15:54:03 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible
00:12:34.496 15:54:03 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:12:34.496 15:54:03 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1
00:12:34.496 15:54:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1
00:12:34.496 15:54:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0
00:12:34.496 15:54:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json
00:12:34.496 15:54:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid
00:12:34.496 15:54:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000
00:12:34.496 15:54:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]]
00:12:34.496 15:54:03 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1
00:12:34.496 15:54:03 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 ))
00:12:34.496 15:54:03 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]]
00:12:34.496 15:54:03 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 ))
00:12:34.496 15:54:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@113 -- # ns_is_visible 0x2
00:12:34.496 15:54:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0
00:12:34.496 15:54:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2
00:12:34.496 [ 0]:0x2
00:12:34.496 15:54:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json
00:12:34.496 15:54:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid
00:12:34.496 15:54:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=f36c9feea33e4190a3018b13564cabc1
00:12:34.496 15:54:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 
f36c9feea33e4190a3018b13564cabc1 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:34.496 15:54:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:12:34.496 15:54:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:34.496 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:34.496 15:54:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=3682178 00:12:34.496 15:54:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:12:34.496 15:54:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 3682178 /var/tmp/host.sock 00:12:34.496 15:54:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:12:34.496 15:54:03 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@829 -- # '[' -z 3682178 ']' 00:12:34.496 15:54:03 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/host.sock 00:12:34.496 15:54:03 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:34.496 15:54:03 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:12:34.496 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:12:34.496 15:54:03 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:34.496 15:54:03 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:12:34.496 [2024-07-15 15:54:03.379858] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 
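
The script now launches a second SPDK application (spdk_tgt -m 2, i.e. pinned to core 1) to act as the host side, listening on its own RPC socket at /var/tmp/host.sock; the hostrpc helper seen below is just rpc.py pointed at that socket. Attaching one NVMe-oF controller per host NQN then looks roughly like the following, with the arguments copied from this run (-b names the resulting controller, from which the bdevs nvme0n1 and nvme1n2 are derived):

  # drive the host-side app rather than the target: note the -s socket override
  rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 \
      -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0

Once both controllers are attached, bdev_get_bdevs against the host socket is enough to confirm that each host NQN sees exactly the namespace whose UUID was whitelisted for it.
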
00:12:34.496 [2024-07-15 15:54:03.379903] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3682178 ] 00:12:34.496 EAL: No free 2048 kB hugepages reported on node 1 00:12:34.754 [2024-07-15 15:54:03.434328] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:34.754 [2024-07-15 15:54:03.507628] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:35.321 15:54:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:35.321 15:54:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@862 -- # return 0 00:12:35.321 15:54:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:35.579 15:54:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:12:35.838 15:54:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid 4fd1147c-50ea-461c-b1d1-69efe15f6a89 00:12:35.838 15:54:04 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@759 -- # tr -d - 00:12:35.838 15:54:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 4FD1147C50EA461CB1D169EFE15F6A89 -i 00:12:35.838 15:54:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid 77466772-8337-44e8-a91f-1ecd6692eca8 00:12:35.838 15:54:04 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@759 -- # tr -d - 00:12:35.838 15:54:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g 77466772833744E8A91F1ECD6692ECA8 -i 00:12:36.097 15:54:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:12:36.356 15:54:05 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:12:36.356 15:54:05 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:12:36.356 15:54:05 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:12:36.616 nvme0n1 00:12:36.616 15:54:05 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:12:36.616 15:54:05 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b 
nvme1 00:12:36.875 nvme1n2 00:12:36.875 15:54:05 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs 00:12:36.875 15:54:05 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:12:36.875 15:54:05 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:12:36.875 15:54:05 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:12:36.875 15:54:05 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:12:37.133 15:54:05 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:12:37.133 15:54:05 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:12:37.133 15:54:05 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:12:37.133 15:54:05 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:12:37.391 15:54:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ 4fd1147c-50ea-461c-b1d1-69efe15f6a89 == \4\f\d\1\1\4\7\c\-\5\0\e\a\-\4\6\1\c\-\b\1\d\1\-\6\9\e\f\e\1\5\f\6\a\8\9 ]] 00:12:37.391 15:54:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:12:37.391 15:54:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:12:37.391 15:54:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:12:37.391 15:54:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ 77466772-8337-44e8-a91f-1ecd6692eca8 == \7\7\4\6\6\7\7\2\-\8\3\3\7\-\4\4\e\8\-\a\9\1\f\-\1\e\c\d\6\6\9\2\e\c\a\8 ]] 00:12:37.391 15:54:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@138 -- # killprocess 3682178 00:12:37.391 15:54:06 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@948 -- # '[' -z 3682178 ']' 00:12:37.391 15:54:06 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@952 -- # kill -0 3682178 00:12:37.391 15:54:06 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # uname 00:12:37.391 15:54:06 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:37.391 15:54:06 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3682178 00:12:37.650 15:54:06 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:12:37.650 15:54:06 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:12:37.650 15:54:06 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3682178' 00:12:37.650 killing process with pid 3682178 00:12:37.650 15:54:06 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@967 -- # kill 3682178 00:12:37.650 15:54:06 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@972 -- # wait 3682178 00:12:37.908 15:54:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:38.167 15:54:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@141 -- # trap - SIGINT SIGTERM EXIT 00:12:38.167 15:54:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@142 -- # nvmftestfini 00:12:38.167 15:54:06 
nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:38.167 15:54:06 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@117 -- # sync 00:12:38.167 15:54:06 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:38.167 15:54:06 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@120 -- # set +e 00:12:38.167 15:54:06 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:38.167 15:54:06 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:38.167 rmmod nvme_tcp 00:12:38.167 rmmod nvme_fabrics 00:12:38.167 rmmod nvme_keyring 00:12:38.167 15:54:06 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:38.167 15:54:06 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@124 -- # set -e 00:12:38.167 15:54:06 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@125 -- # return 0 00:12:38.167 15:54:06 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@489 -- # '[' -n 3680052 ']' 00:12:38.167 15:54:06 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@490 -- # killprocess 3680052 00:12:38.167 15:54:06 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@948 -- # '[' -z 3680052 ']' 00:12:38.167 15:54:06 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@952 -- # kill -0 3680052 00:12:38.167 15:54:06 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # uname 00:12:38.167 15:54:06 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:38.167 15:54:06 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3680052 00:12:38.167 15:54:06 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:12:38.167 15:54:06 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:12:38.167 15:54:06 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3680052' 00:12:38.167 killing process with pid 3680052 00:12:38.167 15:54:06 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@967 -- # kill 3680052 00:12:38.167 15:54:06 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@972 -- # wait 3680052 00:12:38.427 15:54:07 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:38.427 15:54:07 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:38.427 15:54:07 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:38.427 15:54:07 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:38.427 15:54:07 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:38.427 15:54:07 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:38.427 15:54:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:38.427 15:54:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:40.334 15:54:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:40.334 00:12:40.334 real 0m21.983s 00:12:40.334 user 0m23.964s 00:12:40.334 sys 0m5.711s 00:12:40.334 15:54:09 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:40.334 15:54:09 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:12:40.334 ************************************ 00:12:40.334 END TEST nvmf_ns_masking 00:12:40.334 ************************************ 00:12:40.593 15:54:09 nvmf_tcp -- 
common/autotest_common.sh@1142 -- # return 0 00:12:40.593 15:54:09 nvmf_tcp -- nvmf/nvmf.sh@37 -- # [[ 1 -eq 1 ]] 00:12:40.593 15:54:09 nvmf_tcp -- nvmf/nvmf.sh@38 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:12:40.593 15:54:09 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:12:40.593 15:54:09 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:40.593 15:54:09 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:40.593 ************************************ 00:12:40.593 START TEST nvmf_nvme_cli 00:12:40.593 ************************************ 00:12:40.593 15:54:09 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:12:40.593 * Looking for test storage... 00:12:40.593 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:40.594 15:54:09 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:40.594 15:54:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 00:12:40.594 15:54:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:40.594 15:54:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:40.594 15:54:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:40.594 15:54:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:40.594 15:54:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:40.594 15:54:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:40.594 15:54:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:40.594 15:54:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:40.594 15:54:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:40.594 15:54:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:40.594 15:54:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:12:40.594 15:54:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:12:40.594 15:54:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:40.594 15:54:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:40.594 15:54:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:40.594 15:54:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:40.594 15:54:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:40.594 15:54:09 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:40.594 15:54:09 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:40.594 15:54:09 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:40.594 15:54:09 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:40.594 15:54:09 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:40.594 15:54:09 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:40.594 15:54:09 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:12:40.594 15:54:09 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:40.594 15:54:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@47 -- # : 0 00:12:40.594 15:54:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:40.594 15:54:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:40.594 15:54:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:40.594 15:54:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:40.594 15:54:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:40.594 15:54:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:40.594 15:54:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:40.594 15:54:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:40.594 15:54:09 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:40.594 15:54:09 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:40.594 15:54:09 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:12:40.594 15:54:09 nvmf_tcp.nvmf_nvme_cli -- 
target/nvme_cli.sh@16 -- # nvmftestinit 00:12:40.594 15:54:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:40.594 15:54:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:40.594 15:54:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:40.594 15:54:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:40.594 15:54:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:40.594 15:54:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:40.594 15:54:09 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:40.594 15:54:09 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:40.594 15:54:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:40.594 15:54:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:40.594 15:54:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@285 -- # xtrace_disable 00:12:40.594 15:54:09 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:45.902 15:54:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:45.902 15:54:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@291 -- # pci_devs=() 00:12:45.902 15:54:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:45.902 15:54:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:45.902 15:54:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:45.902 15:54:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:45.902 15:54:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:45.902 15:54:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@295 -- # net_devs=() 00:12:45.902 15:54:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:45.902 15:54:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@296 -- # e810=() 00:12:45.902 15:54:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@296 -- # local -ga e810 00:12:45.902 15:54:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@297 -- # x722=() 00:12:45.902 15:54:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@297 -- # local -ga x722 00:12:45.902 15:54:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@298 -- # mlx=() 00:12:45.902 15:54:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@298 -- # local -ga mlx 00:12:45.902 15:54:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:45.902 15:54:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:45.902 15:54:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:45.902 15:54:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:45.902 15:54:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:45.902 15:54:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:45.902 15:54:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:45.902 15:54:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:45.902 15:54:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@315 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:45.902 15:54:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:45.902 15:54:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:45.902 15:54:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:45.902 15:54:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:45.902 15:54:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:12:45.902 15:54:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:45.902 15:54:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:45.902 15:54:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:45.902 15:54:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:45.902 15:54:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:12:45.902 Found 0000:86:00.0 (0x8086 - 0x159b) 00:12:45.902 15:54:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:45.902 15:54:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:45.902 15:54:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:45.902 15:54:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:45.902 15:54:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:45.902 15:54:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:45.902 15:54:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:12:45.902 Found 0000:86:00.1 (0x8086 - 0x159b) 00:12:45.902 15:54:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:45.902 15:54:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:45.902 15:54:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:45.902 15:54:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:45.902 15:54:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:45.902 15:54:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:45.902 15:54:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:45.902 15:54:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:45.902 15:54:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:45.902 15:54:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:45.902 15:54:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:45.902 15:54:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:45.902 15:54:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:45.902 15:54:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:45.902 15:54:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:45.902 15:54:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:12:45.902 Found net devices under 0000:86:00.0: cvl_0_0 00:12:45.902 15:54:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:12:45.902 15:54:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:45.902 15:54:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:45.902 15:54:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:45.902 15:54:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:45.902 15:54:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:45.902 15:54:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:45.902 15:54:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:45.902 15:54:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:12:45.902 Found net devices under 0000:86:00.1: cvl_0_1 00:12:45.902 15:54:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:45.902 15:54:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:45.902 15:54:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # is_hw=yes 00:12:45.902 15:54:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:45.902 15:54:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:12:45.902 15:54:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:12:45.902 15:54:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:45.902 15:54:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:45.902 15:54:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:45.902 15:54:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:45.902 15:54:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:45.902 15:54:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:45.902 15:54:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:45.902 15:54:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:45.902 15:54:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:45.902 15:54:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:45.902 15:54:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:45.902 15:54:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:45.902 15:54:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:45.902 15:54:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:45.902 15:54:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:45.902 15:54:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:45.902 15:54:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:45.902 15:54:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:45.902 15:54:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:45.903 15:54:14 
nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:45.903 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:45.903 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.188 ms 00:12:45.903 00:12:45.903 --- 10.0.0.2 ping statistics --- 00:12:45.903 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:45.903 rtt min/avg/max/mdev = 0.188/0.188/0.188/0.000 ms 00:12:45.903 15:54:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:45.903 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:45.903 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.193 ms 00:12:45.903 00:12:45.903 --- 10.0.0.1 ping statistics --- 00:12:45.903 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:45.903 rtt min/avg/max/mdev = 0.193/0.193/0.193/0.000 ms 00:12:45.903 15:54:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:45.903 15:54:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@422 -- # return 0 00:12:45.903 15:54:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:45.903 15:54:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:45.903 15:54:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:45.903 15:54:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:45.903 15:54:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:45.903 15:54:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:45.903 15:54:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:45.903 15:54:14 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:12:45.903 15:54:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:45.903 15:54:14 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:45.903 15:54:14 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:45.903 15:54:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@481 -- # nvmfpid=3686578 00:12:45.903 15:54:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@482 -- # waitforlisten 3686578 00:12:45.903 15:54:14 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@829 -- # '[' -z 3686578 ']' 00:12:45.903 15:54:14 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:45.903 15:54:14 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:45.903 15:54:14 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:45.903 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:45.903 15:54:14 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:45.903 15:54:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:45.903 15:54:14 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:45.903 [2024-07-15 15:54:14.397440] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 
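For readers reproducing this outside CI: the nvmf_tcp_init plumbing traced above gives the target side its own network namespace, so initiator and target cross a real TCP path even on a single host. A minimal standalone sketch, using the interface names (cvl_0_0/cvl_0_1) and 10.0.0.0/24 addressing from this run — both are host-specific and will differ elsewhere:
  ip -4 addr flush cvl_0_0; ip -4 addr flush cvl_0_1   # start from clean ports
  ip netns add cvl_0_0_ns_spdk                  # namespace for the target side
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk     # move the target port into it
  ip addr add 10.0.0.1/24 dev cvl_0_1                           # initiator side
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target side
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT  # admit NVMe/TCP
  ping -c 1 10.0.0.2                             # sanity-check both directions
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
The target application is then launched inside that namespace ("ip netns exec cvl_0_0_ns_spdk .../nvmf_tgt ..."), which is what the NVMF_TARGET_NS_CMD trace around this point shows.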
00:12:45.903 [2024-07-15 15:54:14.397483] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:45.903 EAL: No free 2048 kB hugepages reported on node 1 00:12:45.903 [2024-07-15 15:54:14.455071] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:45.903 [2024-07-15 15:54:14.536288] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:45.903 [2024-07-15 15:54:14.536324] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:45.903 [2024-07-15 15:54:14.536331] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:45.903 [2024-07-15 15:54:14.536336] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:45.903 [2024-07-15 15:54:14.536341] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:45.903 [2024-07-15 15:54:14.536385] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:45.903 [2024-07-15 15:54:14.536481] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:12:45.903 [2024-07-15 15:54:14.536494] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:12:45.903 [2024-07-15 15:54:14.536495] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:46.470 15:54:15 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:46.470 15:54:15 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@862 -- # return 0 00:12:46.470 15:54:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:46.470 15:54:15 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:46.470 15:54:15 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:46.470 15:54:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:46.470 15:54:15 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:46.470 15:54:15 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:46.470 15:54:15 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:46.470 [2024-07-15 15:54:15.250185] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:46.470 15:54:15 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:46.470 15:54:15 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:12:46.470 15:54:15 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:46.470 15:54:15 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:46.470 Malloc0 00:12:46.470 15:54:15 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:46.470 15:54:15 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:12:46.470 15:54:15 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:46.470 15:54:15 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:46.470 Malloc1 00:12:46.470 15:54:15 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:46.470 15:54:15 
nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:12:46.470 15:54:15 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:46.470 15:54:15 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:46.470 15:54:15 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:46.470 15:54:15 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:46.470 15:54:15 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:46.470 15:54:15 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:46.470 15:54:15 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:46.470 15:54:15 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:46.470 15:54:15 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:46.470 15:54:15 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:46.470 15:54:15 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:46.470 15:54:15 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:46.470 15:54:15 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:46.470 15:54:15 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:46.470 [2024-07-15 15:54:15.331858] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:46.470 15:54:15 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:46.470 15:54:15 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:12:46.470 15:54:15 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:46.470 15:54:15 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:46.470 15:54:15 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:46.470 15:54:15 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 4420 00:12:46.729 00:12:46.729 Discovery Log Number of Records 2, Generation counter 2 00:12:46.729 =====Discovery Log Entry 0====== 00:12:46.729 trtype: tcp 00:12:46.729 adrfam: ipv4 00:12:46.729 subtype: current discovery subsystem 00:12:46.729 treq: not required 00:12:46.729 portid: 0 00:12:46.729 trsvcid: 4420 00:12:46.729 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:12:46.729 traddr: 10.0.0.2 00:12:46.729 eflags: explicit discovery connections, duplicate discovery information 00:12:46.729 sectype: none 00:12:46.729 =====Discovery Log Entry 1====== 00:12:46.729 trtype: tcp 00:12:46.729 adrfam: ipv4 00:12:46.729 subtype: nvme subsystem 00:12:46.729 treq: not required 00:12:46.729 portid: 0 00:12:46.729 trsvcid: 4420 00:12:46.729 subnqn: nqn.2016-06.io.spdk:cnode1 00:12:46.729 traddr: 10.0.0.2 00:12:46.729 eflags: none 00:12:46.729 sectype: none 00:12:46.729 15:54:15 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:12:46.729 15:54:15 nvmf_tcp.nvmf_nvme_cli -- 
target/nvme_cli.sh@31 -- # get_nvme_devs 00:12:46.729 15:54:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:12:46.729 15:54:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:12:46.729 15:54:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:12:46.729 15:54:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:12:46.729 15:54:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:12:46.729 15:54:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:12:46.729 15:54:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:12:46.729 15:54:15 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:12:46.729 15:54:15 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:48.108 15:54:16 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:12:48.108 15:54:16 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1198 -- # local i=0 00:12:48.108 15:54:16 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:12:48.108 15:54:16 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1200 -- # [[ -n 2 ]] 00:12:48.108 15:54:16 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1201 -- # nvme_device_counter=2 00:12:48.108 15:54:16 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1205 -- # sleep 2 00:12:50.013 15:54:18 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:12:50.013 15:54:18 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:12:50.013 15:54:18 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:12:50.013 15:54:18 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # nvme_devices=2 00:12:50.013 15:54:18 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:12:50.013 15:54:18 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1208 -- # return 0 00:12:50.013 15:54:18 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:12:50.013 15:54:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:12:50.013 15:54:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:12:50.013 15:54:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:12:50.013 15:54:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:12:50.013 15:54:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:12:50.013 15:54:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:12:50.013 15:54:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:12:50.013 15:54:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:12:50.013 15:54:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:12:50.013 15:54:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:12:50.013 15:54:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:12:50.013 15:54:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:12:50.013 15:54:18 
nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:12:50.013 15:54:18 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n2 00:12:50.013 /dev/nvme0n1 ]] 00:12:50.013 15:54:18 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:12:50.013 15:54:18 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:12:50.013 15:54:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:12:50.013 15:54:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:12:50.013 15:54:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:12:50.013 15:54:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:12:50.013 15:54:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:12:50.013 15:54:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:12:50.013 15:54:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:12:50.013 15:54:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:12:50.013 15:54:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:12:50.013 15:54:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:12:50.013 15:54:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:12:50.013 15:54:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:12:50.013 15:54:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:12:50.013 15:54:18 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:12:50.013 15:54:18 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:50.013 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:50.013 15:54:18 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:50.013 15:54:18 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1219 -- # local i=0 00:12:50.013 15:54:18 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:12:50.013 15:54:18 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:50.013 15:54:18 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:12:50.013 15:54:18 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:50.013 15:54:18 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # return 0 00:12:50.013 15:54:18 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:12:50.013 15:54:18 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:50.013 15:54:18 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:50.013 15:54:18 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:50.013 15:54:18 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:50.013 15:54:18 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:12:50.013 15:54:18 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:12:50.013 15:54:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:50.013 15:54:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@117 -- # sync 00:12:50.013 15:54:18 nvmf_tcp.nvmf_nvme_cli -- 
nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:50.013 15:54:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@120 -- # set +e 00:12:50.013 15:54:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:50.013 15:54:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:50.013 rmmod nvme_tcp 00:12:50.013 rmmod nvme_fabrics 00:12:50.013 rmmod nvme_keyring 00:12:50.013 15:54:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:50.013 15:54:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set -e 00:12:50.013 15:54:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@125 -- # return 0 00:12:50.013 15:54:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@489 -- # '[' -n 3686578 ']' 00:12:50.013 15:54:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@490 -- # killprocess 3686578 00:12:50.013 15:54:18 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@948 -- # '[' -z 3686578 ']' 00:12:50.013 15:54:18 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@952 -- # kill -0 3686578 00:12:50.013 15:54:18 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@953 -- # uname 00:12:50.013 15:54:18 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:50.013 15:54:18 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3686578 00:12:50.273 15:54:18 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:12:50.273 15:54:18 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:12:50.273 15:54:18 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3686578' 00:12:50.273 killing process with pid 3686578 00:12:50.273 15:54:18 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@967 -- # kill 3686578 00:12:50.273 15:54:18 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@972 -- # wait 3686578 00:12:50.273 15:54:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:50.273 15:54:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:50.273 15:54:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:50.273 15:54:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:50.273 15:54:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:50.273 15:54:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:50.273 15:54:19 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:50.273 15:54:19 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:52.873 15:54:21 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:52.873 00:12:52.873 real 0m11.934s 00:12:52.873 user 0m19.840s 00:12:52.873 sys 0m4.327s 00:12:52.873 15:54:21 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:52.873 15:54:21 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:52.873 ************************************ 00:12:52.873 END TEST nvmf_nvme_cli 00:12:52.873 ************************************ 00:12:52.873 15:54:21 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:12:52.873 15:54:21 nvmf_tcp -- nvmf/nvmf.sh@40 -- # [[ 1 -eq 1 ]] 00:12:52.873 15:54:21 nvmf_tcp -- nvmf/nvmf.sh@41 -- # run_test nvmf_vfio_user 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:12:52.873 15:54:21 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:12:52.873 15:54:21 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:52.873 15:54:21 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:52.873 ************************************ 00:12:52.873 START TEST nvmf_vfio_user 00:12:52.873 ************************************ 00:12:52.873 15:54:21 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:12:52.873 * Looking for test storage... 00:12:52.873 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:52.873 15:54:21 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:52.873 15:54:21 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@7 -- # uname -s 00:12:52.873 15:54:21 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:52.873 15:54:21 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:52.873 15:54:21 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:52.874 15:54:21 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:52.874 15:54:21 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:52.874 15:54:21 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:52.874 15:54:21 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:52.874 15:54:21 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:52.874 15:54:21 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:52.874 15:54:21 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:52.874 15:54:21 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:12:52.874 15:54:21 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:12:52.874 15:54:21 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:52.874 15:54:21 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:52.874 15:54:21 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:52.874 15:54:21 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:52.874 15:54:21 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:52.874 15:54:21 nvmf_tcp.nvmf_vfio_user -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:52.874 15:54:21 nvmf_tcp.nvmf_vfio_user -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:52.874 15:54:21 nvmf_tcp.nvmf_vfio_user -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:52.874 15:54:21 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:52.874 15:54:21 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:52.874 15:54:21 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:52.874 15:54:21 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@5 -- # export PATH 00:12:52.874 15:54:21 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:52.874 15:54:21 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@47 -- # : 0 00:12:52.874 15:54:21 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:52.874 15:54:21 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:52.874 15:54:21 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:52.874 15:54:21 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:52.874 15:54:21 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:52.874 15:54:21 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:52.874 15:54:21 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:52.874 15:54:21 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:52.874 15:54:21 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:12:52.874 15:54:21 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:12:52.874 15:54:21 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:12:52.874 
15:54:21 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:52.874 15:54:21 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:12:52.874 15:54:21 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:12:52.874 15:54:21 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:12:52.874 15:54:21 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 00:12:52.874 15:54:21 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:12:52.874 15:54:21 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local transport_args= 00:12:52.874 15:54:21 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=3687866 00:12:52.874 15:54:21 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 3687866' 00:12:52.874 Process pid: 3687866 00:12:52.874 15:54:21 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:12:52.874 15:54:21 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 3687866 00:12:52.874 15:54:21 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@829 -- # '[' -z 3687866 ']' 00:12:52.874 15:54:21 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:52.874 15:54:21 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:52.874 15:54:21 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:52.874 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:52.874 15:54:21 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:52.874 15:54:21 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:12:52.874 15:54:21 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:12:52.874 [2024-07-15 15:54:21.487559] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 00:12:52.874 [2024-07-15 15:54:21.487606] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:52.874 EAL: No free 2048 kB hugepages reported on node 1 00:12:52.874 [2024-07-15 15:54:21.541995] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:52.874 [2024-07-15 15:54:21.622417] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:52.874 [2024-07-15 15:54:21.622456] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:52.874 [2024-07-15 15:54:21.622463] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:52.874 [2024-07-15 15:54:21.622470] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:52.874 [2024-07-15 15:54:21.622474] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
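The RPC sequence traced below (target/nvmf_vfio_user.sh@64 through @74) provisions one vfio-user controller per device. Condensed into a standalone sketch — $rpc is shorthand introduced here for the scripts/rpc.py path used in this run:
  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  $rpc nvmf_create_transport -t VFIOUSER
  mkdir -p /var/run/vfio-user/domain/vfio-user1/1
  $rpc bdev_malloc_create 64 512 -b Malloc1
  $rpc nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1
  $rpc nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1
  # For VFIOUSER the listener address is a directory (the vfio-user socket
  # lives under it), not an IP:port pair; -s 0 is the trsvcid as in this run.
  $rpc nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 \
      -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0
As the trace shows, the loop then repeats with Malloc2/cnode2 under /var/run/vfio-user/domain/vfio-user2/2 for the second device.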
00:12:52.874 [2024-07-15 15:54:21.622509] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:52.874 [2024-07-15 15:54:21.622528] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:12:52.874 [2024-07-15 15:54:21.622616] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:12:52.874 [2024-07-15 15:54:21.622617] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:53.441 15:54:22 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:53.441 15:54:22 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@862 -- # return 0 00:12:53.441 15:54:22 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:12:54.376 15:54:23 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:12:54.634 15:54:23 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:12:54.634 15:54:23 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:12:54.634 15:54:23 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:12:54.634 15:54:23 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:12:54.634 15:54:23 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:12:54.892 Malloc1 00:12:54.892 15:54:23 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:12:55.150 15:54:23 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:12:55.150 15:54:24 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:12:55.407 15:54:24 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:12:55.407 15:54:24 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:12:55.407 15:54:24 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:12:55.665 Malloc2 00:12:55.665 15:54:24 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:12:55.922 15:54:24 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:12:55.922 15:54:24 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:12:56.182 15:54:25 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:12:56.182 15:54:25 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:12:56.182 15:54:25 
nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:12:56.182 15:54:25 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:12:56.182 15:54:25 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:12:56.182 15:54:25 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:12:56.182 [2024-07-15 15:54:25.036659] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 00:12:56.182 [2024-07-15 15:54:25.036691] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3688557 ] 00:12:56.182 EAL: No free 2048 kB hugepages reported on node 1 00:12:56.182 [2024-07-15 15:54:25.065763] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:12:56.182 [2024-07-15 15:54:25.073555] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:12:56.182 [2024-07-15 15:54:25.073574] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f9087259000 00:12:56.182 [2024-07-15 15:54:25.074552] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:56.182 [2024-07-15 15:54:25.075549] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:56.182 [2024-07-15 15:54:25.076556] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:56.182 [2024-07-15 15:54:25.077561] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:12:56.182 [2024-07-15 15:54:25.078561] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:12:56.182 [2024-07-15 15:54:25.079567] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:56.182 [2024-07-15 15:54:25.080576] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:12:56.182 [2024-07-15 15:54:25.081579] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:56.182 [2024-07-15 15:54:25.082589] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:12:56.182 [2024-07-15 15:54:25.082598] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f908724e000 00:12:56.182 [2024-07-15 15:54:25.083542] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:12:56.182 [2024-07-15 15:54:25.094155] vfio_user_pci.c: 
386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:12:56.182 [2024-07-15 15:54:25.094176] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to connect adminq (no timeout) 00:12:56.182 [2024-07-15 15:54:25.098688] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:12:56.182 [2024-07-15 15:54:25.098723] nvme_pcie_common.c: 132:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:12:56.182 [2024-07-15 15:54:25.098787] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for connect adminq (no timeout) 00:12:56.182 [2024-07-15 15:54:25.098803] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs (no timeout) 00:12:56.182 [2024-07-15 15:54:25.098809] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs wait for vs (no timeout) 00:12:56.182 [2024-07-15 15:54:25.099688] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:12:56.182 [2024-07-15 15:54:25.099699] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap (no timeout) 00:12:56.182 [2024-07-15 15:54:25.099705] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap wait for cap (no timeout) 00:12:56.182 [2024-07-15 15:54:25.100686] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:12:56.182 [2024-07-15 15:54:25.100695] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en (no timeout) 00:12:56.182 [2024-07-15 15:54:25.100702] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en wait for cc (timeout 15000 ms) 00:12:56.182 [2024-07-15 15:54:25.101695] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:12:56.182 [2024-07-15 15:54:25.101703] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:12:56.182 [2024-07-15 15:54:25.102703] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 00:12:56.182 [2024-07-15 15:54:25.102711] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 0 && CSTS.RDY = 0 00:12:56.182 [2024-07-15 15:54:25.102715] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to controller is disabled (timeout 15000 ms) 00:12:56.182 [2024-07-15 15:54:25.102721] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:12:56.182 [2024-07-15 15:54:25.102826] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Setting CC.EN = 1 00:12:56.182 [2024-07-15 15:54:25.102830] 
nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:12:56.182 [2024-07-15 15:54:25.102834] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:12:56.182 [2024-07-15 15:54:25.103714] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:12:56.182 [2024-07-15 15:54:25.104718] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:12:56.182 [2024-07-15 15:54:25.105722] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:12:56.182 [2024-07-15 15:54:25.106719] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:12:56.182 [2024-07-15 15:54:25.106782] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:12:56.182 [2024-07-15 15:54:25.107738] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:12:56.182 [2024-07-15 15:54:25.107745] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:12:56.182 [2024-07-15 15:54:25.107750] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to reset admin queue (timeout 30000 ms) 00:12:56.182 [2024-07-15 15:54:25.107767] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller (no timeout) 00:12:56.182 [2024-07-15 15:54:25.107777] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify controller (timeout 30000 ms) 00:12:56.182 [2024-07-15 15:54:25.107792] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:12:56.182 [2024-07-15 15:54:25.107796] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:12:56.182 [2024-07-15 15:54:25.107808] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:12:56.182 [2024-07-15 15:54:25.107851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:12:56.182 [2024-07-15 15:54:25.107861] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_xfer_size 131072 00:12:56.182 [2024-07-15 15:54:25.107869] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] MDTS max_xfer_size 131072 00:12:56.182 [2024-07-15 15:54:25.107873] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CNTLID 0x0001 00:12:56.182 [2024-07-15 15:54:25.107877] nvme_ctrlr.c:2071:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:12:56.182 [2024-07-15 15:54:25.107881] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user1/1] transport max_sges 1 00:12:56.182 [2024-07-15 15:54:25.107884] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] fuses compare and write: 1 00:12:56.182 [2024-07-15 15:54:25.107888] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to configure AER (timeout 30000 ms) 00:12:56.182 [2024-07-15 15:54:25.107895] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for configure aer (timeout 30000 ms) 00:12:56.182 [2024-07-15 15:54:25.107904] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:12:56.182 [2024-07-15 15:54:25.107918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:12:56.182 [2024-07-15 15:54:25.107931] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:56.182 [2024-07-15 15:54:25.107938] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:56.182 [2024-07-15 15:54:25.107946] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:56.182 [2024-07-15 15:54:25.107953] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:56.182 [2024-07-15 15:54:25.107957] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set keep alive timeout (timeout 30000 ms) 00:12:56.182 [2024-07-15 15:54:25.107965] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:12:56.182 [2024-07-15 15:54:25.107973] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:12:56.182 [2024-07-15 15:54:25.107984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:12:56.183 [2024-07-15 15:54:25.107990] nvme_ctrlr.c:3010:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Controller adjusted keep alive timeout to 0 ms 00:12:56.183 [2024-07-15 15:54:25.107994] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller iocs specific (timeout 30000 ms) 00:12:56.183 [2024-07-15 15:54:25.107999] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set number of queues (timeout 30000 ms) 00:12:56.183 [2024-07-15 15:54:25.108006] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set number of queues (timeout 30000 ms) 00:12:56.183 [2024-07-15 15:54:25.108014] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:12:56.183 [2024-07-15 15:54:25.108027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:12:56.183 [2024-07-15 15:54:25.108075] 
nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify active ns (timeout 30000 ms) 00:12:56.183 [2024-07-15 15:54:25.108083] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify active ns (timeout 30000 ms) 00:12:56.183 [2024-07-15 15:54:25.108090] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:12:56.183 [2024-07-15 15:54:25.108093] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:12:56.183 [2024-07-15 15:54:25.108099] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:12:56.183 [2024-07-15 15:54:25.108113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:12:56.183 [2024-07-15 15:54:25.108122] nvme_ctrlr.c:4693:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Namespace 1 was added 00:12:56.183 [2024-07-15 15:54:25.108129] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns (timeout 30000 ms) 00:12:56.183 [2024-07-15 15:54:25.108136] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify ns (timeout 30000 ms) 00:12:56.183 [2024-07-15 15:54:25.108142] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:12:56.183 [2024-07-15 15:54:25.108146] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:12:56.183 [2024-07-15 15:54:25.108152] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:12:56.183 [2024-07-15 15:54:25.108171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:12:56.183 [2024-07-15 15:54:25.108182] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:12:56.183 [2024-07-15 15:54:25.108189] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:12:56.183 [2024-07-15 15:54:25.108195] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:12:56.183 [2024-07-15 15:54:25.108199] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:12:56.183 [2024-07-15 15:54:25.108206] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:12:56.183 [2024-07-15 15:54:25.108217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:12:56.183 [2024-07-15 15:54:25.108230] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns iocs specific (timeout 30000 ms) 00:12:56.183 [2024-07-15 15:54:25.108236] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported log pages (timeout 30000 ms) 
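The _nvme_ctrlr_set_state entries above step through the controller initialization state machine in a fixed order: enable, wait for ready, identify controller, configure AER, keep-alive, number of queues, then the per-namespace identify states. A minimal Python sketch of that ordering, with hypothetical state names that mirror the debug strings (a condensed subset, not the SPDK C API):

# Hypothetical sketch of the init sequence logged above; the strings mirror
# the _nvme_ctrlr_set_state debug output, not any SPDK function names.
INIT_STATES = [
    "enable controller by writing CC.EN = 1",
    "wait for CSTS.RDY = 1",
    "reset admin queue",
    "identify controller",
    "configure AER",
    "set keep alive timeout",
    "set number of queues",
    "identify active ns",
    "identify ns",
    "identify namespace id descriptors",
    "set supported log pages",
]

def run_init(execute_state):
    # Each state must complete before the next begins, as in the log.
    for state in INIT_STATES:
        if not execute_state(state):
            raise RuntimeError(f"init failed in state: {state!r}")

if __name__ == "__main__":
    # Stub executor that just records each transition, like the *DEBUG* lines.
    run_init(lambda s: print(f"setting state to {s}") or True)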
00:12:56.183 [2024-07-15 15:54:25.108328] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported features (timeout 30000 ms) 00:12:56.183 [2024-07-15 15:54:25.108334] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host behavior support feature (timeout 30000 ms) 00:12:56.183 [2024-07-15 15:54:25.108339] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set doorbell buffer config (timeout 30000 ms) 00:12:56.183 [2024-07-15 15:54:25.108344] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host ID (timeout 30000 ms) 00:12:56.183 [2024-07-15 15:54:25.108348] nvme_ctrlr.c:3110:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] NVMe-oF transport - not sending Set Features - Host ID 00:12:56.183 [2024-07-15 15:54:25.108352] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to transport ready (timeout 30000 ms) 00:12:56.183 [2024-07-15 15:54:25.108356] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to ready (no timeout) 00:12:56.183 [2024-07-15 15:54:25.108373] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:12:56.183 [2024-07-15 15:54:25.108383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:12:56.183 [2024-07-15 15:54:25.108393] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:12:56.183 [2024-07-15 15:54:25.108403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:12:56.183 [2024-07-15 15:54:25.108412] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:12:56.183 [2024-07-15 15:54:25.108420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:12:56.183 [2024-07-15 15:54:25.108429] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:12:56.183 [2024-07-15 15:54:25.108437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:12:56.183 [2024-07-15 15:54:25.108449] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:12:56.183 [2024-07-15 15:54:25.108453] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:12:56.183 [2024-07-15 15:54:25.108456] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:12:56.183 [2024-07-15 15:54:25.108459] nvme_pcie_common.c:1254:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:12:56.183 [2024-07-15 15:54:25.108465] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:12:56.183 [2024-07-15 15:54:25.108471] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:12:56.183 
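Each admin command above that carries a payload is preceded by nvme_pcie_prp_list_append entries showing how the buffer becomes PRP entries: a buffer within one 4 KiB page needs only PRP1 (PRP2 prints as 0x0), while the 8 KiB GET LOG PAGE buffer gets its second page in prp[0]/prp2. A small sketch of that rule for the 4 KiB page size this controller reports (prp_entries is an illustrative helper, not an SPDK function, and it assumes identity-mapped, page-aligned buffers as in this log):

PAGE_SIZE = 4096  # controller memory page size from the identify dump

def prp_entries(virt_addr: int, length: int):
    # Returns (prp1, prp2) for a buffer spanning at most two pages;
    # longer buffers would need a PRP list, which this sketch omits.
    prp1 = virt_addr
    second_page = (virt_addr & ~(PAGE_SIZE - 1)) + PAGE_SIZE
    end = virt_addr + length
    if end <= second_page:
        return prp1, 0            # single page: PRP2 is unused
    if end <= second_page + PAGE_SIZE:
        return prp1, second_page  # two pages: PRP2 points at the second page
    raise NotImplementedError("buffer needs a PRP list")

# Matches the log entries above: 4 KiB at 0x2000002fb000 -> PRP2 0x0,
# 8 KiB at 0x2000002f6000 -> PRP2 0x2000002f7000.
assert prp_entries(0x2000002fb000, 4096) == (0x2000002fb000, 0)
assert prp_entries(0x2000002f6000, 8192) == (0x2000002f6000, 0x2000002f7000)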
[2024-07-15 15:54:25.108475] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:12:56.183 [2024-07-15 15:54:25.108480] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:12:56.183 [2024-07-15 15:54:25.108486] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:12:56.183 [2024-07-15 15:54:25.108490] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:12:56.183 [2024-07-15 15:54:25.108495] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:12:56.183 [2024-07-15 15:54:25.108501] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:12:56.183 [2024-07-15 15:54:25.108505] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:12:56.183 [2024-07-15 15:54:25.108510] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:12:56.183 [2024-07-15 15:54:25.108518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:12:56.183 [2024-07-15 15:54:25.108528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:12:56.183 [2024-07-15 15:54:25.108538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:12:56.183 [2024-07-15 15:54:25.108544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:12:56.183 ===================================================== 00:12:56.183 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:12:56.183 ===================================================== 00:12:56.183 Controller Capabilities/Features 00:12:56.183 ================================ 00:12:56.183 Vendor ID: 4e58 00:12:56.183 Subsystem Vendor ID: 4e58 00:12:56.183 Serial Number: SPDK1 00:12:56.183 Model Number: SPDK bdev Controller 00:12:56.183 Firmware Version: 24.09 00:12:56.183 Recommended Arb Burst: 6 00:12:56.183 IEEE OUI Identifier: 8d 6b 50 00:12:56.183 Multi-path I/O 00:12:56.183 May have multiple subsystem ports: Yes 00:12:56.183 May have multiple controllers: Yes 00:12:56.183 Associated with SR-IOV VF: No 00:12:56.183 Max Data Transfer Size: 131072 00:12:56.183 Max Number of Namespaces: 32 00:12:56.183 Max Number of I/O Queues: 127 00:12:56.183 NVMe Specification Version (VS): 1.3 00:12:56.183 NVMe Specification Version (Identify): 1.3 00:12:56.183 Maximum Queue Entries: 256 00:12:56.183 Contiguous Queues Required: Yes 00:12:56.183 Arbitration Mechanisms Supported 00:12:56.183 Weighted Round Robin: Not Supported 00:12:56.183 Vendor Specific: Not Supported 00:12:56.183 Reset Timeout: 15000 ms 00:12:56.183 Doorbell Stride: 4 bytes 00:12:56.183 NVM Subsystem Reset: Not Supported 00:12:56.183 Command Sets Supported 00:12:56.183 NVM Command Set: Supported 00:12:56.183 Boot Partition: Not Supported 00:12:56.183 Memory Page Size Minimum: 4096 bytes 00:12:56.183 Memory Page Size Maximum: 4096 bytes 00:12:56.183 Persistent Memory Region: Not Supported 
00:12:56.183 Optional Asynchronous Events Supported 00:12:56.183 Namespace Attribute Notices: Supported 00:12:56.183 Firmware Activation Notices: Not Supported 00:12:56.183 ANA Change Notices: Not Supported 00:12:56.183 PLE Aggregate Log Change Notices: Not Supported 00:12:56.183 LBA Status Info Alert Notices: Not Supported 00:12:56.183 EGE Aggregate Log Change Notices: Not Supported 00:12:56.183 Normal NVM Subsystem Shutdown event: Not Supported 00:12:56.183 Zone Descriptor Change Notices: Not Supported 00:12:56.183 Discovery Log Change Notices: Not Supported 00:12:56.183 Controller Attributes 00:12:56.183 128-bit Host Identifier: Supported 00:12:56.183 Non-Operational Permissive Mode: Not Supported 00:12:56.183 NVM Sets: Not Supported 00:12:56.183 Read Recovery Levels: Not Supported 00:12:56.183 Endurance Groups: Not Supported 00:12:56.183 Predictable Latency Mode: Not Supported 00:12:56.183 Traffic Based Keep Alive: Not Supported 00:12:56.183 Namespace Granularity: Not Supported 00:12:56.183 SQ Associations: Not Supported 00:12:56.183 UUID List: Not Supported 00:12:56.183 Multi-Domain Subsystem: Not Supported 00:12:56.183 Fixed Capacity Management: Not Supported 00:12:56.183 Variable Capacity Management: Not Supported 00:12:56.183 Delete Endurance Group: Not Supported 00:12:56.183 Delete NVM Set: Not Supported 00:12:56.183 Extended LBA Formats Supported: Not Supported 00:12:56.183 Flexible Data Placement Supported: Not Supported 00:12:56.183 00:12:56.183 Controller Memory Buffer Support 00:12:56.183 ================================ 00:12:56.183 Supported: No 00:12:56.183 00:12:56.183 Persistent Memory Region Support 00:12:56.183 ================================ 00:12:56.183 Supported: No 00:12:56.183 00:12:56.183 Admin Command Set Attributes 00:12:56.183 ============================ 00:12:56.183 Security Send/Receive: Not Supported 00:12:56.183 Format NVM: Not Supported 00:12:56.183 Firmware Activate/Download: Not Supported 00:12:56.183 Namespace Management: Not Supported 00:12:56.183 Device Self-Test: Not Supported 00:12:56.183 Directives: Not Supported 00:12:56.184 NVMe-MI: Not Supported 00:12:56.184 Virtualization Management: Not Supported 00:12:56.184 Doorbell Buffer Config: Not Supported 00:12:56.184 Get LBA Status Capability: Not Supported 00:12:56.184 Command & Feature Lockdown Capability: Not Supported 00:12:56.184 Abort Command Limit: 4 00:12:56.184 Async Event Request Limit: 4 00:12:56.184 Number of Firmware Slots: N/A 00:12:56.184 Firmware Slot 1 Read-Only: N/A 00:12:56.184 Firmware Activation Without Reset: N/A 00:12:56.184 Multiple Update Detection Support: N/A 00:12:56.184 Firmware Update Granularity: No Information Provided 00:12:56.184 Per-Namespace SMART Log: No 00:12:56.184 Asymmetric Namespace Access Log Page: Not Supported 00:12:56.184 Subsystem NQN: nqn.2019-07.io.spdk:cnode1 00:12:56.184 Command Effects Log Page: Supported 00:12:56.184 Get Log Page Extended Data: Supported 00:12:56.184 Telemetry Log Pages: Not Supported 00:12:56.184 Persistent Event Log Pages: Not Supported 00:12:56.184 Supported Log Pages Log Page: May Support 00:12:56.184 Commands Supported & Effects Log Page: Not Supported 00:12:56.184 Feature Identifiers & Effects Log Page: May Support 00:12:56.184 NVMe-MI Commands & Effects Log Page: May Support 00:12:56.184 Data Area 4 for Telemetry Log: Not Supported 00:12:56.184 Error Log Page Entries Supported: 128 00:12:56.184 Keep Alive: Supported 00:12:56.184 Keep Alive Granularity: 10000 ms 00:12:56.184 00:12:56.184 NVM Command Set Attributes
00:12:56.184 ========================== 00:12:56.184 Submission Queue Entry Size 00:12:56.184 Max: 64 00:12:56.184 Min: 64 00:12:56.184 Completion Queue Entry Size 00:12:56.184 Max: 16 00:12:56.184 Min: 16 00:12:56.184 Number of Namespaces: 32 00:12:56.184 Compare Command: Supported 00:12:56.184 Write Uncorrectable Command: Not Supported 00:12:56.184 Dataset Management Command: Supported 00:12:56.184 Write Zeroes Command: Supported 00:12:56.184 Set Features Save Field: Not Supported 00:12:56.184 Reservations: Not Supported 00:12:56.184 Timestamp: Not Supported 00:12:56.184 Copy: Supported 00:12:56.184 Volatile Write Cache: Present 00:12:56.184 Atomic Write Unit (Normal): 1 00:12:56.184 Atomic Write Unit (PFail): 1 00:12:56.184 Atomic Compare & Write Unit: 1 00:12:56.184 Fused Compare & Write: Supported 00:12:56.184 Scatter-Gather List 00:12:56.184 SGL Command Set: Supported (Dword aligned) 00:12:56.184 SGL Keyed: Not Supported 00:12:56.184 SGL Bit Bucket Descriptor: Not Supported 00:12:56.184 SGL Metadata Pointer: Not Supported 00:12:56.184 Oversized SGL: Not Supported 00:12:56.184 SGL Metadata Address: Not Supported 00:12:56.184 SGL Offset: Not Supported 00:12:56.184 Transport SGL Data Block: Not Supported 00:12:56.184 Replay Protected Memory Block: Not Supported 00:12:56.184 00:12:56.184 Firmware Slot Information 00:12:56.184 ========================= 00:12:56.184 Active slot: 1 00:12:56.184 Slot 1 Firmware Revision: 24.09 00:12:56.184 00:12:56.184 00:12:56.184 Commands Supported and Effects 00:12:56.184 ============================== 00:12:56.184 Admin Commands 00:12:56.184 -------------- 00:12:56.184 Get Log Page (02h): Supported 00:12:56.184 Identify (06h): Supported 00:12:56.184 Abort (08h): Supported 00:12:56.184 Set Features (09h): Supported 00:12:56.184 Get Features (0Ah): Supported 00:12:56.184 Asynchronous Event Request (0Ch): Supported 00:12:56.184 Keep Alive (18h): Supported 00:12:56.184 I/O Commands 00:12:56.184 ------------ 00:12:56.184 Flush (00h): Supported LBA-Change 00:12:56.184 Write (01h): Supported LBA-Change 00:12:56.184 Read (02h): Supported 00:12:56.184 Compare (05h): Supported 00:12:56.184 Write Zeroes (08h): Supported LBA-Change 00:12:56.184 Dataset Management (09h): Supported LBA-Change 00:12:56.184 Copy (19h): Supported LBA-Change 00:12:56.184 00:12:56.184 Error Log 00:12:56.184 ========= 00:12:56.184 00:12:56.184 Arbitration 00:12:56.184 =========== 00:12:56.184 Arbitration Burst: 1 00:12:56.184 00:12:56.184 Power Management 00:12:56.184 ================ 00:12:56.184 Number of Power States: 1 00:12:56.184 Current Power State: Power State #0 00:12:56.184 Power State #0: 00:12:56.184 Max Power: 0.00 W 00:12:56.184 Non-Operational State: Operational 00:12:56.184 Entry Latency: Not Reported 00:12:56.184 Exit Latency: Not Reported 00:12:56.184 Relative Read Throughput: 0 00:12:56.184 Relative Read Latency: 0 00:12:56.184 Relative Write Throughput: 0 00:12:56.184 Relative Write Latency: 0 00:12:56.184 Idle Power: Not Reported 00:12:56.184 Active Power: Not Reported 00:12:56.184 Non-Operational Permissive Mode: Not Supported 00:12:56.184 00:12:56.184 Health Information 00:12:56.184 ================== 00:12:56.184 Critical Warnings: 00:12:56.184 Available Spare Space: OK 00:12:56.184 Temperature: OK 00:12:56.184 Device Reliability: OK 00:12:56.184 Read Only: No 00:12:56.184 Volatile Memory Backup: OK 00:12:56.184 Current Temperature: 0 Kelvin (-273 Celsius) 00:12:56.184 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:12:56.184 Available Spare: 0% 00:12:56.184 
[2024-07-15 15:54:25.108628] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:12:56.184 [2024-07-15 15:54:25.108641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:12:56.184 [2024-07-15 15:54:25.108666] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Prepare to destruct SSD 00:12:56.184 [2024-07-15 15:54:25.108674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:56.184 [2024-07-15 15:54:25.108680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:56.184 [2024-07-15 15:54:25.108685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:56.184 [2024-07-15 15:54:25.108690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:56.184 [2024-07-15 15:54:25.108743] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:12:56.184 [2024-07-15 15:54:25.108752] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 00:12:56.184 [2024-07-15 15:54:25.109745] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:12:56.184 [2024-07-15 15:54:25.109791] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] RTD3E = 0 us 00:12:56.184 [2024-07-15 15:54:25.109796] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown timeout = 10000 ms 00:12:56.184 [2024-07-15 15:54:25.110751] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 00:12:56.184 [2024-07-15 15:54:25.110761] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown complete in 0 milliseconds 00:12:56.184 [2024-07-15 15:54:25.110807] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl 00:12:56.442 [2024-07-15 15:54:25.117234] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:12:56.442 Available Spare Threshold: 0% 00:12:56.442 Life Percentage Used: 0% 00:12:56.442 Data Units Read: 0 00:12:56.442 Data Units Written: 0 00:12:56.442 Host Read Commands: 0 00:12:56.442 Host Write Commands: 0 00:12:56.442 Controller Busy Time: 0 minutes 00:12:56.442 Power Cycles: 0 00:12:56.442 Power On Hours: 0 hours 00:12:56.442 Unsafe Shutdowns: 0 00:12:56.442 Unrecoverable Media Errors: 0 00:12:56.442 Lifetime Error Log Entries: 0 00:12:56.442 Warning Temperature Time: 0 minutes 00:12:56.442 Critical Temperature Time: 0 minutes 00:12:56.442 00:12:56.442 Number of Queues 00:12:56.442 ================ 00:12:56.442 Number of I/O Submission Queues: 127 00:12:56.442 Number of I/O Completion Queues: 127 00:12:56.442 00:12:56.442 Active Namespaces 00:12:56.442 ================= 00:12:56.442 Namespace ID:1 00:12:56.442 Error Recovery Timeout: Unlimited 00:12:56.442 Command
Set Identifier: NVM (00h) 00:12:56.442 Deallocate: Supported 00:12:56.442 Deallocated/Unwritten Error: Not Supported 00:12:56.442 Deallocated Read Value: Unknown 00:12:56.442 Deallocate in Write Zeroes: Not Supported 00:12:56.442 Deallocated Guard Field: 0xFFFF 00:12:56.442 Flush: Supported 00:12:56.442 Reservation: Supported 00:12:56.442 Namespace Sharing Capabilities: Multiple Controllers 00:12:56.442 Size (in LBAs): 131072 (0GiB) 00:12:56.442 Capacity (in LBAs): 131072 (0GiB) 00:12:56.442 Utilization (in LBAs): 131072 (0GiB) 00:12:56.442 NGUID: BD9AD292F7754983926228907C2BFE41 00:12:56.442 UUID: bd9ad292-f775-4983-9262-28907c2bfe41 00:12:56.442 Thin Provisioning: Not Supported 00:12:56.442 Per-NS Atomic Units: Yes 00:12:56.442 Atomic Boundary Size (Normal): 0 00:12:56.442 Atomic Boundary Size (PFail): 0 00:12:56.442 Atomic Boundary Offset: 0 00:12:56.442 Maximum Single Source Range Length: 65535 00:12:56.442 Maximum Copy Length: 65535 00:12:56.442 Maximum Source Range Count: 1 00:12:56.442 NGUID/EUI64 Never Reused: No 00:12:56.442 Namespace Write Protected: No 00:12:56.442 Number of LBA Formats: 1 00:12:56.442 Current LBA Format: LBA Format #00 00:12:56.442 LBA Format #00: Data Size: 512 Metadata Size: 0 00:12:56.442 00:12:56.442 15:54:25 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:12:56.442 EAL: No free 2048 kB hugepages reported on node 1 00:12:56.442 [2024-07-15 15:54:25.331019] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:13:01.704 Initializing NVMe Controllers 00:13:01.704 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:13:01.704 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:13:01.704 Initialization complete. Launching workers. 00:13:01.704 ======================================================== 00:13:01.704 Latency(us) 00:13:01.704 Device Information : IOPS MiB/s Average min max 00:13:01.704 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 39928.50 155.97 3205.32 957.96 8596.19 00:13:01.704 ======================================================== 00:13:01.704 Total : 39928.50 155.97 3205.32 957.96 8596.19 00:13:01.704 00:13:01.704 [2024-07-15 15:54:30.349129] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:13:01.704 15:54:30 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:13:01.704 EAL: No free 2048 kB hugepages reported on node 1 00:13:01.704 [2024-07-15 15:54:30.568171] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:13:06.969 Initializing NVMe Controllers 00:13:06.969 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:13:06.969 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:13:06.969 Initialization complete. Launching workers. 
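In the read run above, the IOPS and MiB/s columns of the perf table are consistent by construction: throughput is just IOPS times the -o 4096 transfer size. A quick check (mib_per_s is an illustrative helper, not part of spdk_nvme_perf):

def mib_per_s(iops: float, io_size_bytes: int = 4096) -> float:
    # Throughput implied by the perf table: IOPS x I/O size, in MiB/s.
    return iops * io_size_bytes / (1024 * 1024)

# Read run above: 39928.50 IOPS at -o 4096 -> ~155.97 MiB/s, matching the table.
print(round(mib_per_s(39928.50), 2))  # 155.97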
00:13:06.969 ======================================================== 00:13:06.969 Latency(us) 00:13:06.969 Device Information : IOPS MiB/s Average min max 00:13:06.969 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 16039.31 62.65 7979.69 6988.24 8971.24 00:13:06.969 ======================================================== 00:13:06.969 Total : 16039.31 62.65 7979.69 6988.24 8971.24 00:13:06.969 00:13:06.969 [2024-07-15 15:54:35.602992] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:13:06.969 15:54:35 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:13:06.969 EAL: No free 2048 kB hugepages reported on node 1 00:13:06.969 [2024-07-15 15:54:35.786953] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:13:12.238 [2024-07-15 15:54:40.868596] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:13:12.238 Initializing NVMe Controllers 00:13:12.238 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:13:12.238 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:13:12.238 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:13:12.238 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:13:12.238 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:13:12.238 Initialization complete. Launching workers. 00:13:12.238 Starting thread on core 2 00:13:12.238 Starting thread on core 3 00:13:12.238 Starting thread on core 1 00:13:12.238 15:54:40 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:13:12.238 EAL: No free 2048 kB hugepages reported on node 1 00:13:12.238 [2024-07-15 15:54:41.146645] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:13:15.526 [2024-07-15 15:54:44.202342] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:13:15.526 Initializing NVMe Controllers 00:13:15.526 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:13:15.526 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:13:15.526 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:13:15.526 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:13:15.526 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:13:15.526 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:13:15.526 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:13:15.526 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:13:15.526 Initialization complete. Launching workers. 
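Both spdk_nvme_perf runs used -q 128, and the reported averages agree with Little's law: outstanding I/O equals IOPS times mean latency, so the read run (39928.50 IOPS x 3205.32 us) and the write run (16039.31 IOPS x 7979.69 us) each imply roughly 128 requests in flight. A sketch of the check (outstanding_ios is an illustrative helper):

def outstanding_ios(iops: float, avg_latency_us: float) -> float:
    # Little's law: average concurrency = arrival rate x mean time in system.
    return iops * avg_latency_us / 1e6

# Both values land at ~128.0, matching the -q 128 queue depth.
print(round(outstanding_ios(39928.50, 3205.32), 1))
print(round(outstanding_ios(16039.31, 7979.69), 1))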
00:13:15.526 Starting thread on core 1 with urgent priority queue 00:13:15.526 Starting thread on core 2 with urgent priority queue 00:13:15.526 Starting thread on core 3 with urgent priority queue 00:13:15.526 Starting thread on core 0 with urgent priority queue 00:13:15.526 SPDK bdev Controller (SPDK1 ) core 0: 9864.33 IO/s 10.14 secs/100000 ios 00:13:15.526 SPDK bdev Controller (SPDK1 ) core 1: 8246.67 IO/s 12.13 secs/100000 ios 00:13:15.526 SPDK bdev Controller (SPDK1 ) core 2: 7567.00 IO/s 13.22 secs/100000 ios 00:13:15.526 SPDK bdev Controller (SPDK1 ) core 3: 7316.67 IO/s 13.67 secs/100000 ios 00:13:15.526 ======================================================== 00:13:15.526 00:13:15.526 15:54:44 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:13:15.526 EAL: No free 2048 kB hugepages reported on node 1 00:13:15.783 [2024-07-15 15:54:44.473505] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:13:15.783 Initializing NVMe Controllers 00:13:15.783 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:13:15.783 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:13:15.783 Namespace ID: 1 size: 0GB 00:13:15.783 Initialization complete. 00:13:15.783 INFO: using host memory buffer for IO 00:13:15.783 Hello world! 00:13:15.783 [2024-07-15 15:54:44.506707] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:13:15.783 15:54:44 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:13:15.783 EAL: No free 2048 kB hugepages reported on node 1 00:13:16.041 [2024-07-15 15:54:44.777912] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:13:16.975 Initializing NVMe Controllers 00:13:16.975 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:13:16.975 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:13:16.975 Initialization complete. Launching workers. 
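In the arbitration table above, the secs/100000 ios column follows directly from the per-core IO/s rate; core 0 at 9864.33 IO/s needs about 10.14 seconds for 100000 I/Os. A one-line check (secs_per_ios is an illustrative helper):

def secs_per_ios(io_per_s: float, n_ios: int = 100_000) -> float:
    # Time to complete n_ios at the reported per-core rate.
    return n_ios / io_per_s

# Core 0 above: 9864.33 IO/s -> ~10.14 secs/100000 ios, as printed.
print(round(secs_per_ios(9864.33), 2))  # 10.14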
00:13:16.975 submit (in ns) avg, min, max = 7412.4, 3232.2, 3999907.8 00:13:16.975 complete (in ns) avg, min, max = 19259.4, 1765.2, 7988816.5 00:13:16.975 00:13:16.975 Submit histogram 00:13:16.975 ================ 00:13:16.975 Range in us Cumulative Count 00:13:16.975 3.228 - 3.242: 0.0061% ( 1) 00:13:16.975 3.256 - 3.270: 0.0122% ( 1) 00:13:16.975 3.270 - 3.283: 0.0428% ( 5) 00:13:16.975 3.283 - 3.297: 0.1958% ( 25) 00:13:16.975 3.297 - 3.311: 0.4282% ( 38) 00:13:16.975 3.311 - 3.325: 0.8136% ( 63) 00:13:16.975 3.325 - 3.339: 1.4926% ( 111) 00:13:16.975 3.339 - 3.353: 3.3462% ( 303) 00:13:16.975 3.353 - 3.367: 7.0839% ( 611) 00:13:16.975 3.367 - 3.381: 12.0511% ( 812) 00:13:16.975 3.381 - 3.395: 18.2725% ( 1017) 00:13:16.975 3.395 - 3.409: 24.6590% ( 1044) 00:13:16.975 3.409 - 3.423: 30.5683% ( 966) 00:13:16.975 3.423 - 3.437: 36.5205% ( 973) 00:13:16.975 3.437 - 3.450: 41.4633% ( 808) 00:13:16.975 3.450 - 3.464: 46.0023% ( 742) 00:13:16.975 3.464 - 3.478: 50.7433% ( 775) 00:13:16.975 3.478 - 3.492: 55.3802% ( 758) 00:13:16.975 3.492 - 3.506: 61.5893% ( 1015) 00:13:16.975 3.506 - 3.520: 67.3212% ( 937) 00:13:16.975 3.520 - 3.534: 72.0927% ( 780) 00:13:16.976 3.534 - 3.548: 77.1640% ( 829) 00:13:16.976 3.548 - 3.562: 81.4400% ( 699) 00:13:16.976 3.562 - 3.590: 86.5052% ( 828) 00:13:16.976 3.590 - 3.617: 88.0223% ( 248) 00:13:16.976 3.617 - 3.645: 88.8236% ( 131) 00:13:16.976 3.645 - 3.673: 89.8758% ( 172) 00:13:16.976 3.673 - 3.701: 91.4174% ( 252) 00:13:16.976 3.701 - 3.729: 93.0874% ( 273) 00:13:16.976 3.729 - 3.757: 94.7574% ( 273) 00:13:16.976 3.757 - 3.784: 96.3112% ( 254) 00:13:16.976 3.784 - 3.812: 97.7611% ( 237) 00:13:16.976 3.812 - 3.840: 98.5869% ( 135) 00:13:16.976 3.840 - 3.868: 99.1252% ( 88) 00:13:16.976 3.868 - 3.896: 99.4556% ( 54) 00:13:16.976 3.896 - 3.923: 99.5840% ( 21) 00:13:16.976 3.923 - 3.951: 99.6330% ( 8) 00:13:16.976 3.979 - 4.007: 99.6452% ( 2) 00:13:16.976 4.035 - 4.063: 99.6513% ( 1) 00:13:16.976 5.287 - 5.315: 99.6574% ( 1) 00:13:16.976 5.398 - 5.426: 99.6635% ( 1) 00:13:16.976 5.426 - 5.454: 99.6697% ( 1) 00:13:16.976 5.482 - 5.510: 99.6758% ( 1) 00:13:16.976 5.537 - 5.565: 99.6819% ( 1) 00:13:16.976 5.732 - 5.760: 99.6941% ( 2) 00:13:16.976 5.816 - 5.843: 99.7003% ( 1) 00:13:16.976 6.289 - 6.317: 99.7064% ( 1) 00:13:16.976 6.539 - 6.567: 99.7125% ( 1) 00:13:16.976 6.623 - 6.650: 99.7247% ( 2) 00:13:16.976 6.650 - 6.678: 99.7308% ( 1) 00:13:16.976 6.706 - 6.734: 99.7370% ( 1) 00:13:16.976 6.734 - 6.762: 99.7492% ( 2) 00:13:16.976 6.873 - 6.901: 99.7553% ( 1) 00:13:16.976 6.901 - 6.929: 99.7614% ( 1) 00:13:16.976 6.929 - 6.957: 99.7737% ( 2) 00:13:16.976 6.957 - 6.984: 99.7798% ( 1) 00:13:16.976 7.012 - 7.040: 99.7981% ( 3) 00:13:16.976 7.123 - 7.179: 99.8042% ( 1) 00:13:16.976 7.179 - 7.235: 99.8104% ( 1) 00:13:16.976 7.235 - 7.290: 99.8165% ( 1) 00:13:16.976 7.290 - 7.346: 99.8226% ( 1) 00:13:16.976 7.513 - 7.569: 99.8287% ( 1) 00:13:16.976 7.624 - 7.680: 99.8348% ( 1) 00:13:16.976 7.736 - 7.791: 99.8409% ( 1) 00:13:16.976 7.847 - 7.903: 99.8471% ( 1) 00:13:16.976 7.958 - 8.014: 99.8532% ( 1) 00:13:16.976 8.014 - 8.070: 99.8593% ( 1) 00:13:16.976 8.070 - 8.125: 99.8654% ( 1) 00:13:16.976 8.125 - 8.181: 99.8715% ( 1) 00:13:16.976 8.737 - 8.793: 99.8838% ( 2) 00:13:16.976 10.129 - 10.184: 99.8899% ( 1) 00:13:16.976 10.407 - 10.463: 99.8960% ( 1) 00:13:16.976 13.802 - 13.857: 99.9021% ( 1) 00:13:16.976 3989.148 - 4017.642: 100.0000% ( 16) 00:13:16.976 00:13:16.976 Complete histogram 00:13:16.976 ================== 00:13:16.976 Range in us Cumulative Count 
00:13:16.976 1.760 - 1.767: 0.0061% ( 1) 00:13:16.976 1.774 - 1.781: 0.0306% ( 4) 00:13:16.976 1.781 - 1.795: 0.0489% ( 3) [2024-07-15 15:54:45.801830] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:13:16.976 1.795 - 1.809: 0.0612% ( 2) 00:13:16.976 1.809 - 1.823: 0.1407% ( 13) 00:13:16.976 1.823 - 1.837: 2.1717% ( 332) 00:13:16.976 1.837 - 1.850: 6.5822% ( 721) 00:13:16.976 1.850 - 1.864: 9.1148% ( 414) 00:13:16.976 1.864 - 1.878: 10.3199% ( 197) 00:13:16.976 1.878 - 1.892: 23.3927% ( 2137) 00:13:16.976 1.892 - 1.906: 68.1593% ( 7318) 00:13:16.976 1.906 - 1.920: 91.0198% ( 3737) 00:13:16.976 1.920 - 1.934: 95.1673% ( 678) 00:13:16.976 1.934 - 1.948: 96.1094% ( 154) 00:13:16.976 1.948 - 1.962: 96.8496% ( 121) 00:13:16.976 1.962 - 1.976: 98.0730% ( 200) 00:13:16.976 1.976 - 1.990: 98.9478% ( 143) 00:13:16.976 1.990 - 2.003: 99.2476% ( 49) 00:13:16.976 2.003 - 2.017: 99.3026% ( 9) 00:13:16.976 2.017 - 2.031: 99.3149% ( 2) 00:13:16.976 2.031 - 2.045: 99.3210% ( 1) 00:13:16.976 2.073 - 2.087: 99.3271% ( 1) 00:13:16.976 2.087 - 2.101: 99.3393% ( 2) 00:13:16.976 2.129 - 2.143: 99.3454% ( 1) 00:13:16.976 2.254 - 2.268: 99.3516% ( 1) 00:13:16.976 3.840 - 3.868: 99.3577% ( 1) 00:13:16.976 3.896 - 3.923: 99.3699% ( 2) 00:13:16.976 4.035 - 4.063: 99.3760% ( 1) 00:13:16.976 4.146 - 4.174: 99.3821% ( 1) 00:13:16.976 4.285 - 4.313: 99.3883% ( 1) 00:13:16.976 4.591 - 4.619: 99.3944% ( 1) 00:13:16.976 4.730 - 4.758: 99.4005% ( 1) 00:13:16.976 4.786 - 4.814: 99.4066% ( 1) 00:13:16.976 4.953 - 4.981: 99.4127% ( 1) 00:13:16.976 4.981 - 5.009: 99.4250% ( 2) 00:13:16.976 5.092 - 5.120: 99.4311% ( 1) 00:13:16.976 5.120 - 5.148: 99.4372% ( 1) 00:13:16.976 5.148 - 5.176: 99.4433% ( 1) 00:13:16.976 5.398 - 5.426: 99.4556% ( 2) 00:13:16.976 5.454 - 5.482: 99.4617% ( 1) 00:13:16.976 5.482 - 5.510: 99.4739% ( 2) 00:13:16.976 5.760 - 5.788: 99.4800% ( 1) 00:13:16.976 5.927 - 5.955: 99.4861% ( 1) 00:13:16.976 5.983 - 6.010: 99.4984% ( 2) 00:13:16.976 6.066 - 6.094: 99.5045% ( 1) 00:13:16.976 6.094 - 6.122: 99.5106% ( 1) 00:13:16.976 6.150 - 6.177: 99.5167% ( 1) 00:13:16.976 6.344 - 6.372: 99.5228% ( 1) 00:13:16.976 6.400 - 6.428: 99.5290% ( 1) 00:13:16.976 6.706 - 6.734: 99.5351% ( 1) 00:13:16.976 7.179 - 7.235: 99.5412% ( 1) 00:13:16.976 7.791 - 7.847: 99.5473% ( 1) 00:13:16.976 8.070 - 8.125: 99.5534% ( 1) 00:13:16.976 8.125 - 8.181: 99.5596% ( 1) 00:13:16.976 20.035 - 20.146: 99.5657% ( 1) 00:13:16.976 25.377 - 25.489: 99.5718% ( 1) 00:13:16.976 3989.148 - 4017.642: 99.9939% ( 69) 00:13:16.976 7978.296 - 8035.283: 100.0000% ( 1) 00:13:16.976 00:13:16.976 15:54:45 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 00:13:16.976 15:54:45 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1 00:13:16.976 15:54:45 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1 00:13:16.976 15:54:45 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 00:13:16.976 15:54:45 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:13:17.235 [ 00:13:17.235 { 00:13:17.235 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:13:17.235 "subtype": "Discovery", 00:13:17.235 "listen_addresses": [], 00:13:17.235 "allow_any_host": true, 00:13:17.235 "hosts": []
00:13:17.235 }, 00:13:17.235 { 00:13:17.235 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:13:17.235 "subtype": "NVMe", 00:13:17.235 "listen_addresses": [ 00:13:17.235 { 00:13:17.235 "trtype": "VFIOUSER", 00:13:17.235 "adrfam": "IPv4", 00:13:17.235 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:13:17.235 "trsvcid": "0" 00:13:17.235 } 00:13:17.235 ], 00:13:17.235 "allow_any_host": true, 00:13:17.235 "hosts": [], 00:13:17.235 "serial_number": "SPDK1", 00:13:17.235 "model_number": "SPDK bdev Controller", 00:13:17.235 "max_namespaces": 32, 00:13:17.235 "min_cntlid": 1, 00:13:17.235 "max_cntlid": 65519, 00:13:17.235 "namespaces": [ 00:13:17.235 { 00:13:17.235 "nsid": 1, 00:13:17.235 "bdev_name": "Malloc1", 00:13:17.235 "name": "Malloc1", 00:13:17.235 "nguid": "BD9AD292F7754983926228907C2BFE41", 00:13:17.235 "uuid": "bd9ad292-f775-4983-9262-28907c2bfe41" 00:13:17.235 } 00:13:17.235 ] 00:13:17.235 }, 00:13:17.235 { 00:13:17.235 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:13:17.235 "subtype": "NVMe", 00:13:17.235 "listen_addresses": [ 00:13:17.235 { 00:13:17.235 "trtype": "VFIOUSER", 00:13:17.235 "adrfam": "IPv4", 00:13:17.235 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:13:17.235 "trsvcid": "0" 00:13:17.235 } 00:13:17.235 ], 00:13:17.235 "allow_any_host": true, 00:13:17.235 "hosts": [], 00:13:17.235 "serial_number": "SPDK2", 00:13:17.235 "model_number": "SPDK bdev Controller", 00:13:17.235 "max_namespaces": 32, 00:13:17.235 "min_cntlid": 1, 00:13:17.235 "max_cntlid": 65519, 00:13:17.235 "namespaces": [ 00:13:17.235 { 00:13:17.235 "nsid": 1, 00:13:17.235 "bdev_name": "Malloc2", 00:13:17.235 "name": "Malloc2", 00:13:17.235 "nguid": "2B79A51708B340DE892EB7FCE5EC7B85", 00:13:17.235 "uuid": "2b79a517-08b3-40de-892e-b7fce5ec7b85" 00:13:17.235 } 00:13:17.235 ] 00:13:17.235 } 00:13:17.235 ] 00:13:17.235 15:54:46 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:13:17.235 15:54:46 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=3692023 00:13:17.235 15:54:46 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:13:17.236 15:54:46 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:13:17.236 15:54:46 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1265 -- # local i=0 00:13:17.236 15:54:46 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:13:17.236 15:54:46 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:13:17.236 15:54:46 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # return 0 00:13:17.236 15:54:46 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:13:17.236 15:54:46 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:13:17.236 EAL: No free 2048 kB hugepages reported on node 1 00:13:17.494 [2024-07-15 15:54:46.180638] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:13:17.494 Malloc3 00:13:17.494 15:54:46 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:13:17.494 [2024-07-15 15:54:46.399340] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:13:17.494 15:54:46 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:13:17.753 Asynchronous Event Request test 00:13:17.753 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:13:17.753 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:13:17.753 Registering asynchronous event callbacks... 00:13:17.753 Starting namespace attribute notice tests for all controllers... 00:13:17.753 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:13:17.753 aer_cb - Changed Namespace 00:13:17.753 Cleaning up... 00:13:17.753 [ 00:13:17.753 { 00:13:17.753 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:13:17.753 "subtype": "Discovery", 00:13:17.753 "listen_addresses": [], 00:13:17.753 "allow_any_host": true, 00:13:17.753 "hosts": [] 00:13:17.753 }, 00:13:17.753 { 00:13:17.753 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:13:17.753 "subtype": "NVMe", 00:13:17.753 "listen_addresses": [ 00:13:17.753 { 00:13:17.753 "trtype": "VFIOUSER", 00:13:17.753 "adrfam": "IPv4", 00:13:17.753 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:13:17.753 "trsvcid": "0" 00:13:17.753 } 00:13:17.753 ], 00:13:17.753 "allow_any_host": true, 00:13:17.753 "hosts": [], 00:13:17.753 "serial_number": "SPDK1", 00:13:17.753 "model_number": "SPDK bdev Controller", 00:13:17.753 "max_namespaces": 32, 00:13:17.753 "min_cntlid": 1, 00:13:17.753 "max_cntlid": 65519, 00:13:17.753 "namespaces": [ 00:13:17.753 { 00:13:17.753 "nsid": 1, 00:13:17.753 "bdev_name": "Malloc1", 00:13:17.753 "name": "Malloc1", 00:13:17.753 "nguid": "BD9AD292F7754983926228907C2BFE41", 00:13:17.753 "uuid": "bd9ad292-f775-4983-9262-28907c2bfe41" 00:13:17.754 }, 00:13:17.754 { 00:13:17.754 "nsid": 2, 00:13:17.754 "bdev_name": "Malloc3", 00:13:17.754 "name": "Malloc3", 00:13:17.754 "nguid": "E91FACFB726A4FF5A7ECE54EB0BB2ACC", 00:13:17.754 "uuid": "e91facfb-726a-4ff5-a7ec-e54eb0bb2acc" 00:13:17.754 } 00:13:17.754 ] 00:13:17.754 }, 00:13:17.754 { 00:13:17.754 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:13:17.754 "subtype": "NVMe", 00:13:17.754 "listen_addresses": [ 00:13:17.754 { 00:13:17.754 "trtype": "VFIOUSER", 00:13:17.754 "adrfam": "IPv4", 00:13:17.754 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:13:17.754 "trsvcid": "0" 00:13:17.754 } 00:13:17.754 ], 00:13:17.754 "allow_any_host": true, 00:13:17.754 "hosts": [], 00:13:17.754 "serial_number": "SPDK2", 00:13:17.754 "model_number": "SPDK bdev Controller", 00:13:17.754 
"max_namespaces": 32, 00:13:17.754 "min_cntlid": 1, 00:13:17.754 "max_cntlid": 65519, 00:13:17.754 "namespaces": [ 00:13:17.754 { 00:13:17.754 "nsid": 1, 00:13:17.754 "bdev_name": "Malloc2", 00:13:17.754 "name": "Malloc2", 00:13:17.754 "nguid": "2B79A51708B340DE892EB7FCE5EC7B85", 00:13:17.754 "uuid": "2b79a517-08b3-40de-892e-b7fce5ec7b85" 00:13:17.754 } 00:13:17.754 ] 00:13:17.754 } 00:13:17.754 ] 00:13:17.754 15:54:46 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 3692023 00:13:17.754 15:54:46 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:13:17.754 15:54:46 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:13:17.754 15:54:46 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:13:17.754 15:54:46 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:13:17.754 [2024-07-15 15:54:46.626035] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 00:13:17.754 [2024-07-15 15:54:46.626083] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3692041 ] 00:13:17.754 EAL: No free 2048 kB hugepages reported on node 1 00:13:17.754 [2024-07-15 15:54:46.654639] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:13:17.754 [2024-07-15 15:54:46.664507] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:13:17.754 [2024-07-15 15:54:46.664528] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f849907c000 00:13:17.754 [2024-07-15 15:54:46.665496] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:17.754 [2024-07-15 15:54:46.666505] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:17.754 [2024-07-15 15:54:46.667513] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:17.754 [2024-07-15 15:54:46.668521] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:13:17.754 [2024-07-15 15:54:46.669522] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:13:17.754 [2024-07-15 15:54:46.670541] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:17.754 [2024-07-15 15:54:46.671542] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:13:17.754 [2024-07-15 15:54:46.672547] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:17.754 [2024-07-15 15:54:46.673550] vfio_user_pci.c: 
304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:13:17.754 [2024-07-15 15:54:46.673560] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f8499071000 00:13:17.754 [2024-07-15 15:54:46.674503] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:13:18.015 [2024-07-15 15:54:46.687025] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:13:18.015 [2024-07-15 15:54:46.687045] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to connect adminq (no timeout) 00:13:18.015 [2024-07-15 15:54:46.689116] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:13:18.015 [2024-07-15 15:54:46.689151] nvme_pcie_common.c: 132:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:13:18.015 [2024-07-15 15:54:46.689215] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for connect adminq (no timeout) 00:13:18.015 [2024-07-15 15:54:46.689233] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs (no timeout) 00:13:18.015 [2024-07-15 15:54:46.689238] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs wait for vs (no timeout) 00:13:18.015 [2024-07-15 15:54:46.690229] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:13:18.015 [2024-07-15 15:54:46.690238] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap (no timeout) 00:13:18.015 [2024-07-15 15:54:46.690244] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap wait for cap (no timeout) 00:13:18.015 [2024-07-15 15:54:46.691124] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:13:18.015 [2024-07-15 15:54:46.691132] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en (no timeout) 00:13:18.015 [2024-07-15 15:54:46.691139] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en wait for cc (timeout 15000 ms) 00:13:18.015 [2024-07-15 15:54:46.692127] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:13:18.015 [2024-07-15 15:54:46.692135] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:13:18.015 [2024-07-15 15:54:46.694229] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:13:18.015 [2024-07-15 15:54:46.694239] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 0 && CSTS.RDY = 0 00:13:18.015 [2024-07-15 15:54:46.694243] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user2/2] setting state to controller is disabled (timeout 15000 ms) 00:13:18.015 [2024-07-15 15:54:46.694249] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:13:18.015 [2024-07-15 15:54:46.694353] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Setting CC.EN = 1 00:13:18.015 [2024-07-15 15:54:46.694357] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:13:18.015 [2024-07-15 15:54:46.694361] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:13:18.015 [2024-07-15 15:54:46.695142] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:13:18.015 [2024-07-15 15:54:46.696149] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:13:18.015 [2024-07-15 15:54:46.697160] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:13:18.015 [2024-07-15 15:54:46.698163] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:13:18.015 [2024-07-15 15:54:46.698199] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:13:18.015 [2024-07-15 15:54:46.699170] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:13:18.015 [2024-07-15 15:54:46.699178] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:13:18.015 [2024-07-15 15:54:46.699183] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to reset admin queue (timeout 30000 ms) 00:13:18.015 [2024-07-15 15:54:46.699199] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller (no timeout) 00:13:18.015 [2024-07-15 15:54:46.699209] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify controller (timeout 30000 ms) 00:13:18.015 [2024-07-15 15:54:46.699219] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:13:18.015 [2024-07-15 15:54:46.699226] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:13:18.015 [2024-07-15 15:54:46.699237] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:13:18.015 [2024-07-15 15:54:46.705235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:13:18.015 [2024-07-15 15:54:46.705246] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_xfer_size 131072 00:13:18.015 [2024-07-15 15:54:46.705253] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user2/2] MDTS max_xfer_size 131072 00:13:18.015 [2024-07-15 15:54:46.705257] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CNTLID 0x0001 00:13:18.015 [2024-07-15 15:54:46.705261] nvme_ctrlr.c:2071:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:13:18.015 [2024-07-15 15:54:46.705265] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_sges 1 00:13:18.015 [2024-07-15 15:54:46.705269] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] fuses compare and write: 1 00:13:18.015 [2024-07-15 15:54:46.705273] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to configure AER (timeout 30000 ms) 00:13:18.015 [2024-07-15 15:54:46.705280] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for configure aer (timeout 30000 ms) 00:13:18.016 [2024-07-15 15:54:46.705289] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:13:18.016 [2024-07-15 15:54:46.713232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:13:18.016 [2024-07-15 15:54:46.713246] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:13:18.016 [2024-07-15 15:54:46.713254] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:13:18.016 [2024-07-15 15:54:46.713261] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:13:18.016 [2024-07-15 15:54:46.713268] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:13:18.016 [2024-07-15 15:54:46.713275] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set keep alive timeout (timeout 30000 ms) 00:13:18.016 [2024-07-15 15:54:46.713283] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:13:18.016 [2024-07-15 15:54:46.713291] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:13:18.016 [2024-07-15 15:54:46.721228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:13:18.016 [2024-07-15 15:54:46.721235] nvme_ctrlr.c:3010:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Controller adjusted keep alive timeout to 0 ms 00:13:18.016 [2024-07-15 15:54:46.721240] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller iocs specific (timeout 30000 ms) 00:13:18.016 [2024-07-15 15:54:46.721246] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set number of queues (timeout 30000 ms) 00:13:18.016 [2024-07-15 15:54:46.721251] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: 
*DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set number of queues (timeout 30000 ms) 00:13:18.016 [2024-07-15 15:54:46.721259] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:13:18.016 [2024-07-15 15:54:46.729229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:13:18.016 [2024-07-15 15:54:46.729281] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify active ns (timeout 30000 ms) 00:13:18.016 [2024-07-15 15:54:46.729288] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify active ns (timeout 30000 ms) 00:13:18.016 [2024-07-15 15:54:46.729295] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:13:18.016 [2024-07-15 15:54:46.729299] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:13:18.016 [2024-07-15 15:54:46.729305] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:13:18.016 [2024-07-15 15:54:46.737229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:13:18.016 [2024-07-15 15:54:46.737239] nvme_ctrlr.c:4693:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Namespace 1 was added 00:13:18.016 [2024-07-15 15:54:46.737247] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns (timeout 30000 ms) 00:13:18.016 [2024-07-15 15:54:46.737254] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify ns (timeout 30000 ms) 00:13:18.016 [2024-07-15 15:54:46.737260] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:13:18.016 [2024-07-15 15:54:46.737264] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:13:18.016 [2024-07-15 15:54:46.737270] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:13:18.016 [2024-07-15 15:54:46.745231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:13:18.016 [2024-07-15 15:54:46.745243] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify namespace id descriptors (timeout 30000 ms) 00:13:18.016 [2024-07-15 15:54:46.745251] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:13:18.016 [2024-07-15 15:54:46.745260] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:13:18.016 [2024-07-15 15:54:46.745263] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:13:18.016 [2024-07-15 15:54:46.745269] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:13:18.016 [2024-07-15 15:54:46.753230] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:13:18.016 [2024-07-15 15:54:46.753239] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns iocs specific (timeout 30000 ms) 00:13:18.016 [2024-07-15 15:54:46.753245] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported log pages (timeout 30000 ms) 00:13:18.016 [2024-07-15 15:54:46.753253] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported features (timeout 30000 ms) 00:13:18.016 [2024-07-15 15:54:46.753258] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host behavior support feature (timeout 30000 ms) 00:13:18.016 [2024-07-15 15:54:46.753263] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set doorbell buffer config (timeout 30000 ms) 00:13:18.016 [2024-07-15 15:54:46.753267] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host ID (timeout 30000 ms) 00:13:18.016 [2024-07-15 15:54:46.753271] nvme_ctrlr.c:3110:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] NVMe-oF transport - not sending Set Features - Host ID 00:13:18.016 [2024-07-15 15:54:46.753275] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to transport ready (timeout 30000 ms) 00:13:18.016 [2024-07-15 15:54:46.753279] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to ready (no timeout) 00:13:18.016 [2024-07-15 15:54:46.753294] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:13:18.016 [2024-07-15 15:54:46.761229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:13:18.016 [2024-07-15 15:54:46.761241] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:13:18.016 [2024-07-15 15:54:46.769229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:13:18.016 [2024-07-15 15:54:46.769247] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:13:18.016 [2024-07-15 15:54:46.777230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:13:18.016 [2024-07-15 15:54:46.777242] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:13:18.016 [2024-07-15 15:54:46.785230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:13:18.016 [2024-07-15 15:54:46.785244] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:13:18.016 [2024-07-15 15:54:46.785249] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:13:18.016 [2024-07-15 15:54:46.785252] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 
00:13:18.016 [2024-07-15 15:54:46.785255] nvme_pcie_common.c:1254:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:13:18.016 [2024-07-15 15:54:46.785261] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:13:18.016 [2024-07-15 15:54:46.785269] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:13:18.017 [2024-07-15 15:54:46.785273] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:13:18.017 [2024-07-15 15:54:46.785279] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:13:18.017 [2024-07-15 15:54:46.785285] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:13:18.017 [2024-07-15 15:54:46.785289] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:13:18.017 [2024-07-15 15:54:46.785294] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:13:18.017 [2024-07-15 15:54:46.785301] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:13:18.017 [2024-07-15 15:54:46.785305] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:13:18.017 [2024-07-15 15:54:46.785310] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:13:18.017 [2024-07-15 15:54:46.793231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:13:18.017 [2024-07-15 15:54:46.793244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:13:18.017 [2024-07-15 15:54:46.793253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:13:18.017 [2024-07-15 15:54:46.793259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:13:18.017 ===================================================== 00:13:18.017 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:13:18.017 ===================================================== 00:13:18.017 Controller Capabilities/Features 00:13:18.017 ================================ 00:13:18.017 Vendor ID: 4e58 00:13:18.017 Subsystem Vendor ID: 4e58 00:13:18.017 Serial Number: SPDK2 00:13:18.017 Model Number: SPDK bdev Controller 00:13:18.017 Firmware Version: 24.09 00:13:18.017 Recommended Arb Burst: 6 00:13:18.017 IEEE OUI Identifier: 8d 6b 50 00:13:18.017 Multi-path I/O 00:13:18.017 May have multiple subsystem ports: Yes 00:13:18.017 May have multiple controllers: Yes 00:13:18.017 Associated with SR-IOV VF: No 00:13:18.017 Max Data Transfer Size: 131072 00:13:18.017 Max Number of Namespaces: 32 00:13:18.017 Max Number of I/O Queues: 127 00:13:18.017 NVMe Specification Version (VS): 1.3 00:13:18.017 NVMe Specification Version (Identify): 1.3 00:13:18.017 Maximum Queue Entries: 256 00:13:18.017 Contiguous Queues Required: Yes 00:13:18.017 Arbitration Mechanisms 
Supported 00:13:18.017 Weighted Round Robin: Not Supported 00:13:18.017 Vendor Specific: Not Supported 00:13:18.017 Reset Timeout: 15000 ms 00:13:18.017 Doorbell Stride: 4 bytes 00:13:18.017 NVM Subsystem Reset: Not Supported 00:13:18.017 Command Sets Supported 00:13:18.017 NVM Command Set: Supported 00:13:18.017 Boot Partition: Not Supported 00:13:18.017 Memory Page Size Minimum: 4096 bytes 00:13:18.017 Memory Page Size Maximum: 4096 bytes 00:13:18.017 Persistent Memory Region: Not Supported 00:13:18.017 Optional Asynchronous Events Supported 00:13:18.017 Namespace Attribute Notices: Supported 00:13:18.017 Firmware Activation Notices: Not Supported 00:13:18.017 ANA Change Notices: Not Supported 00:13:18.017 PLE Aggregate Log Change Notices: Not Supported 00:13:18.017 LBA Status Info Alert Notices: Not Supported 00:13:18.017 EGE Aggregate Log Change Notices: Not Supported 00:13:18.017 Normal NVM Subsystem Shutdown event: Not Supported 00:13:18.017 Zone Descriptor Change Notices: Not Supported 00:13:18.017 Discovery Log Change Notices: Not Supported 00:13:18.017 Controller Attributes 00:13:18.017 128-bit Host Identifier: Supported 00:13:18.017 Non-Operational Permissive Mode: Not Supported 00:13:18.017 NVM Sets: Not Supported 00:13:18.017 Read Recovery Levels: Not Supported 00:13:18.017 Endurance Groups: Not Supported 00:13:18.017 Predictable Latency Mode: Not Supported 00:13:18.017 Traffic Based Keep ALive: Not Supported 00:13:18.017 Namespace Granularity: Not Supported 00:13:18.017 SQ Associations: Not Supported 00:13:18.017 UUID List: Not Supported 00:13:18.017 Multi-Domain Subsystem: Not Supported 00:13:18.017 Fixed Capacity Management: Not Supported 00:13:18.017 Variable Capacity Management: Not Supported 00:13:18.017 Delete Endurance Group: Not Supported 00:13:18.017 Delete NVM Set: Not Supported 00:13:18.017 Extended LBA Formats Supported: Not Supported 00:13:18.017 Flexible Data Placement Supported: Not Supported 00:13:18.017 00:13:18.017 Controller Memory Buffer Support 00:13:18.017 ================================ 00:13:18.017 Supported: No 00:13:18.017 00:13:18.017 Persistent Memory Region Support 00:13:18.017 ================================ 00:13:18.017 Supported: No 00:13:18.017 00:13:18.017 Admin Command Set Attributes 00:13:18.017 ============================ 00:13:18.017 Security Send/Receive: Not Supported 00:13:18.017 Format NVM: Not Supported 00:13:18.017 Firmware Activate/Download: Not Supported 00:13:18.017 Namespace Management: Not Supported 00:13:18.017 Device Self-Test: Not Supported 00:13:18.017 Directives: Not Supported 00:13:18.017 NVMe-MI: Not Supported 00:13:18.017 Virtualization Management: Not Supported 00:13:18.017 Doorbell Buffer Config: Not Supported 00:13:18.017 Get LBA Status Capability: Not Supported 00:13:18.017 Command & Feature Lockdown Capability: Not Supported 00:13:18.017 Abort Command Limit: 4 00:13:18.017 Async Event Request Limit: 4 00:13:18.017 Number of Firmware Slots: N/A 00:13:18.017 Firmware Slot 1 Read-Only: N/A 00:13:18.017 Firmware Activation Without Reset: N/A 00:13:18.017 Multiple Update Detection Support: N/A 00:13:18.017 Firmware Update Granularity: No Information Provided 00:13:18.017 Per-Namespace SMART Log: No 00:13:18.017 Asymmetric Namespace Access Log Page: Not Supported 00:13:18.017 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:13:18.017 Command Effects Log Page: Supported 00:13:18.017 Get Log Page Extended Data: Supported 00:13:18.017 Telemetry Log Pages: Not Supported 00:13:18.017 Persistent Event Log Pages: Not Supported 
00:13:18.017 Supported Log Pages Log Page: May Support 00:13:18.017 Commands Supported & Effects Log Page: Not Supported 00:13:18.017 Feature Identifiers & Effects Log Page:May Support 00:13:18.017 NVMe-MI Commands & Effects Log Page: May Support 00:13:18.017 Data Area 4 for Telemetry Log: Not Supported 00:13:18.017 Error Log Page Entries Supported: 128 00:13:18.017 Keep Alive: Supported 00:13:18.017 Keep Alive Granularity: 10000 ms 00:13:18.017 00:13:18.017 NVM Command Set Attributes 00:13:18.017 ========================== 00:13:18.017 Submission Queue Entry Size 00:13:18.017 Max: 64 00:13:18.017 Min: 64 00:13:18.017 Completion Queue Entry Size 00:13:18.017 Max: 16 00:13:18.017 Min: 16 00:13:18.018 Number of Namespaces: 32 00:13:18.018 Compare Command: Supported 00:13:18.018 Write Uncorrectable Command: Not Supported 00:13:18.018 Dataset Management Command: Supported 00:13:18.018 Write Zeroes Command: Supported 00:13:18.018 Set Features Save Field: Not Supported 00:13:18.018 Reservations: Not Supported 00:13:18.018 Timestamp: Not Supported 00:13:18.018 Copy: Supported 00:13:18.018 Volatile Write Cache: Present 00:13:18.018 Atomic Write Unit (Normal): 1 00:13:18.018 Atomic Write Unit (PFail): 1 00:13:18.018 Atomic Compare & Write Unit: 1 00:13:18.018 Fused Compare & Write: Supported 00:13:18.018 Scatter-Gather List 00:13:18.018 SGL Command Set: Supported (Dword aligned) 00:13:18.018 SGL Keyed: Not Supported 00:13:18.018 SGL Bit Bucket Descriptor: Not Supported 00:13:18.018 SGL Metadata Pointer: Not Supported 00:13:18.018 Oversized SGL: Not Supported 00:13:18.018 SGL Metadata Address: Not Supported 00:13:18.018 SGL Offset: Not Supported 00:13:18.018 Transport SGL Data Block: Not Supported 00:13:18.018 Replay Protected Memory Block: Not Supported 00:13:18.018 00:13:18.018 Firmware Slot Information 00:13:18.018 ========================= 00:13:18.018 Active slot: 1 00:13:18.018 Slot 1 Firmware Revision: 24.09 00:13:18.018 00:13:18.018 00:13:18.018 Commands Supported and Effects 00:13:18.018 ============================== 00:13:18.018 Admin Commands 00:13:18.018 -------------- 00:13:18.018 Get Log Page (02h): Supported 00:13:18.018 Identify (06h): Supported 00:13:18.018 Abort (08h): Supported 00:13:18.018 Set Features (09h): Supported 00:13:18.018 Get Features (0Ah): Supported 00:13:18.018 Asynchronous Event Request (0Ch): Supported 00:13:18.018 Keep Alive (18h): Supported 00:13:18.018 I/O Commands 00:13:18.018 ------------ 00:13:18.018 Flush (00h): Supported LBA-Change 00:13:18.018 Write (01h): Supported LBA-Change 00:13:18.018 Read (02h): Supported 00:13:18.018 Compare (05h): Supported 00:13:18.018 Write Zeroes (08h): Supported LBA-Change 00:13:18.018 Dataset Management (09h): Supported LBA-Change 00:13:18.018 Copy (19h): Supported LBA-Change 00:13:18.018 00:13:18.018 Error Log 00:13:18.018 ========= 00:13:18.018 00:13:18.018 Arbitration 00:13:18.018 =========== 00:13:18.018 Arbitration Burst: 1 00:13:18.018 00:13:18.018 Power Management 00:13:18.018 ================ 00:13:18.018 Number of Power States: 1 00:13:18.018 Current Power State: Power State #0 00:13:18.018 Power State #0: 00:13:18.018 Max Power: 0.00 W 00:13:18.018 Non-Operational State: Operational 00:13:18.018 Entry Latency: Not Reported 00:13:18.018 Exit Latency: Not Reported 00:13:18.018 Relative Read Throughput: 0 00:13:18.018 Relative Read Latency: 0 00:13:18.018 Relative Write Throughput: 0 00:13:18.018 Relative Write Latency: 0 00:13:18.018 Idle Power: Not Reported 00:13:18.018 Active Power: Not Reported 00:13:18.018 
Non-Operational Permissive Mode: Not Supported 00:13:18.018 00:13:18.018 Health Information 00:13:18.018 ================== 00:13:18.018 Critical Warnings: 00:13:18.018 Available Spare Space: OK 00:13:18.018 Temperature: OK 00:13:18.018 Device Reliability: OK 00:13:18.018 Read Only: No 00:13:18.018 Volatile Memory Backup: OK 00:13:18.018 Current Temperature: 0 Kelvin (-273 Celsius) 00:13:18.018 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:13:18.018 Available Spare: 0% 00:13:18.018 Available Sp[2024-07-15 15:54:46.793341] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:13:18.018 [2024-07-15 15:54:46.801229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:13:18.018 [2024-07-15 15:54:46.801258] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Prepare to destruct SSD 00:13:18.018 [2024-07-15 15:54:46.801266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:18.018 [2024-07-15 15:54:46.801272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:18.018 [2024-07-15 15:54:46.801277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:18.018 [2024-07-15 15:54:46.801282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:18.018 [2024-07-15 15:54:46.801331] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:13:18.018 [2024-07-15 15:54:46.801342] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 00:13:18.018 [2024-07-15 15:54:46.802340] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:13:18.018 [2024-07-15 15:54:46.802381] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] RTD3E = 0 us 00:13:18.018 [2024-07-15 15:54:46.802387] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown timeout = 10000 ms 00:13:18.018 [2024-07-15 15:54:46.803346] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:13:18.018 [2024-07-15 15:54:46.803357] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown complete in 0 milliseconds 00:13:18.018 [2024-07-15 15:54:46.803405] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:13:18.018 [2024-07-15 15:54:46.806230] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:13:18.018 are Threshold: 0% 00:13:18.018 Life Percentage Used: 0% 00:13:18.018 Data Units Read: 0 00:13:18.018 Data Units Written: 0 00:13:18.018 Host Read Commands: 0 00:13:18.018 Host Write Commands: 0 00:13:18.018 Controller Busy Time: 0 minutes 00:13:18.018 Power Cycles: 0 00:13:18.018 Power On Hours: 0 hours 00:13:18.018 Unsafe Shutdowns: 0 00:13:18.018 Unrecoverable Media 
Errors: 0 00:13:18.018 Lifetime Error Log Entries: 0 00:13:18.018 Warning Temperature Time: 0 minutes 00:13:18.018 Critical Temperature Time: 0 minutes 00:13:18.018 00:13:18.018 Number of Queues 00:13:18.019 ================ 00:13:18.019 Number of I/O Submission Queues: 127 00:13:18.019 Number of I/O Completion Queues: 127 00:13:18.019 00:13:18.019 Active Namespaces 00:13:18.019 ================= 00:13:18.019 Namespace ID:1 00:13:18.019 Error Recovery Timeout: Unlimited 00:13:18.019 Command Set Identifier: NVM (00h) 00:13:18.019 Deallocate: Supported 00:13:18.019 Deallocated/Unwritten Error: Not Supported 00:13:18.019 Deallocated Read Value: Unknown 00:13:18.019 Deallocate in Write Zeroes: Not Supported 00:13:18.019 Deallocated Guard Field: 0xFFFF 00:13:18.019 Flush: Supported 00:13:18.019 Reservation: Supported 00:13:18.019 Namespace Sharing Capabilities: Multiple Controllers 00:13:18.019 Size (in LBAs): 131072 (0GiB) 00:13:18.019 Capacity (in LBAs): 131072 (0GiB) 00:13:18.019 Utilization (in LBAs): 131072 (0GiB) 00:13:18.019 NGUID: 2B79A51708B340DE892EB7FCE5EC7B85 00:13:18.019 UUID: 2b79a517-08b3-40de-892e-b7fce5ec7b85 00:13:18.019 Thin Provisioning: Not Supported 00:13:18.019 Per-NS Atomic Units: Yes 00:13:18.019 Atomic Boundary Size (Normal): 0 00:13:18.019 Atomic Boundary Size (PFail): 0 00:13:18.019 Atomic Boundary Offset: 0 00:13:18.019 Maximum Single Source Range Length: 65535 00:13:18.019 Maximum Copy Length: 65535 00:13:18.019 Maximum Source Range Count: 1 00:13:18.019 NGUID/EUI64 Never Reused: No 00:13:18.019 Namespace Write Protected: No 00:13:18.019 Number of LBA Formats: 1 00:13:18.019 Current LBA Format: LBA Format #00 00:13:18.019 LBA Format #00: Data Size: 512 Metadata Size: 0 00:13:18.019 00:13:18.019 15:54:46 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:13:18.019 EAL: No free 2048 kB hugepages reported on node 1 00:13:18.278 [2024-07-15 15:54:47.013713] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:13:23.603 Initializing NVMe Controllers 00:13:23.603 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:13:23.603 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:13:23.603 Initialization complete. Launching workers. 
00:13:23.603 ======================================================== 00:13:23.603 Latency(us) 00:13:23.603 Device Information : IOPS MiB/s Average min max 00:13:23.603 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 39932.66 155.99 3205.21 953.54 6820.93 00:13:23.603 ======================================================== 00:13:23.603 Total : 39932.66 155.99 3205.21 953.54 6820.93 00:13:23.603 00:13:23.603 [2024-07-15 15:54:52.119489] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:13:23.603 15:54:52 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:13:23.603 EAL: No free 2048 kB hugepages reported on node 1 00:13:23.603 [2024-07-15 15:54:52.338205] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:13:28.867 Initializing NVMe Controllers 00:13:28.867 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:13:28.867 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:13:28.867 Initialization complete. Launching workers. 00:13:28.867 ======================================================== 00:13:28.867 Latency(us) 00:13:28.867 Device Information : IOPS MiB/s Average min max 00:13:28.867 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 39971.00 156.14 3202.59 970.83 8036.12 00:13:28.867 ======================================================== 00:13:28.867 Total : 39971.00 156.14 3202.59 970.83 8036.12 00:13:28.867 00:13:28.867 [2024-07-15 15:54:57.359515] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:13:28.867 15:54:57 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:13:28.867 EAL: No free 2048 kB hugepages reported on node 1 00:13:28.867 [2024-07-15 15:54:57.546943] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:13:34.135 [2024-07-15 15:55:02.683522] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:13:34.135 Initializing NVMe Controllers 00:13:34.135 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:13:34.135 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:13:34.135 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:13:34.135 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 00:13:34.135 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:13:34.135 Initialization complete. Launching workers. 
00:13:34.135 Starting thread on core 2 00:13:34.135 Starting thread on core 3 00:13:34.135 Starting thread on core 1 00:13:34.135 15:55:02 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:13:34.135 EAL: No free 2048 kB hugepages reported on node 1 00:13:34.135 [2024-07-15 15:55:02.966654] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:13:37.419 [2024-07-15 15:55:06.022930] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:13:37.419 Initializing NVMe Controllers 00:13:37.419 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:13:37.419 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:13:37.419 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:13:37.419 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:13:37.419 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:13:37.419 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:13:37.419 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:13:37.419 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:13:37.419 Initialization complete. Launching workers. 00:13:37.419 Starting thread on core 1 with urgent priority queue 00:13:37.419 Starting thread on core 2 with urgent priority queue 00:13:37.419 Starting thread on core 3 with urgent priority queue 00:13:37.419 Starting thread on core 0 with urgent priority queue 00:13:37.419 SPDK bdev Controller (SPDK2 ) core 0: 10572.33 IO/s 9.46 secs/100000 ios 00:13:37.419 SPDK bdev Controller (SPDK2 ) core 1: 9105.33 IO/s 10.98 secs/100000 ios 00:13:37.419 SPDK bdev Controller (SPDK2 ) core 2: 9428.33 IO/s 10.61 secs/100000 ios 00:13:37.420 SPDK bdev Controller (SPDK2 ) core 3: 9082.00 IO/s 11.01 secs/100000 ios 00:13:37.420 ======================================================== 00:13:37.420 00:13:37.420 15:55:06 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:13:37.420 EAL: No free 2048 kB hugepages reported on node 1 00:13:37.420 [2024-07-15 15:55:06.297714] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:13:37.420 Initializing NVMe Controllers 00:13:37.420 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:13:37.420 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:13:37.420 Namespace ID: 1 size: 0GB 00:13:37.420 Initialization complete. 00:13:37.420 INFO: using host memory buffer for IO 00:13:37.420 Hello world! 
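For reference, a minimal sketch of how the example binaries exercised above are pointed at the vfio-user endpoint. The transport string and flags are taken verbatim from the runs logged here; the SPDK_DIR variable is an assumption standing in for the build tree used by this job.

    # Assumption: SPDK_DIR points at an SPDK build tree (for this job,
    # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk).
    SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    TRID='trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2'

    # 5-second 4K read benchmark, queue depth 128, one core (mask 0x2),
    # matching the spdk_nvme_perf invocations in the trace above.
    $SPDK_DIR/build/bin/spdk_nvme_perf -r "$TRID" -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2

    # Basic connect/identify/IO smoke test, as run by the hello_world step.
    $SPDK_DIR/build/examples/hello_world -d 256 -g -r "$TRID"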
00:13:37.420 [2024-07-15 15:55:06.307771] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:13:37.420 15:55:06 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:13:37.676 EAL: No free 2048 kB hugepages reported on node 1 00:13:37.676 [2024-07-15 15:55:06.578165] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:13:39.051 Initializing NVMe Controllers 00:13:39.051 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:13:39.051 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:13:39.051 Initialization complete. Launching workers. 00:13:39.051 submit (in ns) avg, min, max = 7189.0, 3225.2, 4000383.5 00:13:39.051 complete (in ns) avg, min, max = 20499.9, 1781.7, 4007580.9 00:13:39.051 00:13:39.051 Submit histogram 00:13:39.051 ================ 00:13:39.051 Range in us Cumulative Count 00:13:39.051 3.214 - 3.228: 0.0062% ( 1) 00:13:39.051 3.228 - 3.242: 0.0497% ( 7) 00:13:39.051 3.242 - 3.256: 0.0559% ( 1) 00:13:39.051 3.256 - 3.270: 0.0807% ( 4) 00:13:39.051 3.270 - 3.283: 0.2110% ( 21) 00:13:39.051 3.283 - 3.297: 1.0738% ( 139) 00:13:39.051 3.297 - 3.311: 4.1835% ( 501) 00:13:39.051 3.311 - 3.325: 7.7711% ( 578) 00:13:39.051 3.325 - 3.339: 11.9918% ( 680) 00:13:39.051 3.339 - 3.353: 16.7712% ( 770) 00:13:39.051 3.353 - 3.367: 21.8546% ( 819) 00:13:39.051 3.367 - 3.381: 26.9133% ( 815) 00:13:39.051 3.381 - 3.395: 32.9216% ( 968) 00:13:39.051 3.395 - 3.409: 38.0796% ( 831) 00:13:39.051 3.409 - 3.423: 42.6293% ( 733) 00:13:39.051 3.423 - 3.437: 46.3845% ( 605) 00:13:39.051 3.437 - 3.450: 52.1569% ( 930) 00:13:39.051 3.450 - 3.464: 58.3701% ( 1001) 00:13:39.051 3.464 - 3.478: 62.5039% ( 666) 00:13:39.051 3.478 - 3.492: 66.8674% ( 703) 00:13:39.051 3.492 - 3.506: 72.2115% ( 861) 00:13:39.051 3.506 - 3.520: 77.0592% ( 781) 00:13:39.051 3.520 - 3.534: 80.4606% ( 548) 00:13:39.051 3.534 - 3.548: 82.6888% ( 359) 00:13:39.051 3.548 - 3.562: 84.7744% ( 336) 00:13:39.051 3.562 - 3.590: 87.3192% ( 410) 00:13:39.051 3.590 - 3.617: 88.7096% ( 224) 00:13:39.051 3.617 - 3.645: 90.3048% ( 257) 00:13:39.051 3.645 - 3.673: 91.8875% ( 255) 00:13:39.051 3.673 - 3.701: 93.3834% ( 241) 00:13:39.051 3.701 - 3.729: 94.9103% ( 246) 00:13:39.051 3.729 - 3.757: 96.3441% ( 231) 00:13:39.051 3.757 - 3.784: 97.6848% ( 216) 00:13:39.051 3.784 - 3.812: 98.4855% ( 129) 00:13:39.051 3.812 - 3.840: 99.0379% ( 89) 00:13:39.051 3.840 - 3.868: 99.3421% ( 49) 00:13:39.051 3.868 - 3.896: 99.5531% ( 34) 00:13:39.052 3.896 - 3.923: 99.6338% ( 13) 00:13:39.052 3.923 - 3.951: 99.6400% ( 1) 00:13:39.052 3.951 - 3.979: 99.6524% ( 2) 00:13:39.052 3.979 - 4.007: 99.6586% ( 1) 00:13:39.052 4.007 - 4.035: 99.6648% ( 1) 00:13:39.052 4.063 - 4.090: 99.6710% ( 1) 00:13:39.052 5.454 - 5.482: 99.6772% ( 1) 00:13:39.052 5.482 - 5.510: 99.6834% ( 1) 00:13:39.052 5.593 - 5.621: 99.6959% ( 2) 00:13:39.052 5.649 - 5.677: 99.7021% ( 1) 00:13:39.052 5.843 - 5.871: 99.7083% ( 1) 00:13:39.052 5.871 - 5.899: 99.7145% ( 1) 00:13:39.052 6.150 - 6.177: 99.7207% ( 1) 00:13:39.052 6.372 - 6.400: 99.7269% ( 1) 00:13:39.052 6.428 - 6.456: 99.7393% ( 2) 00:13:39.052 6.483 - 6.511: 99.7455% ( 1) 00:13:39.052 6.539 - 6.567: 99.7517% ( 1) 00:13:39.052 6.567 - 6.595: 99.7641% ( 2) 00:13:39.052 6.595 - 6.623: 99.7703% ( 1) 00:13:39.052 6.623 - 
6.650: 99.7828% ( 2) 00:13:39.052 6.706 - 6.734: 99.7952% ( 2) 00:13:39.052 6.762 - 6.790: 99.8076% ( 2) 00:13:39.052 6.901 - 6.929: 99.8138% ( 1) 00:13:39.052 7.012 - 7.040: 99.8324% ( 3) 00:13:39.052 7.040 - 7.068: 99.8386% ( 1) 00:13:39.052 7.068 - 7.096: 99.8448% ( 1) 00:13:39.052 7.457 - 7.513: 99.8510% ( 1) 00:13:39.052 7.513 - 7.569: 99.8634% ( 2) 00:13:39.052 7.569 - 7.624: 99.8697% ( 1) 00:13:39.052 7.624 - 7.680: 99.8821% ( 2) 00:13:39.052 7.791 - 7.847: 99.8883% ( 1) 00:13:39.052 8.070 - 8.125: 99.8945% ( 1) 00:13:39.052 9.016 - 9.071: 99.9007% ( 1) 00:13:39.052 9.127 - 9.183: 99.9069% ( 1) 00:13:39.052 3989.148 - 4017.642: 100.0000% ( 15) 00:13:39.052 00:13:39.052 Complete histogram 00:13:39.052 ================== 00:13:39.052 Range in us Cumulative Count 00:13:39.052 1.781 - 1.795: 0.0248% ( 4) 00:13:39.052 1.795 - 1.809: 0.0310% ( 1) 00:13:39.052 1.809 - 1.823: 0.0497% ( 3) 00:13:39.052 1.823 - 1.837: 0.9683% ( 148) 00:13:39.052 1.837 - 1.850: 3.2214% ( 363) 00:13:39.052 1.850 - [2024-07-15 15:55:07.672270] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:13:39.052 1.864: 4.4442% ( 197) 00:13:39.052 1.864 - 1.878: 5.5614% ( 180) 00:13:39.052 1.878 - 1.892: 33.2257% ( 4457) 00:13:39.052 1.892 - 1.906: 81.5592% ( 7787) 00:13:39.052 1.906 - 1.920: 92.7937% ( 1810) 00:13:39.052 1.920 - 1.934: 95.3945% ( 419) 00:13:39.052 1.934 - 1.948: 96.0276% ( 102) 00:13:39.052 1.948 - 1.962: 96.7351% ( 114) 00:13:39.052 1.962 - 1.976: 97.9021% ( 188) 00:13:39.052 1.976 - 1.990: 98.8703% ( 156) 00:13:39.052 1.990 - 2.003: 99.1559% ( 46) 00:13:39.052 2.003 - 2.017: 99.2676% ( 18) 00:13:39.052 2.017 - 2.031: 99.2924% ( 4) 00:13:39.052 2.031 - 2.045: 99.2986% ( 1) 00:13:39.052 2.045 - 2.059: 99.3172% ( 3) 00:13:39.052 2.059 - 2.073: 99.3421% ( 4) 00:13:39.052 2.073 - 2.087: 99.3483% ( 1) 00:13:39.052 2.101 - 2.115: 99.3545% ( 1) 00:13:39.052 2.143 - 2.157: 99.3607% ( 1) 00:13:39.052 2.379 - 2.393: 99.3669% ( 1) 00:13:39.052 2.477 - 2.490: 99.3731% ( 1) 00:13:39.052 3.757 - 3.784: 99.3793% ( 1) 00:13:39.052 3.923 - 3.951: 99.3855% ( 1) 00:13:39.052 4.007 - 4.035: 99.3917% ( 1) 00:13:39.052 4.146 - 4.174: 99.3979% ( 1) 00:13:39.052 4.257 - 4.285: 99.4041% ( 1) 00:13:39.052 4.369 - 4.397: 99.4103% ( 1) 00:13:39.052 4.591 - 4.619: 99.4228% ( 2) 00:13:39.052 4.619 - 4.647: 99.4290% ( 1) 00:13:39.052 4.675 - 4.703: 99.4352% ( 1) 00:13:39.052 4.730 - 4.758: 99.4414% ( 1) 00:13:39.052 4.870 - 4.897: 99.4476% ( 1) 00:13:39.052 4.953 - 4.981: 99.4538% ( 1) 00:13:39.052 4.981 - 5.009: 99.4600% ( 1) 00:13:39.052 5.009 - 5.037: 99.4662% ( 1) 00:13:39.052 5.203 - 5.231: 99.4724% ( 1) 00:13:39.052 5.287 - 5.315: 99.4848% ( 2) 00:13:39.052 5.398 - 5.426: 99.4910% ( 1) 00:13:39.052 5.537 - 5.565: 99.4972% ( 1) 00:13:39.052 5.565 - 5.593: 99.5034% ( 1) 00:13:39.052 5.732 - 5.760: 99.5097% ( 1) 00:13:39.052 5.843 - 5.871: 99.5159% ( 1) 00:13:39.052 6.066 - 6.094: 99.5221% ( 1) 00:13:39.052 6.233 - 6.261: 99.5283% ( 1) 00:13:39.052 6.623 - 6.650: 99.5345% ( 1) 00:13:39.052 3989.148 - 4017.642: 100.0000% ( 75) 00:13:39.052 00:13:39.052 15:55:07 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:13:39.052 15:55:07 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:13:39.052 15:55:07 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 00:13:39.052 15:55:07 
nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:13:39.052 15:55:07 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:13:39.052 [ 00:13:39.052 { 00:13:39.052 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:13:39.052 "subtype": "Discovery", 00:13:39.052 "listen_addresses": [], 00:13:39.052 "allow_any_host": true, 00:13:39.052 "hosts": [] 00:13:39.052 }, 00:13:39.052 { 00:13:39.053 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:13:39.053 "subtype": "NVMe", 00:13:39.053 "listen_addresses": [ 00:13:39.053 { 00:13:39.053 "trtype": "VFIOUSER", 00:13:39.053 "adrfam": "IPv4", 00:13:39.053 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:13:39.053 "trsvcid": "0" 00:13:39.053 } 00:13:39.053 ], 00:13:39.053 "allow_any_host": true, 00:13:39.053 "hosts": [], 00:13:39.053 "serial_number": "SPDK1", 00:13:39.053 "model_number": "SPDK bdev Controller", 00:13:39.053 "max_namespaces": 32, 00:13:39.053 "min_cntlid": 1, 00:13:39.053 "max_cntlid": 65519, 00:13:39.053 "namespaces": [ 00:13:39.053 { 00:13:39.053 "nsid": 1, 00:13:39.053 "bdev_name": "Malloc1", 00:13:39.053 "name": "Malloc1", 00:13:39.053 "nguid": "BD9AD292F7754983926228907C2BFE41", 00:13:39.053 "uuid": "bd9ad292-f775-4983-9262-28907c2bfe41" 00:13:39.053 }, 00:13:39.053 { 00:13:39.053 "nsid": 2, 00:13:39.053 "bdev_name": "Malloc3", 00:13:39.053 "name": "Malloc3", 00:13:39.053 "nguid": "E91FACFB726A4FF5A7ECE54EB0BB2ACC", 00:13:39.053 "uuid": "e91facfb-726a-4ff5-a7ec-e54eb0bb2acc" 00:13:39.053 } 00:13:39.053 ] 00:13:39.053 }, 00:13:39.053 { 00:13:39.053 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:13:39.053 "subtype": "NVMe", 00:13:39.053 "listen_addresses": [ 00:13:39.053 { 00:13:39.053 "trtype": "VFIOUSER", 00:13:39.053 "adrfam": "IPv4", 00:13:39.053 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:13:39.053 "trsvcid": "0" 00:13:39.053 } 00:13:39.053 ], 00:13:39.053 "allow_any_host": true, 00:13:39.053 "hosts": [], 00:13:39.053 "serial_number": "SPDK2", 00:13:39.053 "model_number": "SPDK bdev Controller", 00:13:39.053 "max_namespaces": 32, 00:13:39.053 "min_cntlid": 1, 00:13:39.053 "max_cntlid": 65519, 00:13:39.053 "namespaces": [ 00:13:39.053 { 00:13:39.053 "nsid": 1, 00:13:39.053 "bdev_name": "Malloc2", 00:13:39.053 "name": "Malloc2", 00:13:39.053 "nguid": "2B79A51708B340DE892EB7FCE5EC7B85", 00:13:39.053 "uuid": "2b79a517-08b3-40de-892e-b7fce5ec7b85" 00:13:39.053 } 00:13:39.053 ] 00:13:39.053 } 00:13:39.053 ] 00:13:39.053 15:55:07 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:13:39.053 15:55:07 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:13:39.053 15:55:07 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=3695514 00:13:39.053 15:55:07 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:13:39.053 15:55:07 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1265 -- # local i=0 00:13:39.053 15:55:07 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:13:39.053 15:55:07 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:13:39.053 15:55:07 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # return 0 00:13:39.053 15:55:07 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:13:39.053 15:55:07 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:13:39.053 EAL: No free 2048 kB hugepages reported on node 1 00:13:39.312 [2024-07-15 15:55:08.030657] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:13:39.312 Malloc4 00:13:39.312 15:55:08 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:13:39.569 [2024-07-15 15:55:08.274507] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:13:39.569 15:55:08 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:13:39.569 Asynchronous Event Request test 00:13:39.569 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:13:39.569 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:13:39.569 Registering asynchronous event callbacks... 00:13:39.569 Starting namespace attribute notice tests for all controllers... 00:13:39.569 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:13:39.569 aer_cb - Changed Namespace 00:13:39.569 Cleaning up... 00:13:39.569 [ 00:13:39.569 { 00:13:39.569 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:13:39.569 "subtype": "Discovery", 00:13:39.569 "listen_addresses": [], 00:13:39.569 "allow_any_host": true, 00:13:39.569 "hosts": [] 00:13:39.569 }, 00:13:39.569 { 00:13:39.569 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:13:39.569 "subtype": "NVMe", 00:13:39.569 "listen_addresses": [ 00:13:39.569 { 00:13:39.569 "trtype": "VFIOUSER", 00:13:39.569 "adrfam": "IPv4", 00:13:39.569 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:13:39.569 "trsvcid": "0" 00:13:39.569 } 00:13:39.569 ], 00:13:39.569 "allow_any_host": true, 00:13:39.569 "hosts": [], 00:13:39.569 "serial_number": "SPDK1", 00:13:39.569 "model_number": "SPDK bdev Controller", 00:13:39.569 "max_namespaces": 32, 00:13:39.569 "min_cntlid": 1, 00:13:39.569 "max_cntlid": 65519, 00:13:39.569 "namespaces": [ 00:13:39.569 { 00:13:39.569 "nsid": 1, 00:13:39.569 "bdev_name": "Malloc1", 00:13:39.569 "name": "Malloc1", 00:13:39.569 "nguid": "BD9AD292F7754983926228907C2BFE41", 00:13:39.569 "uuid": "bd9ad292-f775-4983-9262-28907c2bfe41" 00:13:39.569 }, 00:13:39.569 { 00:13:39.569 "nsid": 2, 00:13:39.569 "bdev_name": "Malloc3", 00:13:39.569 "name": "Malloc3", 00:13:39.570 "nguid": "E91FACFB726A4FF5A7ECE54EB0BB2ACC", 00:13:39.570 "uuid": "e91facfb-726a-4ff5-a7ec-e54eb0bb2acc" 00:13:39.570 } 00:13:39.570 ] 00:13:39.570 }, 00:13:39.570 { 00:13:39.570 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:13:39.570 "subtype": "NVMe", 00:13:39.570 "listen_addresses": [ 00:13:39.570 { 00:13:39.570 "trtype": "VFIOUSER", 00:13:39.570 "adrfam": "IPv4", 00:13:39.570 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:13:39.570 "trsvcid": "0" 00:13:39.570 } 00:13:39.570 ], 00:13:39.570 "allow_any_host": true, 00:13:39.570 "hosts": [], 00:13:39.570 "serial_number": "SPDK2", 00:13:39.570 "model_number": "SPDK bdev Controller", 00:13:39.570 
"max_namespaces": 32, 00:13:39.570 "min_cntlid": 1, 00:13:39.570 "max_cntlid": 65519, 00:13:39.570 "namespaces": [ 00:13:39.570 { 00:13:39.570 "nsid": 1, 00:13:39.570 "bdev_name": "Malloc2", 00:13:39.570 "name": "Malloc2", 00:13:39.570 "nguid": "2B79A51708B340DE892EB7FCE5EC7B85", 00:13:39.570 "uuid": "2b79a517-08b3-40de-892e-b7fce5ec7b85" 00:13:39.570 }, 00:13:39.570 { 00:13:39.570 "nsid": 2, 00:13:39.570 "bdev_name": "Malloc4", 00:13:39.570 "name": "Malloc4", 00:13:39.570 "nguid": "6191B8DD91C5421980096763E4678BA7", 00:13:39.570 "uuid": "6191b8dd-91c5-4219-8009-6763e4678ba7" 00:13:39.570 } 00:13:39.570 ] 00:13:39.570 } 00:13:39.570 ] 00:13:39.570 15:55:08 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 3695514 00:13:39.570 15:55:08 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:13:39.570 15:55:08 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 3687866 00:13:39.570 15:55:08 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@948 -- # '[' -z 3687866 ']' 00:13:39.570 15:55:08 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@952 -- # kill -0 3687866 00:13:39.570 15:55:08 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@953 -- # uname 00:13:39.570 15:55:08 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:39.570 15:55:08 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3687866 00:13:39.828 15:55:08 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:13:39.828 15:55:08 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:13:39.828 15:55:08 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3687866' 00:13:39.828 killing process with pid 3687866 00:13:39.828 15:55:08 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@967 -- # kill 3687866 00:13:39.828 15:55:08 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@972 -- # wait 3687866 00:13:40.087 15:55:08 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:13:40.087 15:55:08 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:13:40.087 15:55:08 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:13:40.087 15:55:08 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:13:40.087 15:55:08 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:13:40.087 15:55:08 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=3695728 00:13:40.087 15:55:08 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 3695728' 00:13:40.087 Process pid: 3695728 00:13:40.087 15:55:08 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:13:40.087 15:55:08 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:13:40.087 15:55:08 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 3695728 00:13:40.087 15:55:08 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@829 -- # '[' -z 3695728 ']' 00:13:40.087 15:55:08 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:40.087 15:55:08 
nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:40.087 15:55:08 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:40.087 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:40.087 15:55:08 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:40.087 15:55:08 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:13:40.087 [2024-07-15 15:55:08.828900] thread.c:2948:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:13:40.087 [2024-07-15 15:55:08.829756] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 00:13:40.087 [2024-07-15 15:55:08.829794] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:40.087 EAL: No free 2048 kB hugepages reported on node 1 00:13:40.087 [2024-07-15 15:55:08.882146] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:40.087 [2024-07-15 15:55:08.950603] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:40.087 [2024-07-15 15:55:08.950644] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:40.087 [2024-07-15 15:55:08.950651] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:40.087 [2024-07-15 15:55:08.950656] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:40.087 [2024-07-15 15:55:08.950661] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:40.087 [2024-07-15 15:55:08.950722] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:40.087 [2024-07-15 15:55:08.950816] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:40.087 [2024-07-15 15:55:08.950904] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:13:40.087 [2024-07-15 15:55:08.950905] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:40.345 [2024-07-15 15:55:09.029597] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:13:40.345 [2024-07-15 15:55:09.029740] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:13:40.345 [2024-07-15 15:55:09.029964] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:13:40.345 [2024-07-15 15:55:09.030257] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:13:40.345 [2024-07-15 15:55:09.030478] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
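The trace that follows provisions the vfio-user subsystems over the target's RPC socket. Collected in one place, the per-device sequence is roughly the sketch below; the paths, NQNs, and flags are the ones used by this job, and rpc.py is assumed to talk to the default /var/tmp/spdk.sock.

    RPC=$SPDK_DIR/scripts/rpc.py   # assumes SPDK_DIR as above

    # Interrupt-mode vfio-user transport (-M -I, as passed by the script).
    $RPC nvmf_create_transport -t VFIOUSER -M -I

    # One malloc bdev + subsystem + listener per device.
    mkdir -p /var/run/vfio-user/domain/vfio-user1/1
    $RPC bdev_malloc_create 64 512 -b Malloc1
    $RPC nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1
    $RPC nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1
    $RPC nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER \
        -a /var/run/vfio-user/domain/vfio-user1/1 -s 0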
00:13:40.910 15:55:09 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:40.910 15:55:09 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@862 -- # return 0 00:13:40.910 15:55:09 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:13:41.843 15:55:10 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:13:42.102 15:55:10 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:13:42.102 15:55:10 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:13:42.102 15:55:10 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:13:42.102 15:55:10 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:13:42.102 15:55:10 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:13:42.102 Malloc1 00:13:42.102 15:55:11 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:13:42.360 15:55:11 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:13:42.618 15:55:11 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:13:42.877 15:55:11 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:13:42.877 15:55:11 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:13:42.877 15:55:11 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:13:42.877 Malloc2 00:13:42.877 15:55:11 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:13:43.135 15:55:11 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:13:43.433 15:55:12 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:13:43.433 15:55:12 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:13:43.433 15:55:12 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 3695728 00:13:43.433 15:55:12 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@948 -- # '[' -z 3695728 ']' 00:13:43.433 15:55:12 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@952 -- # kill -0 3695728 00:13:43.433 15:55:12 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@953 -- # uname 00:13:43.433 15:55:12 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:43.433 15:55:12 
nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3695728 00:13:43.433 15:55:12 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:13:43.692 15:55:12 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:13:43.692 15:55:12 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3695728' 00:13:43.692 killing process with pid 3695728 00:13:43.692 15:55:12 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@967 -- # kill 3695728 00:13:43.692 15:55:12 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@972 -- # wait 3695728 00:13:43.692 15:55:12 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:13:43.692 15:55:12 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:13:43.692 00:13:43.692 real 0m51.258s 00:13:43.692 user 3m22.994s 00:13:43.692 sys 0m3.604s 00:13:43.692 15:55:12 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:43.692 15:55:12 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:13:43.692 ************************************ 00:13:43.692 END TEST nvmf_vfio_user 00:13:43.692 ************************************ 00:13:43.692 15:55:12 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:13:43.692 15:55:12 nvmf_tcp -- nvmf/nvmf.sh@42 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:13:43.692 15:55:12 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:13:43.692 15:55:12 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:43.692 15:55:12 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:43.950 ************************************ 00:13:43.951 START TEST nvmf_vfio_user_nvme_compliance 00:13:43.951 ************************************ 00:13:43.951 15:55:12 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:13:43.951 * Looking for test storage... 
00:13:43.951 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance 00:13:43.951 15:55:12 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:43.951 15:55:12 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # uname -s 00:13:43.951 15:55:12 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:43.951 15:55:12 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:43.951 15:55:12 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:43.951 15:55:12 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:43.951 15:55:12 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:43.951 15:55:12 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:43.951 15:55:12 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:43.951 15:55:12 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:43.951 15:55:12 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:43.951 15:55:12 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:43.951 15:55:12 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:13:43.951 15:55:12 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:13:43.951 15:55:12 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:43.951 15:55:12 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:43.951 15:55:12 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:43.951 15:55:12 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:43.951 15:55:12 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:43.951 15:55:12 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:43.951 15:55:12 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:43.951 15:55:12 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:43.951 15:55:12 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:43.951 15:55:12 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:43.951 15:55:12 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:43.951 15:55:12 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@5 -- # export PATH 00:13:43.951 15:55:12 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:43.951 15:55:12 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@47 -- # : 0 00:13:43.951 15:55:12 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:43.951 15:55:12 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:43.951 15:55:12 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:43.951 15:55:12 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:43.951 15:55:12 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:43.951 15:55:12 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:43.951 15:55:12 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:43.951 15:55:12 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:43.951 15:55:12 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:43.951 15:55:12 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:43.951 15:55:12 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:13:43.951 15:55:12 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:13:43.951 15:55:12 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:13:43.951 15:55:12 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- 
compliance/compliance.sh@20 -- # nvmfpid=3696491 00:13:43.951 15:55:12 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@21 -- # echo 'Process pid: 3696491' 00:13:43.951 Process pid: 3696491 00:13:43.951 15:55:12 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:13:43.951 15:55:12 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@24 -- # waitforlisten 3696491 00:13:43.951 15:55:12 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:13:43.951 15:55:12 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@829 -- # '[' -z 3696491 ']' 00:13:43.951 15:55:12 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:43.951 15:55:12 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:43.951 15:55:12 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:43.951 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:43.951 15:55:12 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:43.951 15:55:12 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:13:43.951 [2024-07-15 15:55:12.800544] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 00:13:43.951 [2024-07-15 15:55:12.800588] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:43.951 EAL: No free 2048 kB hugepages reported on node 1 00:13:43.951 [2024-07-15 15:55:12.854098] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:44.210 [2024-07-15 15:55:12.927850] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:44.210 [2024-07-15 15:55:12.927891] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:44.210 [2024-07-15 15:55:12.927898] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:44.210 [2024-07-15 15:55:12.927904] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:44.210 [2024-07-15 15:55:12.927909] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
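
The target above is launched with three flags that the startup notices go on to explain: -i 0 sets the shared-memory instance id (the same id spdk_trace takes), -e 0xFFFF enables every nvmf tracepoint group, and -m 0x7 is the core mask for three reactors. A short sketch of reproducing that and capturing the trace; the binary path matches the trace, and the spdk_trace line is quoted from the notice itself:

    # launch as traced: shm instance 0, tracepoint group mask 0xFFFF, cores 0-2
    ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 &

    # snapshot live tracepoints while it runs, exactly as the notice suggests
    spdk_trace -s nvmf -i 0

    # or keep the shared-memory ring for offline analysis/debug
    cp /dev/shm/nvmf_trace.0 /tmp/
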
00:13:44.210 [2024-07-15 15:55:12.927954] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:44.210 [2024-07-15 15:55:12.928040] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:44.210 [2024-07-15 15:55:12.928042] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:44.787 15:55:13 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:44.787 15:55:13 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@862 -- # return 0 00:13:44.787 15:55:13 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@26 -- # sleep 1 00:13:45.720 15:55:14 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:13:45.720 15:55:14 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:13:45.720 15:55:14 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:13:45.720 15:55:14 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:45.720 15:55:14 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:13:45.720 15:55:14 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:45.720 15:55:14 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:13:45.720 15:55:14 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:13:45.720 15:55:14 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:45.720 15:55:14 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:13:45.979 malloc0 00:13:45.979 15:55:14 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:45.979 15:55:14 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32 00:13:45.979 15:55:14 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:45.979 15:55:14 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:13:45.979 15:55:14 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:45.979 15:55:14 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:13:45.979 15:55:14 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:45.979 15:55:14 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:13:45.979 15:55:14 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:45.979 15:55:14 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:13:45.979 15:55:14 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:45.979 15:55:14 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:13:45.979 15:55:14 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:45.980 
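
Each rpc_cmd above is a thin wrapper around an SPDK RPC, so the whole vfio-user provisioning sequence can be replayed with plain scripts/rpc.py calls against the default /var/tmp/spdk.sock. A sketch under that assumption (flag readings follow rpc.py's usual spelling: -a allow any host, -s serial number, -m max namespaces on the subsystem; -t/-a/-s transport, address and service id on the listener):

    # transport plus a 64 MiB malloc bdev with 512-byte blocks, as traced
    scripts/rpc.py nvmf_create_transport -t VFIOUSER
    mkdir -p /var/run/vfio-user
    scripts/rpc.py bdev_malloc_create 64 512 -b malloc0

    # expose it through an allow-any-host subsystem on the vfio-user socket dir
    scripts/rpc.py nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 \
        -t VFIOUSER -a /var/run/vfio-user -s 0
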
15:55:14 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:13:45.980 EAL: No free 2048 kB hugepages reported on node 1 00:13:45.980 00:13:45.980 00:13:45.980 CUnit - A unit testing framework for C - Version 2.1-3 00:13:45.980 http://cunit.sourceforge.net/ 00:13:45.980 00:13:45.980 00:13:45.980 Suite: nvme_compliance 00:13:45.980 Test: admin_identify_ctrlr_verify_dptr ...[2024-07-15 15:55:14.835701] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:45.980 [2024-07-15 15:55:14.837043] vfio_user.c: 804:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:13:45.980 [2024-07-15 15:55:14.837059] vfio_user.c:5514:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:13:45.980 [2024-07-15 15:55:14.837065] vfio_user.c:5607:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:13:45.980 [2024-07-15 15:55:14.838723] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:45.980 passed 00:13:46.295 Test: admin_identify_ctrlr_verify_fused ...[2024-07-15 15:55:14.915250] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:46.295 [2024-07-15 15:55:14.918273] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:46.295 passed 00:13:46.296 Test: admin_identify_ns ...[2024-07-15 15:55:14.998667] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:46.296 [2024-07-15 15:55:15.062238] ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:13:46.296 [2024-07-15 15:55:15.070236] ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:13:46.296 [2024-07-15 15:55:15.091341] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:46.296 passed 00:13:46.296 Test: admin_get_features_mandatory_features ...[2024-07-15 15:55:15.164676] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:46.296 [2024-07-15 15:55:15.167700] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:46.296 passed 00:13:46.553 Test: admin_get_features_optional_features ...[2024-07-15 15:55:15.245215] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:46.553 [2024-07-15 15:55:15.248233] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:46.553 passed 00:13:46.553 Test: admin_set_features_number_of_queues ...[2024-07-15 15:55:15.326096] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:46.553 [2024-07-15 15:55:15.429321] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:46.553 passed 00:13:46.811 Test: admin_get_log_page_mandatory_logs ...[2024-07-15 15:55:15.505585] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:46.811 [2024-07-15 15:55:15.508602] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:46.811 passed 00:13:46.811 Test: admin_get_log_page_with_lpo ...[2024-07-15 15:55:15.586666] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:46.811 [2024-07-15 15:55:15.658233] 
ctrlr.c:2677:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:13:46.811 [2024-07-15 15:55:15.671288] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:46.811 passed 00:13:47.069 Test: fabric_property_get ...[2024-07-15 15:55:15.745394] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:47.069 [2024-07-15 15:55:15.746618] vfio_user.c:5607:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x7f failed 00:13:47.069 [2024-07-15 15:55:15.748410] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:47.069 passed 00:13:47.069 Test: admin_delete_io_sq_use_admin_qid ...[2024-07-15 15:55:15.826896] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:47.069 [2024-07-15 15:55:15.828142] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:13:47.069 [2024-07-15 15:55:15.831926] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:47.069 passed 00:13:47.069 Test: admin_delete_io_sq_delete_sq_twice ...[2024-07-15 15:55:15.910711] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:47.069 [2024-07-15 15:55:15.995238] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:13:47.327 [2024-07-15 15:55:16.011236] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:13:47.327 [2024-07-15 15:55:16.016323] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:47.327 passed 00:13:47.327 Test: admin_delete_io_cq_use_admin_qid ...[2024-07-15 15:55:16.090457] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:47.327 [2024-07-15 15:55:16.091696] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 00:13:47.327 [2024-07-15 15:55:16.094486] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:47.327 passed 00:13:47.327 Test: admin_delete_io_cq_delete_cq_first ...[2024-07-15 15:55:16.172371] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:47.327 [2024-07-15 15:55:16.250237] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:13:47.586 [2024-07-15 15:55:16.274237] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:13:47.586 [2024-07-15 15:55:16.279312] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:47.586 passed 00:13:47.586 Test: admin_create_io_cq_verify_iv_pc ...[2024-07-15 15:55:16.353642] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:47.586 [2024-07-15 15:55:16.354877] vfio_user.c:2158:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:13:47.586 [2024-07-15 15:55:16.354900] vfio_user.c:2152:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:13:47.586 [2024-07-15 15:55:16.356663] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:47.586 passed 00:13:47.586 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-07-15 15:55:16.435620] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:47.845 [2024-07-15 15:55:16.528244] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: 
invalid I/O queue size 1 00:13:47.845 [2024-07-15 15:55:16.536236] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257 00:13:47.845 [2024-07-15 15:55:16.544233] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:13:47.845 [2024-07-15 15:55:16.552231] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:13:47.845 [2024-07-15 15:55:16.581315] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:47.845 passed 00:13:47.845 Test: admin_create_io_sq_verify_pc ...[2024-07-15 15:55:16.658403] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:47.845 [2024-07-15 15:55:16.675240] vfio_user.c:2051:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:13:47.845 [2024-07-15 15:55:16.692578] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:47.845 passed 00:13:47.845 Test: admin_create_io_qp_max_qps ...[2024-07-15 15:55:16.765080] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:49.219 [2024-07-15 15:55:17.852236] nvme_ctrlr.c:5465:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user] No free I/O queue IDs 00:13:49.477 [2024-07-15 15:55:18.232336] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:49.477 passed 00:13:49.477 Test: admin_create_io_sq_shared_cq ...[2024-07-15 15:55:18.309682] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:49.735 [2024-07-15 15:55:18.445231] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:13:49.735 [2024-07-15 15:55:18.482296] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:49.735 passed 00:13:49.735 00:13:49.735 Run Summary: Type Total Ran Passed Failed Inactive 00:13:49.735 suites 1 1 n/a 0 0 00:13:49.735 tests 18 18 18 0 0 00:13:49.735 asserts 360 360 360 0 n/a 00:13:49.735 00:13:49.735 Elapsed time = 1.502 seconds 00:13:49.735 15:55:18 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@42 -- # killprocess 3696491 00:13:49.735 15:55:18 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@948 -- # '[' -z 3696491 ']' 00:13:49.735 15:55:18 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@952 -- # kill -0 3696491 00:13:49.735 15:55:18 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@953 -- # uname 00:13:49.735 15:55:18 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:49.735 15:55:18 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3696491 00:13:49.735 15:55:18 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:13:49.735 15:55:18 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:13:49.735 15:55:18 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3696491' 00:13:49.735 killing process with pid 3696491 00:13:49.735 15:55:18 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@967 -- # kill 3696491 00:13:49.735 15:55:18 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@972 -- # wait 3696491 00:13:49.994 15:55:18 
nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:13:49.994 15:55:18 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:13:49.994 00:13:49.994 real 0m6.117s 00:13:49.994 user 0m17.482s 00:13:49.994 sys 0m0.468s 00:13:49.994 15:55:18 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:49.994 15:55:18 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:13:49.994 ************************************ 00:13:49.994 END TEST nvmf_vfio_user_nvme_compliance 00:13:49.994 ************************************ 00:13:49.994 15:55:18 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:13:49.994 15:55:18 nvmf_tcp -- nvmf/nvmf.sh@43 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:13:49.994 15:55:18 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:13:49.994 15:55:18 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:49.994 15:55:18 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:49.994 ************************************ 00:13:49.994 START TEST nvmf_vfio_user_fuzz 00:13:49.994 ************************************ 00:13:49.994 15:55:18 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:13:49.994 * Looking for test storage... 00:13:49.994 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:49.994 15:55:18 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:49.994 15:55:18 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # uname -s 00:13:49.994 15:55:18 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:49.994 15:55:18 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:49.994 15:55:18 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:49.994 15:55:18 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:49.994 15:55:18 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:49.994 15:55:18 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:49.994 15:55:18 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:49.994 15:55:18 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:49.994 15:55:18 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:49.994 15:55:18 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:50.254 15:55:18 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:13:50.254 15:55:18 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:13:50.254 15:55:18 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:50.254 15:55:18 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:50.254 15:55:18 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 
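
One detail of the common.sh setup traced above: the host identity is regenerated for every test with nvme-cli, and the host id is the bare UUID tail of that NQN. A sketch of the pairing; the exact derivation inside common.sh is an assumption, the log only shows that the two values agree:

    NVME_HOSTNQN=$(nvme gen-hostnqn)
    # -> nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562
    NVME_HOSTID=${NVME_HOSTNQN##*uuid:}   # assumed: strip through "uuid:" to the bare UUID
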
00:13:50.254 15:55:18 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:50.254 15:55:18 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:50.254 15:55:18 nvmf_tcp.nvmf_vfio_user_fuzz -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:50.254 15:55:18 nvmf_tcp.nvmf_vfio_user_fuzz -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:50.254 15:55:18 nvmf_tcp.nvmf_vfio_user_fuzz -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:50.254 15:55:18 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:50.254 15:55:18 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:50.254 15:55:18 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:50.254 15:55:18 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@5 -- # export PATH 00:13:50.254 15:55:18 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:50.254 15:55:18 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@47 -- # : 0 00:13:50.254 15:55:18 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:50.254 15:55:18 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:50.254 15:55:18 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:50.254 15:55:18 
nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:50.254 15:55:18 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:50.254 15:55:18 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:50.254 15:55:18 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:50.254 15:55:18 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:50.254 15:55:18 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@12 -- # MALLOC_BDEV_SIZE=64 00:13:50.254 15:55:18 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:13:50.254 15:55:18 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:13:50.254 15:55:18 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:13:50.254 15:55:18 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:13:50.254 15:55:18 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:13:50.254 15:55:18 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:13:50.254 15:55:18 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=3697481 00:13:50.254 15:55:18 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 3697481' 00:13:50.254 Process pid: 3697481 00:13:50.254 15:55:18 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:13:50.254 15:55:18 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 3697481 00:13:50.254 15:55:18 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:13:50.254 15:55:18 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@829 -- # '[' -z 3697481 ']' 00:13:50.254 15:55:18 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:50.254 15:55:18 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:50.254 15:55:18 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:50.254 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
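
Two things worth noting at this point: the fuzz target is pinned to core mask 0x1 while the fuzzer app started below runs with -m 0x2, so the two never share a core; and waitforlisten blocks until the new pid answers on the RPC socket. A sketch of the waitforlisten idea follows - the loop body is an assumption, since the trace only shows rpc_addr=/var/tmp/spdk.sock, max_retries=100 and the echo:

    # assumed shape of waitforlisten, not the verbatim helper
    waitforlisten() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} i
        echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
        for ((i = 100; i > 0; i--)); do
            kill -0 "$pid" 2>/dev/null || return 1                # target died early
            scripts/rpc.py -s "$rpc_addr" rpc_get_methods &>/dev/null && return 0
            sleep 0.5
        done
        return 1                                                  # retries exhausted
    }
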
00:13:50.254 15:55:18 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:50.254 15:55:18 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:13:51.189 15:55:19 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:51.189 15:55:19 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@862 -- # return 0 00:13:51.189 15:55:19 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:13:52.126 15:55:20 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:13:52.126 15:55:20 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:52.126 15:55:20 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:13:52.126 15:55:20 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:52.126 15:55:20 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:13:52.126 15:55:20 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:13:52.126 15:55:20 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:52.126 15:55:20 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:13:52.126 malloc0 00:13:52.126 15:55:20 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:52.126 15:55:20 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:13:52.127 15:55:20 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:52.127 15:55:20 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:13:52.127 15:55:20 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:52.127 15:55:20 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:13:52.127 15:55:20 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:52.127 15:55:20 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:13:52.127 15:55:20 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:52.127 15:55:20 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:13:52.127 15:55:20 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:52.127 15:55:20 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:13:52.127 15:55:20 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:52.127 15:55:20 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 00:13:52.127 15:55:20 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:14:24.198 Fuzzing completed. 
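
The fuzzer is handed the endpoint as one transport-ID string that mirrors the listener created just above. Reading -t 30 as a 30-second run time and -S 123456 as a fixed seed is an inference (the wall clock reported below is consistent with it); -N and -a are reproduced as-is, since this log never explains them:

    # replay of the traced fuzz run; flag readings hedged as noted above
    trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user'
    ./test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F "$trid" -N -a
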
Shutting down the fuzz application 00:14:24.198 00:14:24.198 Dumping successful admin opcodes: 00:14:24.198 8, 9, 10, 24, 00:14:24.198 Dumping successful io opcodes: 00:14:24.198 0, 00:14:24.198 NS: 0x200003a1ef00 I/O qp, Total commands completed: 1005719, total successful commands: 3942, random_seed: 1753436672 00:14:24.198 NS: 0x200003a1ef00 admin qp, Total commands completed: 249691, total successful commands: 2018, random_seed: 3358536640 00:14:24.198 15:55:51 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:14:24.198 15:55:51 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:24.198 15:55:51 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:14:24.198 15:55:51 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:24.198 15:55:51 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@46 -- # killprocess 3697481 00:14:24.198 15:55:51 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@948 -- # '[' -z 3697481 ']' 00:14:24.198 15:55:51 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@952 -- # kill -0 3697481 00:14:24.198 15:55:51 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@953 -- # uname 00:14:24.198 15:55:51 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:24.198 15:55:51 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3697481 00:14:24.198 15:55:51 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:14:24.198 15:55:51 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:14:24.198 15:55:51 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3697481' 00:14:24.198 killing process with pid 3697481 00:14:24.198 15:55:51 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@967 -- # kill 3697481 00:14:24.198 15:55:51 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@972 -- # wait 3697481 00:14:24.198 15:55:51 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_tgt_output.txt 00:14:24.198 15:55:51 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:14:24.198 00:14:24.198 real 0m32.752s 00:14:24.198 user 0m31.006s 00:14:24.198 sys 0m30.101s 00:14:24.198 15:55:51 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:24.198 15:55:51 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:14:24.198 ************************************ 00:14:24.198 END TEST nvmf_vfio_user_fuzz 00:14:24.198 ************************************ 00:14:24.198 15:55:51 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:14:24.198 15:55:51 nvmf_tcp -- nvmf/nvmf.sh@47 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:14:24.198 15:55:51 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:14:24.198 15:55:51 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:24.198 15:55:51 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:24.198 ************************************ 
00:14:24.198 START TEST nvmf_host_management 00:14:24.198 ************************************ 00:14:24.198 15:55:51 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:14:24.198 * Looking for test storage... 00:14:24.198 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:24.198 15:55:51 nvmf_tcp.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:24.198 15:55:51 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:14:24.198 15:55:51 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:24.198 15:55:51 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:24.198 15:55:51 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:24.198 15:55:51 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:24.198 15:55:51 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:24.198 15:55:51 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:24.198 15:55:51 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:24.199 15:55:51 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:24.199 15:55:51 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:24.199 15:55:51 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:24.199 15:55:51 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:14:24.199 15:55:51 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:14:24.199 15:55:51 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:24.199 15:55:51 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:24.199 15:55:51 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:24.199 15:55:51 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:24.199 15:55:51 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:24.199 15:55:51 nvmf_tcp.nvmf_host_management -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:24.199 15:55:51 nvmf_tcp.nvmf_host_management -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:24.199 15:55:51 nvmf_tcp.nvmf_host_management -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:24.199 15:55:51 nvmf_tcp.nvmf_host_management -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:24.199 
15:55:51 nvmf_tcp.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:24.199 15:55:51 nvmf_tcp.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:24.199 15:55:51 nvmf_tcp.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:14:24.199 15:55:51 nvmf_tcp.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:24.199 15:55:51 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@47 -- # : 0 00:14:24.199 15:55:51 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:24.199 15:55:51 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:24.199 15:55:51 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:24.199 15:55:51 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:24.199 15:55:51 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:24.199 15:55:51 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:24.199 15:55:51 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:24.199 15:55:51 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:24.199 15:55:51 nvmf_tcp.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:24.199 15:55:51 nvmf_tcp.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:24.199 15:55:51 nvmf_tcp.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:14:24.199 15:55:51 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:24.199 15:55:51 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:24.199 15:55:51 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:24.199 15:55:51 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:24.199 15:55:51 
nvmf_tcp.nvmf_host_management -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:24.199 15:55:51 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:24.199 15:55:51 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:24.199 15:55:51 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:24.199 15:55:51 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:24.199 15:55:51 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:24.199 15:55:51 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@285 -- # xtrace_disable 00:14:24.199 15:55:51 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:14:28.387 15:55:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:28.387 15:55:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@291 -- # pci_devs=() 00:14:28.387 15:55:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:28.387 15:55:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:28.387 15:55:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:28.387 15:55:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:28.387 15:55:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:28.387 15:55:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@295 -- # net_devs=() 00:14:28.387 15:55:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:28.387 15:55:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@296 -- # e810=() 00:14:28.387 15:55:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@296 -- # local -ga e810 00:14:28.387 15:55:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@297 -- # x722=() 00:14:28.387 15:55:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@297 -- # local -ga x722 00:14:28.387 15:55:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@298 -- # mlx=() 00:14:28.387 15:55:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@298 -- # local -ga mlx 00:14:28.387 15:55:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:28.387 15:55:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:28.387 15:55:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:28.387 15:55:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:28.387 15:55:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:28.387 15:55:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:28.387 15:55:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:28.387 15:55:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:28.387 15:55:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:28.387 15:55:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:28.387 15:55:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@318 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:28.387 15:55:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:28.387 15:55:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:28.387 15:55:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:28.387 15:55:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:28.387 15:55:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:28.387 15:55:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:28.387 15:55:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:28.387 15:55:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:14:28.387 Found 0000:86:00.0 (0x8086 - 0x159b) 00:14:28.387 15:55:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:28.387 15:55:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:28.387 15:55:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:28.387 15:55:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:28.387 15:55:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:28.387 15:55:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:28.387 15:55:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:14:28.387 Found 0000:86:00.1 (0x8086 - 0x159b) 00:14:28.387 15:55:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:28.387 15:55:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:28.387 15:55:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:28.387 15:55:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:28.387 15:55:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:28.387 15:55:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:28.387 15:55:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:28.387 15:55:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:28.387 15:55:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:28.387 15:55:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:28.387 15:55:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:28.387 15:55:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:28.387 15:55:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:28.387 15:55:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:28.387 15:55:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:28.387 15:55:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:14:28.387 Found net devices under 0000:86:00.0: cvl_0_0 00:14:28.387 15:55:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:14:28.387 15:55:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:28.387 15:55:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:28.387 15:55:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:28.387 15:55:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:28.387 15:55:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:28.387 15:55:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:28.387 15:55:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:28.387 15:55:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:14:28.387 Found net devices under 0000:86:00.1: cvl_0_1 00:14:28.387 15:55:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:28.387 15:55:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:28.387 15:55:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # is_hw=yes 00:14:28.387 15:55:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:28.387 15:55:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:28.387 15:55:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:28.387 15:55:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:28.387 15:55:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:28.387 15:55:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:28.387 15:55:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:28.387 15:55:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:28.387 15:55:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:28.388 15:55:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:28.388 15:55:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:28.388 15:55:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:28.388 15:55:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:28.388 15:55:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:28.388 15:55:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:28.388 15:55:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:28.388 15:55:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:28.388 15:55:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:28.388 15:55:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:28.388 15:55:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:28.388 15:55:57 nvmf_tcp.nvmf_host_management -- 
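
What this block builds: the two ports of one physical NIC are split across network namespaces so target and initiator traffic crosses a real link. cvl_0_0 moves into the cvl_0_0_ns_spdk namespace and carries the target address 10.0.0.2/24; cvl_0_1 stays in the root namespace as the initiator at 10.0.0.1/24. Condensed from the commands traced here and immediately below:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk               # target port into its own namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                     # initiator side, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP port
    ping -c 1 10.0.0.2                                      # root namespace -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1        # and the reverse path
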
nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:28.388 15:55:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:28.388 15:55:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:28.388 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:28.388 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.166 ms 00:14:28.388 00:14:28.388 --- 10.0.0.2 ping statistics --- 00:14:28.388 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:28.388 rtt min/avg/max/mdev = 0.166/0.166/0.166/0.000 ms 00:14:28.388 15:55:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:28.388 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:28.388 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.186 ms 00:14:28.388 00:14:28.388 --- 10.0.0.1 ping statistics --- 00:14:28.388 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:28.388 rtt min/avg/max/mdev = 0.186/0.186/0.186/0.000 ms 00:14:28.388 15:55:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:28.388 15:55:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@422 -- # return 0 00:14:28.388 15:55:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:28.388 15:55:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:28.388 15:55:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:28.388 15:55:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:28.388 15:55:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:28.388 15:55:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:28.388 15:55:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:28.388 15:55:57 nvmf_tcp.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:14:28.388 15:55:57 nvmf_tcp.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:14:28.388 15:55:57 nvmf_tcp.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:14:28.388 15:55:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:28.388 15:55:57 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:28.388 15:55:57 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:14:28.388 15:55:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@481 -- # nvmfpid=3706007 00:14:28.388 15:55:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@482 -- # waitforlisten 3706007 00:14:28.388 15:55:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:14:28.388 15:55:57 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@829 -- # '[' -z 3706007 ']' 00:14:28.388 15:55:57 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:28.388 15:55:57 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:28.388 15:55:57 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:14:28.388 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:28.388 15:55:57 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:28.388 15:55:57 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:14:28.388 [2024-07-15 15:55:57.133974] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 00:14:28.388 [2024-07-15 15:55:57.134017] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:28.388 EAL: No free 2048 kB hugepages reported on node 1 00:14:28.388 [2024-07-15 15:55:57.191297] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:28.388 [2024-07-15 15:55:57.264552] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:28.388 [2024-07-15 15:55:57.264593] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:28.388 [2024-07-15 15:55:57.264600] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:28.388 [2024-07-15 15:55:57.264606] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:28.388 [2024-07-15 15:55:57.264612] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:28.388 [2024-07-15 15:55:57.264712] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:14:28.388 [2024-07-15 15:55:57.264803] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:14:28.388 [2024-07-15 15:55:57.264912] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:28.388 [2024-07-15 15:55:57.264914] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:14:29.324 15:55:57 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:29.324 15:55:57 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@862 -- # return 0 00:14:29.324 15:55:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:29.324 15:55:57 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:29.324 15:55:57 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:14:29.324 15:55:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:29.324 15:55:57 nvmf_tcp.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:29.324 15:55:57 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:29.324 15:55:57 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:14:29.324 [2024-07-15 15:55:57.987318] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:29.324 15:55:57 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:29.324 15:55:57 nvmf_tcp.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:14:29.324 15:55:57 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:29.324 15:55:57 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:14:29.324 15:55:57 
nvmf_tcp.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:14:29.324 15:55:58 nvmf_tcp.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:14:29.324 15:55:58 nvmf_tcp.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:14:29.324 15:55:58 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:29.324 15:55:58 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:14:29.324 Malloc0 00:14:29.324 [2024-07-15 15:55:58.047324] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:29.324 15:55:58 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:29.324 15:55:58 nvmf_tcp.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:14:29.324 15:55:58 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:29.324 15:55:58 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:14:29.324 15:55:58 nvmf_tcp.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=3706276 00:14:29.324 15:55:58 nvmf_tcp.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 3706276 /var/tmp/bdevperf.sock 00:14:29.324 15:55:58 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@829 -- # '[' -z 3706276 ']' 00:14:29.324 15:55:58 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:29.324 15:55:58 nvmf_tcp.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:14:29.324 15:55:58 nvmf_tcp.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:14:29.324 15:55:58 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:29.324 15:55:58 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:29.324 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
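For orientation: the target bring-up traced above (the TCP transport at host_management.sh@18, then the rpcs.txt batch assembled at @23 and replayed at @30) can be reproduced by hand with SPDK's rpc.py. A minimal sketch in that spirit follows; the rpc.py path, the Malloc0 geometry, and the serial number are assumptions -- only the transport options, NQNs, address, and port appear in this trace. Note that rpc.py talks to nvmf_tgt over a UNIX socket (/var/tmp/spdk.sock by default), which remains reachable even though the target runs inside the cvl_0_0_ns_spdk network namespace.

RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
# TCP transport with in-capsule data (-o) and an 8192-byte I/O unit (-u),
# matching the nvmf_create_transport call traced above.
$RPC nvmf_create_transport -t tcp -o -u 8192
# Backing bdev and subsystem; 64 MiB x 512 B blocks is an assumed geometry,
# the trace only shows that a bdev named Malloc0 exists.
$RPC bdev_malloc_create 64 512 -b Malloc0
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -s SPDK0
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Malloc0
$RPC nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
# Listener on the address/port the trace reports: 10.0.0.2 port 4420.
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420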
00:14:29.324 15:55:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:14:29.324 15:55:58 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:29.324 15:55:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:14:29.324 15:55:58 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:14:29.324 15:55:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:14:29.324 15:55:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:14:29.324 { 00:14:29.324 "params": { 00:14:29.324 "name": "Nvme$subsystem", 00:14:29.324 "trtype": "$TEST_TRANSPORT", 00:14:29.324 "traddr": "$NVMF_FIRST_TARGET_IP", 00:14:29.324 "adrfam": "ipv4", 00:14:29.324 "trsvcid": "$NVMF_PORT", 00:14:29.324 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:14:29.324 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:14:29.324 "hdgst": ${hdgst:-false}, 00:14:29.324 "ddgst": ${ddgst:-false} 00:14:29.324 }, 00:14:29.324 "method": "bdev_nvme_attach_controller" 00:14:29.324 } 00:14:29.324 EOF 00:14:29.324 )") 00:14:29.324 15:55:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:14:29.324 15:55:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:14:29.325 15:55:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:14:29.325 15:55:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:14:29.325 "params": { 00:14:29.325 "name": "Nvme0", 00:14:29.325 "trtype": "tcp", 00:14:29.325 "traddr": "10.0.0.2", 00:14:29.325 "adrfam": "ipv4", 00:14:29.325 "trsvcid": "4420", 00:14:29.325 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:14:29.325 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:14:29.325 "hdgst": false, 00:14:29.325 "ddgst": false 00:14:29.325 }, 00:14:29.325 "method": "bdev_nvme_attach_controller" 00:14:29.325 }' 00:14:29.325 [2024-07-15 15:55:58.138696] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 00:14:29.325 [2024-07-15 15:55:58.138742] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3706276 ] 00:14:29.325 EAL: No free 2048 kB hugepages reported on node 1 00:14:29.325 [2024-07-15 15:55:58.192622] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:29.583 [2024-07-15 15:55:58.266552] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:29.841 Running I/O for 10 seconds... 
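The "Running I/O for 10 seconds..." banner above is bdevperf's own output; the harness then polls bdevperf's RPC socket until Nvme0n1 has accumulated enough reads, which is the waitforio call traced next. A standalone sketch of that poll loop, with the retry count (10) and read threshold (100) taken from the trace and the sleep interval and rpc.py path assumed:

# Poll bdevperf's RPC socket until the bdev has serviced at least 100 reads.
wait_for_reads() {
    local sock=$1 bdev=$2 reads i
    for ((i = 10; i > 0; i--)); do
        reads=$(/path/to/spdk/scripts/rpc.py -s "$sock" bdev_get_iostat -b "$bdev" |
                jq -r '.bdevs[0].num_read_ops')
        [ "$reads" -ge 100 ] && return 0
        sleep 0.25  # interval assumed; the trace shows only the iostat calls
    done
    return 1
}
wait_for_reads /var/tmp/bdevperf.sock Nvme0n1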
00:14:30.101 15:55:58 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:30.101 15:55:58 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@862 -- # return 0 00:14:30.101 15:55:58 nvmf_tcp.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:14:30.101 15:55:58 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:30.101 15:55:58 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:14:30.101 15:55:58 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:30.101 15:55:58 nvmf_tcp.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:30.101 15:55:58 nvmf_tcp.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:14:30.101 15:55:58 nvmf_tcp.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:14:30.101 15:55:58 nvmf_tcp.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:14:30.101 15:55:58 nvmf_tcp.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:14:30.101 15:55:58 nvmf_tcp.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:14:30.101 15:55:58 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:14:30.101 15:55:58 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:14:30.101 15:55:58 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:14:30.101 15:55:58 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:30.101 15:55:58 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:14:30.101 15:55:58 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:14:30.101 15:55:58 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:30.101 15:55:59 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=707 00:14:30.101 15:55:59 nvmf_tcp.nvmf_host_management -- target/host_management.sh@58 -- # '[' 707 -ge 100 ']' 00:14:30.101 15:55:59 nvmf_tcp.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:14:30.101 15:55:59 nvmf_tcp.nvmf_host_management -- target/host_management.sh@60 -- # break 00:14:30.101 15:55:59 nvmf_tcp.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:14:30.101 15:55:59 nvmf_tcp.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:14:30.101 15:55:59 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:30.101 15:55:59 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:14:30.101 [2024-07-15 15:55:59.030554] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1534460 is same with the state(5) to be set
[tcp.c:1621 message repeated verbatim for tqpair=0x1534460 through 15:55:59.030915]
00:14:30.102 [2024-07-15 15:55:59.032518] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:14:30.102 [2024-07-15 15:55:59.032561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[ASYNC EVENT REQUESTs qid:0 cid:1-3 likewise printed and completed ABORTED - SQ DELETION (00/08) through 15:55:59.032607]
00:14:30.102 [2024-07-15 15:55:59.032613] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a1980 is same with the state(5) to be set 00:14:30.102 [2024-07-15 15:55:59.033491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:100224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:30.102 [2024-07-15 15:55:59.033513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[remaining queued I/O -- READ sqid:1 cid:16-63 (lba 100352-106368) and WRITE sqid:1 cid:0-14 (lba 106496-108288), len:128 each -- printed and completed ABORTED - SQ DELETION (00/08) in the same pattern through 15:55:59.034554]
00:14:30.364 [2024-07-15 15:55:59.034627] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x18b2b20 was disconnected and freed. reset controller. 
00:14:30.364 15:55:59 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:30.364 [2024-07-15 15:55:59.035552] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:14:30.364 15:55:59 nvmf_tcp.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:14:30.364 15:55:59 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:30.364 15:55:59 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:14:30.364 task offset: 100224 on job bdev=Nvme0n1 fails 00:14:30.364 00:14:30.364 Latency(us) 00:14:30.364 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:30.364 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:14:30.364 Job: Nvme0n1 ended in about 0.45 seconds with error 00:14:30.364 Verification LBA range: start 0x0 length 0x400 00:14:30.364 Nvme0n1 : 0.45 1725.84 107.87 141.06 0.00 33451.44 1417.57 29063.79 00:14:30.364 =================================================================================================================== 00:14:30.364 Total : 1725.84 107.87 141.06 0.00 33451.44 1417.57 29063.79 00:14:30.364 [2024-07-15 15:55:59.037143] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:14:30.364 [2024-07-15 15:55:59.037158] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a1980 (9): Bad file descriptor 00:14:30.364 [2024-07-15 15:55:59.040208] ctrlr.c: 822:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode0' does not allow host 'nqn.2016-06.io.spdk:host0' 00:14:30.364 [2024-07-15 15:55:59.040341] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:3 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:14:30.364 [2024-07-15 15:55:59.040366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND SPECIFIC (01/84) qid:0 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:30.364 [2024-07-15 15:55:59.040382] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode0 00:14:30.364 [2024-07-15 15:55:59.040395] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 132 00:14:30.364 [2024-07-15 15:55:59.040404] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:14:30.364 [2024-07-15 15:55:59.040413] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14a1980 00:14:30.364 [2024-07-15 15:55:59.040434] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a1980 (9): Bad file descriptor 00:14:30.364 [2024-07-15 15:55:59.040447] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:14:30.364 [2024-07-15 15:55:59.040454] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:14:30.364 [2024-07-15 15:55:59.040462] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:14:30.364 [2024-07-15 15:55:59.040475] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
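Everything from the qpair recv-state flood down to "Resetting controller failed." above is the intended effect of host_management.sh@84/@85: revoking the host's access forcibly disconnects its qpairs, aborts the in-flight I/O with SQ DELETION, and makes the initiator's reconnect fail with the COMMAND SPECIFIC (01/84) CONNECT status until the host is re-added. A minimal sketch of that toggle, assuming rpc.py against the target's default socket:

RPC=/path/to/spdk/scripts/rpc.py
# Revoke access: the target tears down host0's qpairs, aborting queued I/O.
$RPC nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
sleep 1  # give the initiator time to observe the disconnect and fail a reconnect
# Restore access: subsequent CONNECTs from host0 succeed again.
$RPC nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0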
00:14:30.364 15:55:59 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:30.364 15:55:59 nvmf_tcp.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:14:31.301 15:56:00 nvmf_tcp.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 3706276 00:14:31.301 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (3706276) - No such process 00:14:31.301 15:56:00 nvmf_tcp.nvmf_host_management -- target/host_management.sh@91 -- # true 00:14:31.301 15:56:00 nvmf_tcp.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:14:31.301 15:56:00 nvmf_tcp.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:14:31.301 15:56:00 nvmf_tcp.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:14:31.301 15:56:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:14:31.301 15:56:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:14:31.301 15:56:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:14:31.301 15:56:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:14:31.301 { 00:14:31.301 "params": { 00:14:31.301 "name": "Nvme$subsystem", 00:14:31.301 "trtype": "$TEST_TRANSPORT", 00:14:31.301 "traddr": "$NVMF_FIRST_TARGET_IP", 00:14:31.301 "adrfam": "ipv4", 00:14:31.301 "trsvcid": "$NVMF_PORT", 00:14:31.301 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:14:31.301 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:14:31.301 "hdgst": ${hdgst:-false}, 00:14:31.301 "ddgst": ${ddgst:-false} 00:14:31.301 }, 00:14:31.301 "method": "bdev_nvme_attach_controller" 00:14:31.301 } 00:14:31.301 EOF 00:14:31.301 )") 00:14:31.301 15:56:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:14:31.301 15:56:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:14:31.301 15:56:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:14:31.301 15:56:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:14:31.301 "params": { 00:14:31.301 "name": "Nvme0", 00:14:31.301 "trtype": "tcp", 00:14:31.301 "traddr": "10.0.0.2", 00:14:31.301 "adrfam": "ipv4", 00:14:31.301 "trsvcid": "4420", 00:14:31.301 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:14:31.301 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:14:31.301 "hdgst": false, 00:14:31.301 "ddgst": false 00:14:31.301 }, 00:14:31.301 "method": "bdev_nvme_attach_controller" 00:14:31.301 }' 00:14:31.301 [2024-07-15 15:56:00.099320] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 00:14:31.301 [2024-07-15 15:56:00.099368] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3706525 ] 00:14:31.301 EAL: No free 2048 kB hugepages reported on node 1 00:14:31.301 [2024-07-15 15:56:00.153897] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:31.301 [2024-07-15 15:56:00.228161] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:31.560 Running I/O for 1 seconds... 
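The second bdevperf run above receives its configuration over an anonymous fd (--json /dev/fd/62) built by gen_nvmf_target_json; the trace prints only the per-controller fragment. An equivalent standalone invocation via a temp file is sketched below -- the outer "subsystems" wrapper is bdevperf's JSON config format and is an assumption here, while the parameter values are taken from the fragment printed above:

# Write the attach-controller config that gen_nvmf_target_json produced inline.
cat > /tmp/bdevperf_nvme0.json <<'EOF'
{
  "subsystems": [{
    "subsystem": "bdev",
    "config": [{
      "method": "bdev_nvme_attach_controller",
      "params": {
        "name": "Nvme0", "trtype": "tcp", "traddr": "10.0.0.2",
        "adrfam": "ipv4", "trsvcid": "4420",
        "subnqn": "nqn.2016-06.io.spdk:cnode0",
        "hostnqn": "nqn.2016-06.io.spdk:host0",
        "hdgst": false, "ddgst": false
      }
    }]
  }]
}
EOF
# Same workload knobs as the trace: qd 64, 64 KiB I/O, verify, 1 second.
./build/examples/bdevperf --json /tmp/bdevperf_nvme0.json -q 64 -o 65536 -w verify -t 1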
00:14:32.497 00:14:32.497 Latency(us) 00:14:32.497 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:32.497 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:14:32.497 Verification LBA range: start 0x0 length 0x400 00:14:32.497 Nvme0n1 : 1.00 1847.27 115.45 0.00 0.00 34114.95 7978.30 28151.99 00:14:32.497 =================================================================================================================== 00:14:32.497 Total : 1847.27 115.45 0.00 0.00 34114.95 7978.30 28151.99 00:14:32.755 15:56:01 nvmf_tcp.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:14:32.755 15:56:01 nvmf_tcp.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:14:32.755 15:56:01 nvmf_tcp.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:14:32.755 15:56:01 nvmf_tcp.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:14:32.755 15:56:01 nvmf_tcp.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:14:32.755 15:56:01 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:32.755 15:56:01 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@117 -- # sync 00:14:32.755 15:56:01 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:32.755 15:56:01 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@120 -- # set +e 00:14:32.755 15:56:01 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:32.755 15:56:01 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:32.755 rmmod nvme_tcp 00:14:32.755 rmmod nvme_fabrics 00:14:32.755 rmmod nvme_keyring 00:14:32.755 15:56:01 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:32.755 15:56:01 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@124 -- # set -e 00:14:32.755 15:56:01 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@125 -- # return 0 00:14:32.755 15:56:01 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@489 -- # '[' -n 3706007 ']' 00:14:32.755 15:56:01 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@490 -- # killprocess 3706007 00:14:32.755 15:56:01 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@948 -- # '[' -z 3706007 ']' 00:14:32.755 15:56:01 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@952 -- # kill -0 3706007 00:14:32.755 15:56:01 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@953 -- # uname 00:14:32.755 15:56:01 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:32.755 15:56:01 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3706007 00:14:33.045 15:56:01 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:14:33.045 15:56:01 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:14:33.045 15:56:01 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3706007' 00:14:33.045 killing process with pid 3706007 00:14:33.045 15:56:01 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@967 -- # kill 3706007 00:14:33.045 15:56:01 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@972 -- # wait 3706007 00:14:33.045 [2024-07-15 15:56:01.882911] 
app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:14:33.045 15:56:01 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:33.045 15:56:01 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:33.045 15:56:01 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:33.045 15:56:01 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:33.045 15:56:01 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:33.045 15:56:01 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:33.045 15:56:01 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:33.045 15:56:01 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:35.581 15:56:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:35.581 15:56:03 nvmf_tcp.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:14:35.581 00:14:35.581 real 0m12.327s 00:14:35.581 user 0m22.355s 00:14:35.581 sys 0m5.120s 00:14:35.581 15:56:03 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:35.581 15:56:03 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:14:35.581 ************************************ 00:14:35.581 END TEST nvmf_host_management 00:14:35.581 ************************************ 00:14:35.581 15:56:04 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:14:35.581 15:56:04 nvmf_tcp -- nvmf/nvmf.sh@48 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:14:35.581 15:56:04 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:14:35.581 15:56:04 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:35.581 15:56:04 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:35.581 ************************************ 00:14:35.581 START TEST nvmf_lvol 00:14:35.581 ************************************ 00:14:35.581 15:56:04 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:14:35.581 * Looking for test storage... 
00:14:35.581 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:35.581 15:56:04 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:35.581 15:56:04 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:14:35.581 15:56:04 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:35.581 15:56:04 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:35.581 15:56:04 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:35.581 15:56:04 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:35.581 15:56:04 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:35.581 15:56:04 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:35.581 15:56:04 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:35.581 15:56:04 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:35.581 15:56:04 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:35.581 15:56:04 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:35.581 15:56:04 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:14:35.581 15:56:04 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:14:35.581 15:56:04 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:35.581 15:56:04 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:35.581 15:56:04 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:35.581 15:56:04 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:35.581 15:56:04 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:35.581 15:56:04 nvmf_tcp.nvmf_lvol -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:35.581 15:56:04 nvmf_tcp.nvmf_lvol -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:35.581 15:56:04 nvmf_tcp.nvmf_lvol -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:35.581 15:56:04 nvmf_tcp.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:35.581 15:56:04 nvmf_tcp.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:35.581 15:56:04 
nvmf_tcp.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:35.581 15:56:04 nvmf_tcp.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:14:35.581 15:56:04 nvmf_tcp.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:35.581 15:56:04 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@47 -- # : 0 00:14:35.581 15:56:04 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:35.581 15:56:04 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:35.581 15:56:04 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:35.581 15:56:04 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:35.581 15:56:04 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:35.581 15:56:04 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:35.581 15:56:04 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:35.581 15:56:04 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:35.581 15:56:04 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:35.581 15:56:04 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:35.581 15:56:04 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:14:35.581 15:56:04 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:14:35.581 15:56:04 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:35.581 15:56:04 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:14:35.581 15:56:04 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:35.581 15:56:04 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:35.582 15:56:04 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:35.582 15:56:04 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:35.582 15:56:04 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:35.582 15:56:04 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:35.582 15:56:04 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:35.582 15:56:04 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:35.582 15:56:04 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:35.582 15:56:04 nvmf_tcp.nvmf_lvol -- 
nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:35.582 15:56:04 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@285 -- # xtrace_disable 00:14:35.582 15:56:04 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:14:39.772 15:56:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:39.772 15:56:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@291 -- # pci_devs=() 00:14:39.772 15:56:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:39.772 15:56:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:39.772 15:56:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:39.772 15:56:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:39.772 15:56:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:39.772 15:56:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@295 -- # net_devs=() 00:14:39.772 15:56:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:39.772 15:56:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@296 -- # e810=() 00:14:39.772 15:56:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@296 -- # local -ga e810 00:14:39.773 15:56:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@297 -- # x722=() 00:14:39.773 15:56:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@297 -- # local -ga x722 00:14:39.773 15:56:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@298 -- # mlx=() 00:14:39.773 15:56:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@298 -- # local -ga mlx 00:14:39.773 15:56:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:39.773 15:56:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:39.773 15:56:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:39.773 15:56:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:39.773 15:56:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:39.773 15:56:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:39.773 15:56:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:39.773 15:56:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:39.773 15:56:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:39.773 15:56:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:39.773 15:56:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:39.773 15:56:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:39.773 15:56:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:39.773 15:56:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:39.773 15:56:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:39.773 15:56:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:39.773 15:56:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:39.773 15:56:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:39.773 15:56:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:14:39.773 Found 0000:86:00.0 (0x8086 - 0x159b) 00:14:39.773 15:56:08 nvmf_tcp.nvmf_lvol -- 
nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:39.773 15:56:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:39.773 15:56:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:39.773 15:56:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:39.773 15:56:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:39.773 15:56:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:39.773 15:56:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:14:39.773 Found 0000:86:00.1 (0x8086 - 0x159b) 00:14:39.773 15:56:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:39.773 15:56:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:39.773 15:56:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:39.773 15:56:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:39.773 15:56:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:39.773 15:56:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:39.773 15:56:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:39.773 15:56:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:39.773 15:56:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:39.773 15:56:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:39.773 15:56:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:39.773 15:56:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:39.773 15:56:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:39.773 15:56:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:39.773 15:56:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:39.773 15:56:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:14:39.773 Found net devices under 0000:86:00.0: cvl_0_0 00:14:39.773 15:56:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:39.773 15:56:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:39.773 15:56:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:39.773 15:56:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:39.773 15:56:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:39.773 15:56:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:39.773 15:56:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:39.773 15:56:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:39.773 15:56:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:14:39.773 Found net devices under 0000:86:00.1: cvl_0_1 00:14:39.773 15:56:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:39.773 15:56:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:39.773 15:56:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@414 -- # is_hw=yes 00:14:39.773 15:56:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:39.773 
15:56:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:39.773 15:56:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:39.773 15:56:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:39.773 15:56:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:39.773 15:56:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:39.773 15:56:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:39.773 15:56:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:39.773 15:56:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:39.773 15:56:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:39.773 15:56:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:39.773 15:56:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:39.773 15:56:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:39.773 15:56:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:39.773 15:56:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:39.773 15:56:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:39.773 15:56:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:39.773 15:56:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:39.773 15:56:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:39.773 15:56:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:40.033 15:56:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:40.033 15:56:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:40.033 15:56:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:40.033 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:40.033 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.153 ms 00:14:40.033 00:14:40.033 --- 10.0.0.2 ping statistics --- 00:14:40.033 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:40.033 rtt min/avg/max/mdev = 0.153/0.153/0.153/0.000 ms 00:14:40.033 15:56:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:40.033 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:40.033 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.199 ms 00:14:40.033 00:14:40.033 --- 10.0.0.1 ping statistics --- 00:14:40.033 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:40.033 rtt min/avg/max/mdev = 0.199/0.199/0.199/0.000 ms 00:14:40.033 15:56:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:40.033 15:56:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@422 -- # return 0 00:14:40.033 15:56:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:40.033 15:56:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:40.033 15:56:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:40.033 15:56:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:40.033 15:56:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:40.033 15:56:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:40.033 15:56:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:40.033 15:56:08 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:14:40.033 15:56:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:40.033 15:56:08 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:40.033 15:56:08 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:14:40.033 15:56:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@481 -- # nvmfpid=3710061 00:14:40.033 15:56:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@482 -- # waitforlisten 3710061 00:14:40.033 15:56:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:14:40.033 15:56:08 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@829 -- # '[' -z 3710061 ']' 00:14:40.033 15:56:08 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:40.033 15:56:08 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:40.033 15:56:08 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:40.033 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:40.033 15:56:08 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:40.033 15:56:08 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:14:40.033 [2024-07-15 15:56:08.865830] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 00:14:40.033 [2024-07-15 15:56:08.865873] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:40.033 EAL: No free 2048 kB hugepages reported on node 1 00:14:40.033 [2024-07-15 15:56:08.925827] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:14:40.292 [2024-07-15 15:56:09.005871] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:40.292 [2024-07-15 15:56:09.005903] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
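The nvmf_tcp_init fixture traced above is worth condensing: one port of a two-port NIC is moved into a private network namespace to play the target, while its sibling stays in the default namespace as the initiator. A minimal sketch, run as root (the cvl_0_0/cvl_0_1 names, the 10.0.0.0/24 addressing, and the core mask are the values from this particular run, not requirements):

    NS=cvl_0_0_ns_spdk
    ip -4 addr flush cvl_0_0 && ip -4 addr flush cvl_0_1    # start from a clean slate
    ip netns add "$NS"
    ip link set cvl_0_0 netns "$NS"                         # target-side port into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                     # initiator side
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0 # target side
    ip link set cvl_0_1 up
    ip netns exec "$NS" ip link set cvl_0_0 up
    ip netns exec "$NS" ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2 && ip netns exec "$NS" ping -c 1 10.0.0.1
    # Launch the target inside the namespace, as nvmfappstart does here
    # (path relative to an SPDK checkout):
    ip netns exec "$NS" ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 &

Because /var/tmp/spdk.sock is a Unix-domain socket on the shared filesystem, rpc.py can keep driving the target from the default namespace, which is exactly what the calls below do.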
00:14:40.292 [2024-07-15 15:56:09.005910] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:40.292 [2024-07-15 15:56:09.005916] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:40.292 [2024-07-15 15:56:09.005922] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:40.292 [2024-07-15 15:56:09.005964] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:40.292 [2024-07-15 15:56:09.005982] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:14:40.292 [2024-07-15 15:56:09.005986] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:40.859 15:56:09 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:40.859 15:56:09 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@862 -- # return 0 00:14:40.859 15:56:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:40.859 15:56:09 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:40.859 15:56:09 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:14:40.859 15:56:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:40.859 15:56:09 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:14:41.117 [2024-07-15 15:56:09.859084] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:41.117 15:56:09 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:14:41.375 15:56:10 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:14:41.375 15:56:10 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:14:41.375 15:56:10 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:14:41.375 15:56:10 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:14:41.634 15:56:10 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:14:41.893 15:56:10 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=822e82ca-6904-4913-93f9-75fc4059ac65 00:14:41.893 15:56:10 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 822e82ca-6904-4913-93f9-75fc4059ac65 lvol 20 00:14:42.151 15:56:10 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=648c9afd-07ad-47b4-b03b-980c8f2170a8 00:14:42.151 15:56:10 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:14:42.151 15:56:11 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 648c9afd-07ad-47b4-b03b-980c8f2170a8 00:14:42.410 15:56:11 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 
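The rpc.py calls above build the whole export in one pass: TCP transport, two RAM-backed bdevs striped into raid0, an lvstore on top, one 20 MiB logical volume, and the NVMe-oF subsystem that exposes it. Condensed into a sketch (the UUIDs are printed by the create calls and differ per run):

    RPC=./scripts/rpc.py
    $RPC nvmf_create_transport -t tcp -o -u 8192
    $RPC bdev_malloc_create 64 512                  # -> Malloc0 (64 MiB, 512 B blocks)
    $RPC bdev_malloc_create 64 512                  # -> Malloc1
    $RPC bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'
    lvs=$($RPC bdev_lvol_create_lvstore raid0 lvs)  # prints the lvstore UUID
    lvol=$($RPC bdev_lvol_create -u "$lvs" lvol 20) # 20 MiB volume, prints its UUID
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420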
00:14:42.669 [2024-07-15 15:56:11.345846] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:42.669 15:56:11 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:42.669 15:56:11 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=3710549 00:14:42.669 15:56:11 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:14:42.669 15:56:11 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:14:42.669 EAL: No free 2048 kB hugepages reported on node 1 00:14:44.047 15:56:12 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 648c9afd-07ad-47b4-b03b-980c8f2170a8 MY_SNAPSHOT 00:14:44.047 15:56:12 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=9c585dd9-ada5-4cbf-9036-ecfaf7ff6f6b 00:14:44.047 15:56:12 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 648c9afd-07ad-47b4-b03b-980c8f2170a8 30 00:14:44.307 15:56:13 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 9c585dd9-ada5-4cbf-9036-ecfaf7ff6f6b MY_CLONE 00:14:44.566 15:56:13 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=689d5e5f-c77a-4472-be6e-0020d662fdd7 00:14:44.566 15:56:13 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 689d5e5f-c77a-4472-be6e-0020d662fdd7 00:14:45.133 15:56:13 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 3710549 00:14:53.251 Initializing NVMe Controllers 00:14:53.251 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:14:53.251 Controller IO queue size 128, less than required. 00:14:53.251 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:14:53.251 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:14:53.251 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:14:53.251 Initialization complete. Launching workers. 
00:14:53.251 ======================================================== 00:14:53.251 Latency(us) 00:14:53.251 Device Information : IOPS MiB/s Average min max 00:14:53.251 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 12376.30 48.34 10347.04 1780.24 60210.97 00:14:53.251 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 12344.20 48.22 10375.35 3697.17 50204.82 00:14:53.251 ======================================================== 00:14:53.251 Total : 24720.50 96.56 10361.17 1780.24 60210.97 00:14:53.251 00:14:53.251 15:56:21 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:14:53.251 15:56:22 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 648c9afd-07ad-47b4-b03b-980c8f2170a8 00:14:53.510 15:56:22 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 822e82ca-6904-4913-93f9-75fc4059ac65 00:14:53.769 15:56:22 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:14:53.769 15:56:22 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:14:53.769 15:56:22 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:14:53.769 15:56:22 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:53.769 15:56:22 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@117 -- # sync 00:14:53.769 15:56:22 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:53.769 15:56:22 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@120 -- # set +e 00:14:53.769 15:56:22 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:53.769 15:56:22 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:53.769 rmmod nvme_tcp 00:14:53.769 rmmod nvme_fabrics 00:14:53.769 rmmod nvme_keyring 00:14:53.769 15:56:22 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:53.769 15:56:22 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@124 -- # set -e 00:14:53.769 15:56:22 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@125 -- # return 0 00:14:53.769 15:56:22 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@489 -- # '[' -n 3710061 ']' 00:14:53.769 15:56:22 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@490 -- # killprocess 3710061 00:14:53.769 15:56:22 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@948 -- # '[' -z 3710061 ']' 00:14:53.769 15:56:22 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@952 -- # kill -0 3710061 00:14:53.770 15:56:22 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@953 -- # uname 00:14:53.770 15:56:22 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:53.770 15:56:22 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3710061 00:14:53.770 15:56:22 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:14:53.770 15:56:22 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:14:53.770 15:56:22 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3710061' 00:14:53.770 killing process with pid 3710061 00:14:53.770 15:56:22 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@967 -- # kill 3710061 00:14:53.770 15:56:22 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@972 -- # wait 3710061 00:14:54.029 15:56:22 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:54.029 
15:56:22 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:54.029 15:56:22 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:54.029 15:56:22 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:54.029 15:56:22 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:54.029 15:56:22 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:54.029 15:56:22 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:54.029 15:56:22 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:56.565 15:56:24 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:56.565 00:14:56.565 real 0m20.856s 00:14:56.565 user 1m3.572s 00:14:56.565 sys 0m6.167s 00:14:56.565 15:56:24 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:56.565 15:56:24 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:14:56.565 ************************************ 00:14:56.565 END TEST nvmf_lvol 00:14:56.565 ************************************ 00:14:56.565 15:56:24 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:14:56.566 15:56:24 nvmf_tcp -- nvmf/nvmf.sh@49 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:14:56.566 15:56:24 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:14:56.566 15:56:24 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:56.566 15:56:24 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:56.566 ************************************ 00:14:56.566 START TEST nvmf_lvs_grow 00:14:56.566 ************************************ 00:14:56.566 15:56:24 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:14:56.566 * Looking for test storage... 
00:14:56.566 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:56.566 15:56:25 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:56.566 15:56:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:14:56.566 15:56:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:56.566 15:56:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:56.566 15:56:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:56.566 15:56:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:56.566 15:56:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:56.566 15:56:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:56.566 15:56:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:56.566 15:56:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:56.566 15:56:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:56.566 15:56:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:56.566 15:56:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:14:56.566 15:56:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:14:56.566 15:56:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:56.566 15:56:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:56.566 15:56:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:56.566 15:56:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:56.566 15:56:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:56.566 15:56:25 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:56.566 15:56:25 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:56.566 15:56:25 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:56.566 15:56:25 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:56.566 15:56:25 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:56.566 15:56:25 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:56.566 15:56:25 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:14:56.566 15:56:25 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:56.566 15:56:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@47 -- # : 0 00:14:56.566 15:56:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:56.566 15:56:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:56.566 15:56:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:56.566 15:56:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:56.566 15:56:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:56.566 15:56:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:56.566 15:56:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:56.566 15:56:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:56.566 15:56:25 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:56.566 15:56:25 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:56.566 15:56:25 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:14:56.566 15:56:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:56.566 15:56:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:56.566 15:56:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:56.566 15:56:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:56.566 15:56:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:56.566 15:56:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@628 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:14:56.566 15:56:25 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:56.566 15:56:25 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:56.566 15:56:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:56.566 15:56:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:56.566 15:56:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@285 -- # xtrace_disable 00:14:56.566 15:56:25 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:15:01.840 15:56:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:01.840 15:56:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@291 -- # pci_devs=() 00:15:01.840 15:56:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:01.840 15:56:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:01.840 15:56:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:01.840 15:56:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:01.840 15:56:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:01.840 15:56:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@295 -- # net_devs=() 00:15:01.840 15:56:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:01.840 15:56:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@296 -- # e810=() 00:15:01.840 15:56:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@296 -- # local -ga e810 00:15:01.840 15:56:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@297 -- # x722=() 00:15:01.840 15:56:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@297 -- # local -ga x722 00:15:01.840 15:56:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@298 -- # mlx=() 00:15:01.840 15:56:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@298 -- # local -ga mlx 00:15:01.840 15:56:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:01.840 15:56:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:01.840 15:56:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:01.840 15:56:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:01.840 15:56:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:01.840 15:56:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:01.840 15:56:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:01.840 15:56:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:01.841 15:56:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:01.841 15:56:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:01.841 15:56:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:01.841 15:56:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:01.841 15:56:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:15:01.841 15:56:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:15:01.841 15:56:29 nvmf_tcp.nvmf_lvs_grow -- 
nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:15:01.841 15:56:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:15:01.841 15:56:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:01.841 15:56:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:01.841 15:56:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:15:01.841 Found 0000:86:00.0 (0x8086 - 0x159b) 00:15:01.841 15:56:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:01.841 15:56:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:01.841 15:56:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:01.841 15:56:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:01.841 15:56:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:01.841 15:56:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:01.841 15:56:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:15:01.841 Found 0000:86:00.1 (0x8086 - 0x159b) 00:15:01.841 15:56:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:01.841 15:56:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:01.841 15:56:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:01.841 15:56:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:01.841 15:56:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:01.841 15:56:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:01.841 15:56:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:15:01.841 15:56:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:15:01.841 15:56:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:01.841 15:56:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:01.841 15:56:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:01.841 15:56:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:01.841 15:56:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:01.841 15:56:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:01.841 15:56:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:01.841 15:56:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:15:01.841 Found net devices under 0000:86:00.0: cvl_0_0 00:15:01.841 15:56:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:01.841 15:56:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:01.841 15:56:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:01.841 15:56:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:01.841 15:56:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:01.841 15:56:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:01.841 15:56:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 
0 )) 00:15:01.841 15:56:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:01.841 15:56:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:15:01.841 Found net devices under 0000:86:00.1: cvl_0_1 00:15:01.841 15:56:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:01.841 15:56:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:15:01.841 15:56:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # is_hw=yes 00:15:01.841 15:56:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:15:01.841 15:56:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:15:01.841 15:56:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:15:01.841 15:56:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:01.841 15:56:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:01.841 15:56:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:01.841 15:56:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:15:01.841 15:56:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:01.841 15:56:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:01.841 15:56:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:15:01.841 15:56:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:01.841 15:56:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:01.841 15:56:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:15:01.841 15:56:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:15:01.841 15:56:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:15:01.841 15:56:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:01.841 15:56:30 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:01.841 15:56:30 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:01.841 15:56:30 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:15:01.841 15:56:30 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:01.841 15:56:30 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:01.841 15:56:30 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:01.841 15:56:30 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:15:01.841 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:01.841 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.279 ms 00:15:01.841 00:15:01.841 --- 10.0.0.2 ping statistics --- 00:15:01.841 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:01.841 rtt min/avg/max/mdev = 0.279/0.279/0.279/0.000 ms 00:15:01.841 15:56:30 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:01.841 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:01.841 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.141 ms 00:15:01.841 00:15:01.841 --- 10.0.0.1 ping statistics --- 00:15:01.841 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:01.841 rtt min/avg/max/mdev = 0.141/0.141/0.141/0.000 ms 00:15:01.841 15:56:30 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:01.841 15:56:30 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@422 -- # return 0 00:15:01.841 15:56:30 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:01.841 15:56:30 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:01.841 15:56:30 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:01.841 15:56:30 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:01.841 15:56:30 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:01.841 15:56:30 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:01.841 15:56:30 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:01.841 15:56:30 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:15:01.841 15:56:30 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:01.841 15:56:30 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:01.841 15:56:30 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:15:01.841 15:56:30 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:15:01.841 15:56:30 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@481 -- # nvmfpid=3715898 00:15:01.841 15:56:30 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@482 -- # waitforlisten 3715898 00:15:01.841 15:56:30 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@829 -- # '[' -z 3715898 ']' 00:15:01.841 15:56:30 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:01.841 15:56:30 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:01.841 15:56:30 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:01.841 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:01.841 15:56:30 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:01.841 15:56:30 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:15:01.841 [2024-07-15 15:56:30.232588] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 00:15:01.841 [2024-07-15 15:56:30.232633] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:01.841 EAL: No free 2048 kB hugepages reported on node 1 00:15:01.841 [2024-07-15 15:56:30.285902] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:01.841 [2024-07-15 15:56:30.367346] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:01.841 [2024-07-15 15:56:30.367381] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
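waitforlisten, echoed above, gates the rest of the suite on the target's RPC socket actually answering. Outside the harness it can be approximated with a simple poll; this is a sketch, not the harness's exact implementation, and rpc_get_methods is chosen only because it is a cheap call every SPDK app serves:

    # Block until /var/tmp/spdk.sock accepts RPCs, or give up after ~30 s.
    for _ in $(seq 60); do
        ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1 && break
        sleep 0.5
    done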
00:15:01.841 [2024-07-15 15:56:30.367389] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:01.841 [2024-07-15 15:56:30.367395] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:01.841 [2024-07-15 15:56:30.367407] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:01.841 [2024-07-15 15:56:30.367424] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:02.459 15:56:31 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:02.459 15:56:31 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@862 -- # return 0 00:15:02.459 15:56:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:02.459 15:56:31 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:02.459 15:56:31 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:15:02.459 15:56:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:02.459 15:56:31 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:15:02.459 [2024-07-15 15:56:31.231725] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:02.459 15:56:31 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:15:02.459 15:56:31 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:15:02.459 15:56:31 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:02.459 15:56:31 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:15:02.459 ************************************ 00:15:02.459 START TEST lvs_grow_clean 00:15:02.459 ************************************ 00:15:02.459 15:56:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1123 -- # lvs_grow 00:15:02.459 15:56:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:15:02.459 15:56:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:15:02.459 15:56:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:15:02.459 15:56:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:15:02.459 15:56:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:15:02.459 15:56:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:15:02.459 15:56:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:15:02.459 15:56:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:15:02.459 15:56:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:15:02.718 15:56:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # 
aio_bdev=aio_bdev 00:15:02.718 15:56:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:15:02.718 15:56:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=5fdd629a-9f64-4436-9c31-ef9cac5899b9 00:15:02.977 15:56:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5fdd629a-9f64-4436-9c31-ef9cac5899b9 00:15:02.977 15:56:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:15:02.977 15:56:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:15:02.977 15:56:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:15:02.977 15:56:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 5fdd629a-9f64-4436-9c31-ef9cac5899b9 lvol 150 00:15:03.236 15:56:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=ead2afe2-6a81-4465-a71b-de2f41d746fc 00:15:03.236 15:56:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:15:03.236 15:56:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:15:03.236 [2024-07-15 15:56:32.153656] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:15:03.236 [2024-07-15 15:56:32.153705] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:15:03.236 true 00:15:03.494 15:56:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5fdd629a-9f64-4436-9c31-ef9cac5899b9 00:15:03.494 15:56:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:15:03.494 15:56:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:15:03.494 15:56:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:15:03.751 15:56:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 ead2afe2-6a81-4465-a71b-de2f41d746fc 00:15:04.009 15:56:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:15:04.009 [2024-07-15 15:56:32.855764] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:04.009 15:56:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:15:04.268 15:56:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=3716406 00:15:04.268 15:56:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:15:04.268 15:56:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:15:04.268 15:56:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 3716406 /var/tmp/bdevperf.sock 00:15:04.268 15:56:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@829 -- # '[' -z 3716406 ']' 00:15:04.268 15:56:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:04.268 15:56:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:04.268 15:56:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:04.268 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:04.268 15:56:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:04.268 15:56:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:15:04.268 [2024-07-15 15:56:33.081160] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 
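The trace above launches bdevperf with -z so it idles until driven over its RPC socket, then blocks in waitforlisten until /var/tmp/bdevperf.sock accepts connections. A minimal sketch of that launch-attach-run pattern, assuming SPDK_DIR points at the checkout shown in the log and using rpc_get_methods purely as a liveness probe (the real waitforlisten helper in autotest_common.sh does more bookkeeping):

    # Start bdevperf idle (-z) on a private RPC socket; flags copied from the log.
    "$SPDK_DIR/build/examples/bdevperf" -r /var/tmp/bdevperf.sock \
        -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z &
    bdevperf_pid=$!
    # Poll until the socket answers RPCs, roughly what waitforlisten does.
    until "$SPDK_DIR/scripts/rpc.py" -s /var/tmp/bdevperf.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.1
    done
    # Attach the exported namespace, then kick off the configured workload.
    "$SPDK_DIR/scripts/rpc.py" -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
        -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0
    "$SPDK_DIR/examples/bdev/bdevperf/bdevperf.py" -s /var/tmp/bdevperf.sock perform_tests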
00:15:04.268 [2024-07-15 15:56:33.081207] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3716406 ] 00:15:04.268 EAL: No free 2048 kB hugepages reported on node 1 00:15:04.268 [2024-07-15 15:56:33.133798] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:04.526 [2024-07-15 15:56:33.213689] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:05.093 15:56:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:05.093 15:56:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@862 -- # return 0 00:15:05.093 15:56:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:15:05.660 Nvme0n1 00:15:05.661 15:56:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:15:05.661 [ 00:15:05.661 { 00:15:05.661 "name": "Nvme0n1", 00:15:05.661 "aliases": [ 00:15:05.661 "ead2afe2-6a81-4465-a71b-de2f41d746fc" 00:15:05.661 ], 00:15:05.661 "product_name": "NVMe disk", 00:15:05.661 "block_size": 4096, 00:15:05.661 "num_blocks": 38912, 00:15:05.661 "uuid": "ead2afe2-6a81-4465-a71b-de2f41d746fc", 00:15:05.661 "assigned_rate_limits": { 00:15:05.661 "rw_ios_per_sec": 0, 00:15:05.661 "rw_mbytes_per_sec": 0, 00:15:05.661 "r_mbytes_per_sec": 0, 00:15:05.661 "w_mbytes_per_sec": 0 00:15:05.661 }, 00:15:05.661 "claimed": false, 00:15:05.661 "zoned": false, 00:15:05.661 "supported_io_types": { 00:15:05.661 "read": true, 00:15:05.661 "write": true, 00:15:05.661 "unmap": true, 00:15:05.661 "flush": true, 00:15:05.661 "reset": true, 00:15:05.661 "nvme_admin": true, 00:15:05.661 "nvme_io": true, 00:15:05.661 "nvme_io_md": false, 00:15:05.661 "write_zeroes": true, 00:15:05.661 "zcopy": false, 00:15:05.661 "get_zone_info": false, 00:15:05.661 "zone_management": false, 00:15:05.661 "zone_append": false, 00:15:05.661 "compare": true, 00:15:05.661 "compare_and_write": true, 00:15:05.661 "abort": true, 00:15:05.661 "seek_hole": false, 00:15:05.661 "seek_data": false, 00:15:05.661 "copy": true, 00:15:05.661 "nvme_iov_md": false 00:15:05.661 }, 00:15:05.661 "memory_domains": [ 00:15:05.661 { 00:15:05.661 "dma_device_id": "system", 00:15:05.661 "dma_device_type": 1 00:15:05.661 } 00:15:05.661 ], 00:15:05.661 "driver_specific": { 00:15:05.661 "nvme": [ 00:15:05.661 { 00:15:05.661 "trid": { 00:15:05.661 "trtype": "TCP", 00:15:05.661 "adrfam": "IPv4", 00:15:05.661 "traddr": "10.0.0.2", 00:15:05.661 "trsvcid": "4420", 00:15:05.661 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:15:05.661 }, 00:15:05.661 "ctrlr_data": { 00:15:05.661 "cntlid": 1, 00:15:05.661 "vendor_id": "0x8086", 00:15:05.661 "model_number": "SPDK bdev Controller", 00:15:05.661 "serial_number": "SPDK0", 00:15:05.661 "firmware_revision": "24.09", 00:15:05.661 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:15:05.661 "oacs": { 00:15:05.661 "security": 0, 00:15:05.661 "format": 0, 00:15:05.661 "firmware": 0, 00:15:05.661 "ns_manage": 0 00:15:05.661 }, 00:15:05.661 "multi_ctrlr": true, 00:15:05.661 "ana_reporting": false 00:15:05.661 }, 
00:15:05.661 "vs": { 00:15:05.661 "nvme_version": "1.3" 00:15:05.661 }, 00:15:05.661 "ns_data": { 00:15:05.661 "id": 1, 00:15:05.661 "can_share": true 00:15:05.661 } 00:15:05.661 } 00:15:05.661 ], 00:15:05.661 "mp_policy": "active_passive" 00:15:05.661 } 00:15:05.661 } 00:15:05.661 ] 00:15:05.661 15:56:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=3716644 00:15:05.661 15:56:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:15:05.661 15:56:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:15:05.661 Running I/O for 10 seconds... 00:15:07.037 Latency(us) 00:15:07.037 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:07.037 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:07.037 Nvme0n1 : 1.00 22030.00 86.05 0.00 0.00 0.00 0.00 0.00 00:15:07.037 =================================================================================================================== 00:15:07.037 Total : 22030.00 86.05 0.00 0.00 0.00 0.00 0.00 00:15:07.037 00:15:07.603 15:56:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 5fdd629a-9f64-4436-9c31-ef9cac5899b9 00:15:07.861 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:07.861 Nvme0n1 : 2.00 22215.00 86.78 0.00 0.00 0.00 0.00 0.00 00:15:07.861 =================================================================================================================== 00:15:07.861 Total : 22215.00 86.78 0.00 0.00 0.00 0.00 0.00 00:15:07.861 00:15:07.861 true 00:15:07.861 15:56:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5fdd629a-9f64-4436-9c31-ef9cac5899b9 00:15:07.861 15:56:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:15:08.139 15:56:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:15:08.139 15:56:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:15:08.139 15:56:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 3716644 00:15:08.705 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:08.705 Nvme0n1 : 3.00 22263.33 86.97 0.00 0.00 0.00 0.00 0.00 00:15:08.705 =================================================================================================================== 00:15:08.705 Total : 22263.33 86.97 0.00 0.00 0.00 0.00 0.00 00:15:08.705 00:15:09.639 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:09.639 Nvme0n1 : 4.00 22323.50 87.20 0.00 0.00 0.00 0.00 0.00 00:15:09.639 =================================================================================================================== 00:15:09.639 Total : 22323.50 87.20 0.00 0.00 0.00 0.00 0.00 00:15:09.639 00:15:11.018 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:11.018 Nvme0n1 : 5.00 22372.40 87.39 0.00 0.00 0.00 0.00 0.00 00:15:11.018 =================================================================================================================== 00:15:11.018 
Total : 22372.40 87.39 0.00 0.00 0.00 0.00 0.00 00:15:11.018 00:15:11.955 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:11.955 Nvme0n1 : 6.00 22359.67 87.34 0.00 0.00 0.00 0.00 0.00 00:15:11.955 =================================================================================================================== 00:15:11.955 Total : 22359.67 87.34 0.00 0.00 0.00 0.00 0.00 00:15:11.955 00:15:12.891 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:12.891 Nvme0n1 : 7.00 22391.71 87.47 0.00 0.00 0.00 0.00 0.00 00:15:12.891 =================================================================================================================== 00:15:12.891 Total : 22391.71 87.47 0.00 0.00 0.00 0.00 0.00 00:15:12.891 00:15:13.829 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:13.829 Nvme0n1 : 8.00 22409.75 87.54 0.00 0.00 0.00 0.00 0.00 00:15:13.829 =================================================================================================================== 00:15:13.829 Total : 22409.75 87.54 0.00 0.00 0.00 0.00 0.00 00:15:13.829 00:15:14.767 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:14.767 Nvme0n1 : 9.00 22434.44 87.63 0.00 0.00 0.00 0.00 0.00 00:15:14.767 =================================================================================================================== 00:15:14.767 Total : 22434.44 87.63 0.00 0.00 0.00 0.00 0.00 00:15:14.767 00:15:15.713 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:15.713 Nvme0n1 : 10.00 22452.60 87.71 0.00 0.00 0.00 0.00 0.00 00:15:15.713 =================================================================================================================== 00:15:15.713 Total : 22452.60 87.71 0.00 0.00 0.00 0.00 0.00 00:15:15.713 00:15:15.713 00:15:15.713 Latency(us) 00:15:15.713 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:15.713 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:15.713 Nvme0n1 : 10.01 22452.11 87.70 0.00 0.00 5696.96 4416.56 15044.79 00:15:15.713 =================================================================================================================== 00:15:15.713 Total : 22452.11 87.70 0.00 0.00 5696.96 4416.56 15044.79 00:15:15.713 0 00:15:15.713 15:56:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 3716406 00:15:15.713 15:56:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@948 -- # '[' -z 3716406 ']' 00:15:15.713 15:56:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@952 -- # kill -0 3716406 00:15:15.713 15:56:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@953 -- # uname 00:15:15.713 15:56:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:15.713 15:56:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3716406 00:15:15.977 15:56:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:15:15.977 15:56:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:15:15.977 15:56:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3716406' 00:15:15.977 killing process with pid 3716406 00:15:15.977 15:56:44 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@967 -- # kill 3716406 00:15:15.977 Received shutdown signal, test time was about 10.000000 seconds 00:15:15.977 00:15:15.977 Latency(us) 00:15:15.977 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:15.977 =================================================================================================================== 00:15:15.977 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:15.977 15:56:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # wait 3716406 00:15:15.977 15:56:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:15:16.235 15:56:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:15:16.494 15:56:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5fdd629a-9f64-4436-9c31-ef9cac5899b9 00:15:16.494 15:56:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:15:16.494 15:56:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:15:16.494 15:56:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:15:16.494 15:56:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:15:16.752 [2024-07-15 15:56:45.565855] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:15:16.752 15:56:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5fdd629a-9f64-4436-9c31-ef9cac5899b9 00:15:16.752 15:56:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@648 -- # local es=0 00:15:16.753 15:56:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5fdd629a-9f64-4436-9c31-ef9cac5899b9 00:15:16.753 15:56:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:16.753 15:56:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:16.753 15:56:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:16.753 15:56:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:16.753 15:56:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:16.753 15:56:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:16.753 15:56:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # 
arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:16.753 15:56:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:15:16.753 15:56:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5fdd629a-9f64-4436-9c31-ef9cac5899b9 00:15:17.012 request: 00:15:17.012 { 00:15:17.012 "uuid": "5fdd629a-9f64-4436-9c31-ef9cac5899b9", 00:15:17.012 "method": "bdev_lvol_get_lvstores", 00:15:17.012 "req_id": 1 00:15:17.012 } 00:15:17.012 Got JSON-RPC error response 00:15:17.012 response: 00:15:17.012 { 00:15:17.012 "code": -19, 00:15:17.012 "message": "No such device" 00:15:17.012 } 00:15:17.012 15:56:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@651 -- # es=1 00:15:17.012 15:56:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:15:17.012 15:56:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:15:17.012 15:56:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:15:17.012 15:56:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:15:17.012 aio_bdev 00:15:17.012 15:56:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev ead2afe2-6a81-4465-a71b-de2f41d746fc 00:15:17.012 15:56:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@897 -- # local bdev_name=ead2afe2-6a81-4465-a71b-de2f41d746fc 00:15:17.012 15:56:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:15:17.012 15:56:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@899 -- # local i 00:15:17.012 15:56:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:15:17.012 15:56:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:15:17.012 15:56:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:15:17.270 15:56:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b ead2afe2-6a81-4465-a71b-de2f41d746fc -t 2000 00:15:17.529 [ 00:15:17.529 { 00:15:17.529 "name": "ead2afe2-6a81-4465-a71b-de2f41d746fc", 00:15:17.529 "aliases": [ 00:15:17.529 "lvs/lvol" 00:15:17.529 ], 00:15:17.529 "product_name": "Logical Volume", 00:15:17.529 "block_size": 4096, 00:15:17.529 "num_blocks": 38912, 00:15:17.529 "uuid": "ead2afe2-6a81-4465-a71b-de2f41d746fc", 00:15:17.529 "assigned_rate_limits": { 00:15:17.529 "rw_ios_per_sec": 0, 00:15:17.529 "rw_mbytes_per_sec": 0, 00:15:17.529 "r_mbytes_per_sec": 0, 00:15:17.529 "w_mbytes_per_sec": 0 00:15:17.529 }, 00:15:17.529 "claimed": false, 00:15:17.529 "zoned": false, 00:15:17.529 "supported_io_types": { 00:15:17.529 "read": true, 00:15:17.529 "write": true, 00:15:17.529 "unmap": true, 00:15:17.529 "flush": false, 00:15:17.529 "reset": true, 00:15:17.529 "nvme_admin": false, 00:15:17.529 "nvme_io": false, 00:15:17.529 
"nvme_io_md": false, 00:15:17.529 "write_zeroes": true, 00:15:17.529 "zcopy": false, 00:15:17.529 "get_zone_info": false, 00:15:17.529 "zone_management": false, 00:15:17.529 "zone_append": false, 00:15:17.530 "compare": false, 00:15:17.530 "compare_and_write": false, 00:15:17.530 "abort": false, 00:15:17.530 "seek_hole": true, 00:15:17.530 "seek_data": true, 00:15:17.530 "copy": false, 00:15:17.530 "nvme_iov_md": false 00:15:17.530 }, 00:15:17.530 "driver_specific": { 00:15:17.530 "lvol": { 00:15:17.530 "lvol_store_uuid": "5fdd629a-9f64-4436-9c31-ef9cac5899b9", 00:15:17.530 "base_bdev": "aio_bdev", 00:15:17.530 "thin_provision": false, 00:15:17.530 "num_allocated_clusters": 38, 00:15:17.530 "snapshot": false, 00:15:17.530 "clone": false, 00:15:17.530 "esnap_clone": false 00:15:17.530 } 00:15:17.530 } 00:15:17.530 } 00:15:17.530 ] 00:15:17.530 15:56:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # return 0 00:15:17.530 15:56:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5fdd629a-9f64-4436-9c31-ef9cac5899b9 00:15:17.530 15:56:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:15:17.530 15:56:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:15:17.530 15:56:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5fdd629a-9f64-4436-9c31-ef9cac5899b9 00:15:17.530 15:56:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:15:17.788 15:56:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:15:17.788 15:56:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete ead2afe2-6a81-4465-a71b-de2f41d746fc 00:15:18.047 15:56:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 5fdd629a-9f64-4436-9c31-ef9cac5899b9 00:15:18.047 15:56:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:15:18.306 15:56:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:15:18.306 00:15:18.306 real 0m15.857s 00:15:18.306 user 0m15.535s 00:15:18.306 sys 0m1.423s 00:15:18.306 15:56:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:18.306 15:56:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:15:18.306 ************************************ 00:15:18.306 END TEST lvs_grow_clean 00:15:18.306 ************************************ 00:15:18.306 15:56:47 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1142 -- # return 0 00:15:18.306 15:56:47 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:15:18.306 15:56:47 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:15:18.306 15:56:47 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # xtrace_disable 
00:15:18.306 15:56:47 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:15:18.306 ************************************ 00:15:18.306 START TEST lvs_grow_dirty 00:15:18.306 ************************************ 00:15:18.306 15:56:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1123 -- # lvs_grow dirty 00:15:18.306 15:56:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:15:18.306 15:56:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:15:18.306 15:56:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:15:18.306 15:56:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:15:18.306 15:56:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:15:18.306 15:56:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:15:18.306 15:56:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:15:18.306 15:56:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:15:18.306 15:56:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:15:18.565 15:56:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:15:18.565 15:56:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:15:18.824 15:56:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=749e3d78-8a65-42f6-9bf2-9c7addbee114 00:15:18.824 15:56:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 749e3d78-8a65-42f6-9bf2-9c7addbee114 00:15:18.824 15:56:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:15:18.824 15:56:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:15:18.824 15:56:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:15:18.824 15:56:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 749e3d78-8a65-42f6-9bf2-9c7addbee114 lvol 150 00:15:19.083 15:56:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=85f08095-be51-46d5-8f2e-f6f8e277928b 00:15:19.083 15:56:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:15:19.083 15:56:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:15:19.342 
[2024-07-15 15:56:48.081940] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:15:19.342 [2024-07-15 15:56:48.081987] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:15:19.342 true 00:15:19.342 15:56:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 749e3d78-8a65-42f6-9bf2-9c7addbee114 00:15:19.342 15:56:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:15:19.342 15:56:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:15:19.342 15:56:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:15:19.601 15:56:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 85f08095-be51-46d5-8f2e-f6f8e277928b 00:15:19.860 15:56:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:15:19.860 [2024-07-15 15:56:48.755975] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:19.860 15:56:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:15:20.121 15:56:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=3719006 00:15:20.121 15:56:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:15:20.121 15:56:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:15:20.121 15:56:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 3719006 /var/tmp/bdevperf.sock 00:15:20.121 15:56:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@829 -- # '[' -z 3719006 ']' 00:15:20.121 15:56:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:20.121 15:56:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:20.121 15:56:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:20.121 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
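As in the clean pass, the freshly created lvol is exported over NVMe/TCP before bdevperf (waiting above) attaches to it: a subsystem is created, the lvol UUID is added as a namespace, and data plus discovery listeners go up on 10.0.0.2:4420. The wiring, condensed from the rpc.py calls in the trace; the UUID shown is this run's lvol, and any valid bdev name or UUID would do:

    rpc() { "$SPDK_DIR/scripts/rpc.py" "$@"; }
    rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 85f08095-be51-46d5-8f2e-f6f8e277928b
    rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
    rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420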
00:15:20.121 15:56:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:20.121 15:56:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:15:20.121 [2024-07-15 15:56:48.960337] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 00:15:20.121 [2024-07-15 15:56:48.960387] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3719006 ] 00:15:20.121 EAL: No free 2048 kB hugepages reported on node 1 00:15:20.121 [2024-07-15 15:56:49.012540] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:20.424 [2024-07-15 15:56:49.092607] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:20.991 15:56:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:20.991 15:56:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # return 0 00:15:20.991 15:56:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:15:21.250 Nvme0n1 00:15:21.250 15:56:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:15:21.508 [ 00:15:21.508 { 00:15:21.508 "name": "Nvme0n1", 00:15:21.508 "aliases": [ 00:15:21.508 "85f08095-be51-46d5-8f2e-f6f8e277928b" 00:15:21.508 ], 00:15:21.508 "product_name": "NVMe disk", 00:15:21.508 "block_size": 4096, 00:15:21.508 "num_blocks": 38912, 00:15:21.508 "uuid": "85f08095-be51-46d5-8f2e-f6f8e277928b", 00:15:21.508 "assigned_rate_limits": { 00:15:21.508 "rw_ios_per_sec": 0, 00:15:21.508 "rw_mbytes_per_sec": 0, 00:15:21.508 "r_mbytes_per_sec": 0, 00:15:21.508 "w_mbytes_per_sec": 0 00:15:21.508 }, 00:15:21.508 "claimed": false, 00:15:21.508 "zoned": false, 00:15:21.508 "supported_io_types": { 00:15:21.508 "read": true, 00:15:21.508 "write": true, 00:15:21.508 "unmap": true, 00:15:21.508 "flush": true, 00:15:21.508 "reset": true, 00:15:21.508 "nvme_admin": true, 00:15:21.508 "nvme_io": true, 00:15:21.508 "nvme_io_md": false, 00:15:21.508 "write_zeroes": true, 00:15:21.508 "zcopy": false, 00:15:21.508 "get_zone_info": false, 00:15:21.508 "zone_management": false, 00:15:21.508 "zone_append": false, 00:15:21.508 "compare": true, 00:15:21.508 "compare_and_write": true, 00:15:21.508 "abort": true, 00:15:21.508 "seek_hole": false, 00:15:21.508 "seek_data": false, 00:15:21.508 "copy": true, 00:15:21.508 "nvme_iov_md": false 00:15:21.508 }, 00:15:21.509 "memory_domains": [ 00:15:21.509 { 00:15:21.509 "dma_device_id": "system", 00:15:21.509 "dma_device_type": 1 00:15:21.509 } 00:15:21.509 ], 00:15:21.509 "driver_specific": { 00:15:21.509 "nvme": [ 00:15:21.509 { 00:15:21.509 "trid": { 00:15:21.509 "trtype": "TCP", 00:15:21.509 "adrfam": "IPv4", 00:15:21.509 "traddr": "10.0.0.2", 00:15:21.509 "trsvcid": "4420", 00:15:21.509 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:15:21.509 }, 00:15:21.509 "ctrlr_data": { 00:15:21.509 "cntlid": 1, 00:15:21.509 "vendor_id": "0x8086", 00:15:21.509 "model_number": "SPDK bdev Controller", 00:15:21.509 "serial_number": "SPDK0", 
00:15:21.509 "firmware_revision": "24.09", 00:15:21.509 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:15:21.509 "oacs": { 00:15:21.509 "security": 0, 00:15:21.509 "format": 0, 00:15:21.509 "firmware": 0, 00:15:21.509 "ns_manage": 0 00:15:21.509 }, 00:15:21.509 "multi_ctrlr": true, 00:15:21.509 "ana_reporting": false 00:15:21.509 }, 00:15:21.509 "vs": { 00:15:21.509 "nvme_version": "1.3" 00:15:21.509 }, 00:15:21.509 "ns_data": { 00:15:21.509 "id": 1, 00:15:21.509 "can_share": true 00:15:21.509 } 00:15:21.509 } 00:15:21.509 ], 00:15:21.509 "mp_policy": "active_passive" 00:15:21.509 } 00:15:21.509 } 00:15:21.509 ] 00:15:21.509 15:56:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=3719238 00:15:21.509 15:56:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:15:21.509 15:56:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:15:21.768 Running I/O for 10 seconds... 00:15:22.705 Latency(us) 00:15:22.705 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:22.705 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:22.705 Nvme0n1 : 1.00 22114.00 86.38 0.00 0.00 0.00 0.00 0.00 00:15:22.705 =================================================================================================================== 00:15:22.705 Total : 22114.00 86.38 0.00 0.00 0.00 0.00 0.00 00:15:22.705 00:15:23.640 15:56:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 749e3d78-8a65-42f6-9bf2-9c7addbee114 00:15:23.641 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:23.641 Nvme0n1 : 2.00 22261.00 86.96 0.00 0.00 0.00 0.00 0.00 00:15:23.641 =================================================================================================================== 00:15:23.641 Total : 22261.00 86.96 0.00 0.00 0.00 0.00 0.00 00:15:23.641 00:15:23.641 true 00:15:23.641 15:56:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 749e3d78-8a65-42f6-9bf2-9c7addbee114 00:15:23.641 15:56:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:15:23.899 15:56:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:15:23.899 15:56:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:15:23.899 15:56:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 3719238 00:15:24.834 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:24.834 Nvme0n1 : 3.00 22307.33 87.14 0.00 0.00 0.00 0.00 0.00 00:15:24.834 =================================================================================================================== 00:15:24.834 Total : 22307.33 87.14 0.00 0.00 0.00 0.00 0.00 00:15:24.834 00:15:25.770 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:25.770 Nvme0n1 : 4.00 22366.50 87.37 0.00 0.00 0.00 0.00 0.00 00:15:25.770 =================================================================================================================== 00:15:25.770 Total : 22366.50 87.37 0.00 
0.00 0.00 0.00 0.00 00:15:25.770 00:15:26.708 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:26.708 Nvme0n1 : 5.00 22410.00 87.54 0.00 0.00 0.00 0.00 0.00 00:15:26.708 =================================================================================================================== 00:15:26.708 Total : 22410.00 87.54 0.00 0.00 0.00 0.00 0.00 00:15:26.708 00:15:27.645 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:27.645 Nvme0n1 : 6.00 22447.00 87.68 0.00 0.00 0.00 0.00 0.00 00:15:27.645 =================================================================================================================== 00:15:27.645 Total : 22447.00 87.68 0.00 0.00 0.00 0.00 0.00 00:15:27.645 00:15:28.582 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:28.582 Nvme0n1 : 7.00 22479.14 87.81 0.00 0.00 0.00 0.00 0.00 00:15:28.582 =================================================================================================================== 00:15:28.582 Total : 22479.14 87.81 0.00 0.00 0.00 0.00 0.00 00:15:28.582 00:15:29.955 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:29.955 Nvme0n1 : 8.00 22502.25 87.90 0.00 0.00 0.00 0.00 0.00 00:15:29.955 =================================================================================================================== 00:15:29.955 Total : 22502.25 87.90 0.00 0.00 0.00 0.00 0.00 00:15:29.955 00:15:30.888 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:30.888 Nvme0n1 : 9.00 22519.33 87.97 0.00 0.00 0.00 0.00 0.00 00:15:30.888 =================================================================================================================== 00:15:30.888 Total : 22519.33 87.97 0.00 0.00 0.00 0.00 0.00 00:15:30.888 00:15:31.823 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:31.823 Nvme0n1 : 10.00 22499.40 87.89 0.00 0.00 0.00 0.00 0.00 00:15:31.823 =================================================================================================================== 00:15:31.823 Total : 22499.40 87.89 0.00 0.00 0.00 0.00 0.00 00:15:31.823 00:15:31.823 00:15:31.823 Latency(us) 00:15:31.823 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:31.823 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:31.823 Nvme0n1 : 10.01 22499.17 87.89 0.00 0.00 5685.04 4359.57 12423.35 00:15:31.823 =================================================================================================================== 00:15:31.823 Total : 22499.17 87.89 0.00 0.00 5685.04 4359.57 12423.35 00:15:31.823 0 00:15:31.823 15:57:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 3719006 00:15:31.823 15:57:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@948 -- # '[' -z 3719006 ']' 00:15:31.823 15:57:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@952 -- # kill -0 3719006 00:15:31.823 15:57:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@953 -- # uname 00:15:31.823 15:57:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:31.823 15:57:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3719006 00:15:31.823 15:57:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:15:31.823 15:57:00 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:15:31.823 15:57:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3719006' 00:15:31.823 killing process with pid 3719006 00:15:31.823 15:57:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@967 -- # kill 3719006 00:15:31.823 Received shutdown signal, test time was about 10.000000 seconds 00:15:31.823 00:15:31.823 Latency(us) 00:15:31.823 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:31.823 =================================================================================================================== 00:15:31.823 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:31.823 15:57:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # wait 3719006 00:15:31.823 15:57:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:15:32.081 15:57:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:15:32.339 15:57:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 749e3d78-8a65-42f6-9bf2-9c7addbee114 00:15:32.339 15:57:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:15:32.599 15:57:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:15:32.599 15:57:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:15:32.599 15:57:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 3715898 00:15:32.599 15:57:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 3715898 00:15:32.599 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 3715898 Killed "${NVMF_APP[@]}" "$@" 00:15:32.599 15:57:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:15:32.599 15:57:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:15:32.599 15:57:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:32.599 15:57:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:32.599 15:57:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:15:32.599 15:57:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@481 -- # nvmfpid=3721157 00:15:32.599 15:57:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@482 -- # waitforlisten 3721157 00:15:32.599 15:57:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:15:32.599 15:57:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@829 -- # '[' -z 3721157 ']' 00:15:32.599 15:57:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:32.599 15:57:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- 
common/autotest_common.sh@834 -- # local max_retries=100 00:15:32.599 15:57:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:32.599 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:32.599 15:57:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:32.599 15:57:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:15:32.599 [2024-07-15 15:57:01.368586] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 00:15:32.599 [2024-07-15 15:57:01.368631] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:32.599 EAL: No free 2048 kB hugepages reported on node 1 00:15:32.599 [2024-07-15 15:57:01.426665] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:32.599 [2024-07-15 15:57:01.506133] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:32.599 [2024-07-15 15:57:01.506168] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:32.599 [2024-07-15 15:57:01.506176] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:32.599 [2024-07-15 15:57:01.506181] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:32.599 [2024-07-15 15:57:01.506187] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
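The dirty branch earlier sent kill -9 to the original target (pid 3715898), so the lvstore on disk was never closed cleanly; the trace above is the replacement nvmf_tgt coming up inside the test's network namespace, which is what later forces blobstore recovery when the AIO bdev is re-created. A sketch of that restart, reusing the poll-until-listening idiom; the netns name and flags are copied from the log, and rpc_get_methods is again just a liveness probe:

    rpc() { "$SPDK_DIR/scripts/rpc.py" "$@"; }          # default socket /var/tmp/spdk.sock
    ip netns exec cvl_0_0_ns_spdk "$SPDK_DIR/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0x1 &
    nvmfpid=$!
    until rpc rpc_get_methods >/dev/null 2>&1; do
        sleep 0.1
    done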
00:15:32.599 [2024-07-15 15:57:01.506202] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:33.535 15:57:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:33.535 15:57:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # return 0 00:15:33.535 15:57:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:33.535 15:57:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:33.535 15:57:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:15:33.535 15:57:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:33.535 15:57:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:15:33.535 [2024-07-15 15:57:02.363870] blobstore.c:4865:bs_recover: *NOTICE*: Performing recovery on blobstore 00:15:33.535 [2024-07-15 15:57:02.363949] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:15:33.535 [2024-07-15 15:57:02.363972] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:15:33.535 15:57:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:15:33.535 15:57:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 85f08095-be51-46d5-8f2e-f6f8e277928b 00:15:33.536 15:57:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@897 -- # local bdev_name=85f08095-be51-46d5-8f2e-f6f8e277928b 00:15:33.536 15:57:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:15:33.536 15:57:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local i 00:15:33.536 15:57:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:15:33.536 15:57:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:15:33.536 15:57:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:15:33.794 15:57:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 85f08095-be51-46d5-8f2e-f6f8e277928b -t 2000 00:15:33.794 [ 00:15:33.794 { 00:15:33.794 "name": "85f08095-be51-46d5-8f2e-f6f8e277928b", 00:15:33.794 "aliases": [ 00:15:33.794 "lvs/lvol" 00:15:33.794 ], 00:15:33.794 "product_name": "Logical Volume", 00:15:33.794 "block_size": 4096, 00:15:33.794 "num_blocks": 38912, 00:15:33.794 "uuid": "85f08095-be51-46d5-8f2e-f6f8e277928b", 00:15:33.794 "assigned_rate_limits": { 00:15:33.794 "rw_ios_per_sec": 0, 00:15:33.794 "rw_mbytes_per_sec": 0, 00:15:33.794 "r_mbytes_per_sec": 0, 00:15:33.794 "w_mbytes_per_sec": 0 00:15:33.794 }, 00:15:33.794 "claimed": false, 00:15:33.794 "zoned": false, 00:15:33.794 "supported_io_types": { 00:15:33.794 "read": true, 00:15:33.794 "write": true, 00:15:33.794 "unmap": true, 00:15:33.794 "flush": false, 00:15:33.794 "reset": true, 00:15:33.794 "nvme_admin": false, 00:15:33.794 "nvme_io": false, 00:15:33.794 "nvme_io_md": 
false, 00:15:33.794 "write_zeroes": true, 00:15:33.794 "zcopy": false, 00:15:33.794 "get_zone_info": false, 00:15:33.794 "zone_management": false, 00:15:33.794 "zone_append": false, 00:15:33.794 "compare": false, 00:15:33.794 "compare_and_write": false, 00:15:33.794 "abort": false, 00:15:33.794 "seek_hole": true, 00:15:33.794 "seek_data": true, 00:15:33.794 "copy": false, 00:15:33.794 "nvme_iov_md": false 00:15:33.794 }, 00:15:33.794 "driver_specific": { 00:15:33.794 "lvol": { 00:15:33.794 "lvol_store_uuid": "749e3d78-8a65-42f6-9bf2-9c7addbee114", 00:15:33.794 "base_bdev": "aio_bdev", 00:15:33.794 "thin_provision": false, 00:15:33.794 "num_allocated_clusters": 38, 00:15:33.794 "snapshot": false, 00:15:33.794 "clone": false, 00:15:33.794 "esnap_clone": false 00:15:33.794 } 00:15:33.794 } 00:15:33.794 } 00:15:33.794 ] 00:15:33.794 15:57:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # return 0 00:15:33.794 15:57:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 749e3d78-8a65-42f6-9bf2-9c7addbee114 00:15:33.794 15:57:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:15:34.053 15:57:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:15:34.053 15:57:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 749e3d78-8a65-42f6-9bf2-9c7addbee114 00:15:34.053 15:57:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:15:34.312 15:57:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:15:34.312 15:57:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:15:34.312 [2024-07-15 15:57:03.208252] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:15:34.571 15:57:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 749e3d78-8a65-42f6-9bf2-9c7addbee114 00:15:34.571 15:57:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@648 -- # local es=0 00:15:34.571 15:57:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 749e3d78-8a65-42f6-9bf2-9c7addbee114 00:15:34.571 15:57:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:34.571 15:57:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:34.571 15:57:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:34.571 15:57:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:34.571 15:57:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
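Deleting the AIO bdev hot-removes the lvstore riding on it (the vbdev_lvs_hotremove_cb notice above), so the script wraps the next bdev_lvol_get_lvstores call in NOT and passes only if the RPC fails; the -19 "No such device" response just below is the expected outcome. A stripped-down version of that negative assertion; autotest_common.sh's real NOT also validates its argument first, which is what the type -t / type -P probing in the trace is doing:

    NOT() { "$@" && return 1 || return 0; }   # succeed only when the command fails
    NOT rpc bdev_lvol_get_lvstores -u "$lvs"  # lvstore is gone: this RPC must error out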
00:15:34.571 15:57:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:34.571 15:57:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:34.571 15:57:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:15:34.571 15:57:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 749e3d78-8a65-42f6-9bf2-9c7addbee114 00:15:34.571 request: 00:15:34.571 { 00:15:34.571 "uuid": "749e3d78-8a65-42f6-9bf2-9c7addbee114", 00:15:34.571 "method": "bdev_lvol_get_lvstores", 00:15:34.571 "req_id": 1 00:15:34.571 } 00:15:34.571 Got JSON-RPC error response 00:15:34.571 response: 00:15:34.571 { 00:15:34.571 "code": -19, 00:15:34.571 "message": "No such device" 00:15:34.571 } 00:15:34.571 15:57:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # es=1 00:15:34.571 15:57:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:15:34.572 15:57:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:15:34.572 15:57:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:15:34.572 15:57:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:15:34.830 aio_bdev 00:15:34.830 15:57:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 85f08095-be51-46d5-8f2e-f6f8e277928b 00:15:34.830 15:57:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@897 -- # local bdev_name=85f08095-be51-46d5-8f2e-f6f8e277928b 00:15:34.830 15:57:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:15:34.830 15:57:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local i 00:15:34.830 15:57:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:15:34.830 15:57:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:15:34.830 15:57:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:15:35.088 15:57:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 85f08095-be51-46d5-8f2e-f6f8e277928b -t 2000 00:15:35.088 [ 00:15:35.088 { 00:15:35.088 "name": "85f08095-be51-46d5-8f2e-f6f8e277928b", 00:15:35.088 "aliases": [ 00:15:35.088 "lvs/lvol" 00:15:35.088 ], 00:15:35.088 "product_name": "Logical Volume", 00:15:35.088 "block_size": 4096, 00:15:35.088 "num_blocks": 38912, 00:15:35.088 "uuid": "85f08095-be51-46d5-8f2e-f6f8e277928b", 00:15:35.088 "assigned_rate_limits": { 00:15:35.088 "rw_ios_per_sec": 0, 00:15:35.088 "rw_mbytes_per_sec": 0, 00:15:35.088 "r_mbytes_per_sec": 0, 00:15:35.088 "w_mbytes_per_sec": 0 00:15:35.088 }, 00:15:35.088 "claimed": false, 00:15:35.088 "zoned": false, 00:15:35.088 "supported_io_types": { 
00:15:35.088 "read": true, 00:15:35.088 "write": true, 00:15:35.088 "unmap": true, 00:15:35.088 "flush": false, 00:15:35.088 "reset": true, 00:15:35.088 "nvme_admin": false, 00:15:35.088 "nvme_io": false, 00:15:35.088 "nvme_io_md": false, 00:15:35.088 "write_zeroes": true, 00:15:35.088 "zcopy": false, 00:15:35.088 "get_zone_info": false, 00:15:35.088 "zone_management": false, 00:15:35.088 "zone_append": false, 00:15:35.088 "compare": false, 00:15:35.088 "compare_and_write": false, 00:15:35.088 "abort": false, 00:15:35.088 "seek_hole": true, 00:15:35.088 "seek_data": true, 00:15:35.088 "copy": false, 00:15:35.088 "nvme_iov_md": false 00:15:35.088 }, 00:15:35.088 "driver_specific": { 00:15:35.088 "lvol": { 00:15:35.088 "lvol_store_uuid": "749e3d78-8a65-42f6-9bf2-9c7addbee114", 00:15:35.088 "base_bdev": "aio_bdev", 00:15:35.088 "thin_provision": false, 00:15:35.088 "num_allocated_clusters": 38, 00:15:35.088 "snapshot": false, 00:15:35.088 "clone": false, 00:15:35.088 "esnap_clone": false 00:15:35.088 } 00:15:35.088 } 00:15:35.088 } 00:15:35.088 ] 00:15:35.088 15:57:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # return 0 00:15:35.088 15:57:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 749e3d78-8a65-42f6-9bf2-9c7addbee114 00:15:35.088 15:57:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:15:35.346 15:57:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:15:35.346 15:57:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 749e3d78-8a65-42f6-9bf2-9c7addbee114 00:15:35.346 15:57:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:15:35.346 15:57:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:15:35.346 15:57:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 85f08095-be51-46d5-8f2e-f6f8e277928b 00:15:35.606 15:57:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 749e3d78-8a65-42f6-9bf2-9c7addbee114 00:15:35.865 15:57:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:15:36.123 15:57:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:15:36.123 00:15:36.123 real 0m17.643s 00:15:36.123 user 0m44.880s 00:15:36.123 sys 0m4.157s 00:15:36.123 15:57:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:36.123 15:57:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:15:36.123 ************************************ 00:15:36.124 END TEST lvs_grow_dirty 00:15:36.124 ************************************ 00:15:36.124 15:57:04 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1142 -- # return 0 00:15:36.124 15:57:04 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 
00:15:36.124 15:57:04 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@806 -- # type=--id 00:15:36.124 15:57:04 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@807 -- # id=0 00:15:36.124 15:57:04 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:15:36.124 15:57:04 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:15:36.124 15:57:04 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:15:36.124 15:57:04 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:15:36.124 15:57:04 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # for n in $shm_files 00:15:36.124 15:57:04 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:15:36.124 nvmf_trace.0 00:15:36.124 15:57:04 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@821 -- # return 0 00:15:36.124 15:57:04 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:15:36.124 15:57:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:36.124 15:57:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@117 -- # sync 00:15:36.124 15:57:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:36.124 15:57:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@120 -- # set +e 00:15:36.124 15:57:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:36.124 15:57:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:36.124 rmmod nvme_tcp 00:15:36.124 rmmod nvme_fabrics 00:15:36.124 rmmod nvme_keyring 00:15:36.124 15:57:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:36.124 15:57:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set -e 00:15:36.124 15:57:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@125 -- # return 0 00:15:36.124 15:57:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@489 -- # '[' -n 3721157 ']' 00:15:36.124 15:57:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@490 -- # killprocess 3721157 00:15:36.124 15:57:04 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@948 -- # '[' -z 3721157 ']' 00:15:36.124 15:57:04 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@952 -- # kill -0 3721157 00:15:36.124 15:57:04 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@953 -- # uname 00:15:36.124 15:57:04 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:36.124 15:57:04 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3721157 00:15:36.124 15:57:05 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:15:36.124 15:57:05 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:15:36.124 15:57:05 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3721157' 00:15:36.124 killing process with pid 3721157 00:15:36.124 15:57:05 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@967 -- # kill 3721157 00:15:36.124 15:57:05 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # wait 3721157 00:15:36.383 15:57:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:36.383 15:57:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:36.383 15:57:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:36.383 
15:57:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:36.383 15:57:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:36.383 15:57:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:36.383 15:57:05 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:36.383 15:57:05 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:38.919 15:57:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:15:38.920 00:15:38.920 real 0m42.304s 00:15:38.920 user 1m5.959s 00:15:38.920 sys 0m9.864s 00:15:38.920 15:57:07 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:38.920 15:57:07 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:15:38.920 ************************************ 00:15:38.920 END TEST nvmf_lvs_grow 00:15:38.920 ************************************ 00:15:38.920 15:57:07 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:15:38.920 15:57:07 nvmf_tcp -- nvmf/nvmf.sh@50 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:15:38.920 15:57:07 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:15:38.920 15:57:07 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:38.920 15:57:07 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:38.920 ************************************ 00:15:38.920 START TEST nvmf_bdev_io_wait 00:15:38.920 ************************************ 00:15:38.920 15:57:07 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:15:38.920 * Looking for test storage... 
00:15:38.920 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:38.920 15:57:07 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:38.920 15:57:07 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:15:38.920 15:57:07 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:38.920 15:57:07 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:38.920 15:57:07 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:38.920 15:57:07 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:38.920 15:57:07 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:38.920 15:57:07 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:38.920 15:57:07 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:38.920 15:57:07 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:38.920 15:57:07 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:38.920 15:57:07 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:38.920 15:57:07 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:38.920 15:57:07 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:15:38.920 15:57:07 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:38.920 15:57:07 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:38.920 15:57:07 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:38.920 15:57:07 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:38.920 15:57:07 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:38.920 15:57:07 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:38.920 15:57:07 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:38.920 15:57:07 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:38.920 15:57:07 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:38.920 15:57:07 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:38.920 15:57:07 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:38.920 15:57:07 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:15:38.920 15:57:07 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:38.920 15:57:07 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@47 -- # : 0 00:15:38.920 15:57:07 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:38.920 15:57:07 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:38.920 15:57:07 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:38.920 15:57:07 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:38.920 15:57:07 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:38.920 15:57:07 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:38.920 15:57:07 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:38.920 15:57:07 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:38.920 15:57:07 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:38.920 15:57:07 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:38.920 15:57:07 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:15:38.920 15:57:07 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:38.920 15:57:07 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:38.920 15:57:07 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:38.920 15:57:07 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:38.920 15:57:07 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:38.920 15:57:07 nvmf_tcp.nvmf_bdev_io_wait -- 
nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:38.920 15:57:07 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:38.920 15:57:07 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:38.920 15:57:07 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:15:38.920 15:57:07 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:15:38.920 15:57:07 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@285 -- # xtrace_disable 00:15:38.920 15:57:07 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:15:44.290 15:57:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:44.290 15:57:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # pci_devs=() 00:15:44.290 15:57:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:44.290 15:57:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:44.290 15:57:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:44.290 15:57:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:44.290 15:57:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:44.290 15:57:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # net_devs=() 00:15:44.290 15:57:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:44.290 15:57:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # e810=() 00:15:44.290 15:57:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # local -ga e810 00:15:44.290 15:57:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # x722=() 00:15:44.290 15:57:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # local -ga x722 00:15:44.290 15:57:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # mlx=() 00:15:44.290 15:57:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # local -ga mlx 00:15:44.290 15:57:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:44.290 15:57:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:44.290 15:57:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:44.290 15:57:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:44.290 15:57:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:44.290 15:57:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:44.290 15:57:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:44.290 15:57:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:44.290 15:57:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:44.290 15:57:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:44.290 15:57:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:44.290 15:57:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:44.290 15:57:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # 
[[ tcp == rdma ]] 00:15:44.290 15:57:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:15:44.290 15:57:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:15:44.290 15:57:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:15:44.290 15:57:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:44.290 15:57:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:44.290 15:57:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:15:44.290 Found 0000:86:00.0 (0x8086 - 0x159b) 00:15:44.290 15:57:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:44.290 15:57:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:44.290 15:57:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:44.290 15:57:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:44.290 15:57:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:44.290 15:57:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:44.290 15:57:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:15:44.290 Found 0000:86:00.1 (0x8086 - 0x159b) 00:15:44.290 15:57:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:44.290 15:57:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:44.290 15:57:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:44.290 15:57:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:44.290 15:57:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:44.290 15:57:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:44.290 15:57:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:15:44.290 15:57:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:15:44.290 15:57:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:44.290 15:57:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:44.290 15:57:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:44.290 15:57:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:44.290 15:57:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:44.290 15:57:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:44.290 15:57:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:44.290 15:57:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:15:44.290 Found net devices under 0000:86:00.0: cvl_0_0 00:15:44.290 15:57:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:44.290 15:57:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:44.290 15:57:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:44.290 15:57:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ 
tcp == tcp ]] 00:15:44.290 15:57:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:44.290 15:57:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:44.290 15:57:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:44.290 15:57:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:44.290 15:57:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:15:44.290 Found net devices under 0000:86:00.1: cvl_0_1 00:15:44.290 15:57:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:44.290 15:57:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:15:44.290 15:57:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # is_hw=yes 00:15:44.290 15:57:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:15:44.290 15:57:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:15:44.290 15:57:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:15:44.290 15:57:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:44.290 15:57:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:44.290 15:57:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:44.290 15:57:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:15:44.290 15:57:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:44.290 15:57:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:44.290 15:57:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:15:44.290 15:57:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:44.290 15:57:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:44.290 15:57:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:15:44.290 15:57:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:15:44.290 15:57:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:15:44.290 15:57:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:44.290 15:57:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:44.290 15:57:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:44.290 15:57:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:15:44.290 15:57:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:44.290 15:57:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:44.290 15:57:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:44.290 15:57:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:15:44.290 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:15:44.290 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.185 ms 00:15:44.290 00:15:44.290 --- 10.0.0.2 ping statistics --- 00:15:44.290 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:44.290 rtt min/avg/max/mdev = 0.185/0.185/0.185/0.000 ms 00:15:44.290 15:57:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:44.290 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:44.290 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.211 ms 00:15:44.290 00:15:44.290 --- 10.0.0.1 ping statistics --- 00:15:44.290 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:44.290 rtt min/avg/max/mdev = 0.211/0.211/0.211/0.000 ms 00:15:44.290 15:57:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:44.290 15:57:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # return 0 00:15:44.290 15:57:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:44.290 15:57:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:44.290 15:57:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:44.290 15:57:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:44.290 15:57:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:44.290 15:57:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:44.290 15:57:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:44.290 15:57:12 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:15:44.290 15:57:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:44.290 15:57:12 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:44.290 15:57:12 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:15:44.290 15:57:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:15:44.290 15:57:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@481 -- # nvmfpid=3725645 00:15:44.290 15:57:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # waitforlisten 3725645 00:15:44.290 15:57:12 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@829 -- # '[' -z 3725645 ']' 00:15:44.290 15:57:12 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:44.290 15:57:12 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:44.291 15:57:12 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:44.291 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:44.291 15:57:12 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:44.291 15:57:12 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:15:44.291 [2024-07-15 15:57:12.657705] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 
00:15:44.291 [2024-07-15 15:57:12.657749] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:44.291 EAL: No free 2048 kB hugepages reported on node 1 00:15:44.291 [2024-07-15 15:57:12.717816] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:44.291 [2024-07-15 15:57:12.799457] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:44.291 [2024-07-15 15:57:12.799492] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:44.291 [2024-07-15 15:57:12.799499] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:44.291 [2024-07-15 15:57:12.799506] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:44.291 [2024-07-15 15:57:12.799511] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:44.291 [2024-07-15 15:57:12.799543] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:44.291 [2024-07-15 15:57:12.799569] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:15:44.291 [2024-07-15 15:57:12.799657] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:15:44.291 [2024-07-15 15:57:12.799659] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:44.548 15:57:13 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:44.548 15:57:13 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@862 -- # return 0 00:15:44.548 15:57:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:44.548 15:57:13 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:44.548 15:57:13 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:15:44.807 15:57:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:44.807 15:57:13 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:15:44.807 15:57:13 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:44.807 15:57:13 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:15:44.807 15:57:13 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:44.807 15:57:13 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:15:44.807 15:57:13 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:44.807 15:57:13 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:15:44.807 15:57:13 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:44.807 15:57:13 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:44.807 15:57:13 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:44.807 15:57:13 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:15:44.807 [2024-07-15 15:57:13.581103] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:44.807 15:57:13 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
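With the TCP transport initialized above, the rest of the bdev_io_wait target setup (the rpc_cmd calls that follow in this trace) reduces to a handful of RPCs against the target's /var/tmp/spdk.sock. A condensed sketch of the whole bring-up, one rpc.py call per traced step; the flag values are the ones visible in this log, and the deliberately tiny bdev IO pool (-p 5 -c 1) is presumably what forces the io_wait path this test exercises:

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
"$rpc" bdev_set_options -p 5 -c 1                  # shrink the bdev IO pool/cache before init
"$rpc" framework_start_init                        # leave the --wait-for-rpc holding state
"$rpc" nvmf_create_transport -t tcp -o -u 8192     # transport opts exactly as traced
"$rpc" bdev_malloc_create 64 512 -b Malloc0        # 64 MiB backing bdev, 512 B blocks
"$rpc" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
"$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
"$rpc" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420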
00:15:44.807 15:57:13 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:15:44.807 15:57:13 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:44.807 15:57:13 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:15:44.807 Malloc0 00:15:44.807 15:57:13 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:44.807 15:57:13 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:15:44.807 15:57:13 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:44.807 15:57:13 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:15:44.807 15:57:13 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:44.807 15:57:13 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:44.807 15:57:13 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:44.807 15:57:13 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:15:44.807 15:57:13 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:44.807 15:57:13 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:44.807 15:57:13 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:44.807 15:57:13 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:15:44.807 [2024-07-15 15:57:13.638706] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:44.807 15:57:13 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:44.807 15:57:13 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=3725893 00:15:44.807 15:57:13 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:15:44.807 15:57:13 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:15:44.807 15:57:13 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=3725895 00:15:44.807 15:57:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:15:44.807 15:57:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:15:44.807 15:57:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:15:44.807 15:57:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:15:44.807 { 00:15:44.807 "params": { 00:15:44.807 "name": "Nvme$subsystem", 00:15:44.807 "trtype": "$TEST_TRANSPORT", 00:15:44.807 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:44.807 "adrfam": "ipv4", 00:15:44.807 "trsvcid": "$NVMF_PORT", 00:15:44.807 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:44.807 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:44.807 "hdgst": ${hdgst:-false}, 00:15:44.807 "ddgst": ${ddgst:-false} 00:15:44.807 }, 00:15:44.807 "method": "bdev_nvme_attach_controller" 00:15:44.807 } 00:15:44.807 EOF 00:15:44.807 )") 00:15:44.807 15:57:13 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:15:44.807 15:57:13 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:15:44.807 15:57:13 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=3725897 00:15:44.807 15:57:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:15:44.807 15:57:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:15:44.807 15:57:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:15:44.807 15:57:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:15:44.807 { 00:15:44.807 "params": { 00:15:44.807 "name": "Nvme$subsystem", 00:15:44.807 "trtype": "$TEST_TRANSPORT", 00:15:44.807 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:44.807 "adrfam": "ipv4", 00:15:44.807 "trsvcid": "$NVMF_PORT", 00:15:44.807 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:44.807 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:44.807 "hdgst": ${hdgst:-false}, 00:15:44.807 "ddgst": ${ddgst:-false} 00:15:44.807 }, 00:15:44.807 "method": "bdev_nvme_attach_controller" 00:15:44.807 } 00:15:44.807 EOF 00:15:44.807 )") 00:15:44.807 15:57:13 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:15:44.807 15:57:13 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=3725900 00:15:44.807 15:57:13 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:15:44.807 15:57:13 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:15:44.807 15:57:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:15:44.807 15:57:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:15:44.807 15:57:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:15:44.807 15:57:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:15:44.807 15:57:13 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:15:44.808 15:57:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:15:44.808 { 00:15:44.808 "params": { 00:15:44.808 "name": "Nvme$subsystem", 00:15:44.808 "trtype": "$TEST_TRANSPORT", 00:15:44.808 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:44.808 "adrfam": "ipv4", 00:15:44.808 "trsvcid": "$NVMF_PORT", 00:15:44.808 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:44.808 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:44.808 "hdgst": ${hdgst:-false}, 00:15:44.808 "ddgst": ${ddgst:-false} 00:15:44.808 }, 00:15:44.808 "method": "bdev_nvme_attach_controller" 00:15:44.808 } 00:15:44.808 EOF 00:15:44.808 )") 00:15:44.808 15:57:13 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:15:44.808 15:57:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:15:44.808 15:57:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:15:44.808 15:57:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:15:44.808 15:57:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:15:44.808 15:57:13 
nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:15:44.808 { 00:15:44.808 "params": { 00:15:44.808 "name": "Nvme$subsystem", 00:15:44.808 "trtype": "$TEST_TRANSPORT", 00:15:44.808 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:44.808 "adrfam": "ipv4", 00:15:44.808 "trsvcid": "$NVMF_PORT", 00:15:44.808 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:44.808 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:44.808 "hdgst": ${hdgst:-false}, 00:15:44.808 "ddgst": ${ddgst:-false} 00:15:44.808 }, 00:15:44.808 "method": "bdev_nvme_attach_controller" 00:15:44.808 } 00:15:44.808 EOF 00:15:44.808 )") 00:15:44.808 15:57:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:15:44.808 15:57:13 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 3725893 00:15:44.808 15:57:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:15:44.808 15:57:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:15:44.808 15:57:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:15:44.808 15:57:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:15:44.808 15:57:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:15:44.808 15:57:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:15:44.808 "params": { 00:15:44.808 "name": "Nvme1", 00:15:44.808 "trtype": "tcp", 00:15:44.808 "traddr": "10.0.0.2", 00:15:44.808 "adrfam": "ipv4", 00:15:44.808 "trsvcid": "4420", 00:15:44.808 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:44.808 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:44.808 "hdgst": false, 00:15:44.808 "ddgst": false 00:15:44.808 }, 00:15:44.808 "method": "bdev_nvme_attach_controller" 00:15:44.808 }' 00:15:44.808 15:57:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 
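Each bdevperf job above receives its controller definition from gen_nvmf_target_json over an anonymous descriptor (--json /dev/fd/63). Written out as a file, the config the write worker consumes would look like the sketch below: the inner method/params object is exactly the one printf'd in this trace, while the outer subsystems wrapper is assumed from SPDK's usual JSON config layout rather than shown in the log:

cat > /tmp/bdevperf.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme1",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF
# The write worker from this run, pointed at the file instead of /dev/fd/63:
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
    -m 0x10 -i 1 --json /tmp/bdevperf.json -q 128 -o 4096 -w write -t 1 -s 256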
00:15:44.808 15:57:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:15:44.808 15:57:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:15:44.808 "params": { 00:15:44.808 "name": "Nvme1", 00:15:44.808 "trtype": "tcp", 00:15:44.808 "traddr": "10.0.0.2", 00:15:44.808 "adrfam": "ipv4", 00:15:44.808 "trsvcid": "4420", 00:15:44.808 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:44.808 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:44.808 "hdgst": false, 00:15:44.808 "ddgst": false 00:15:44.808 }, 00:15:44.808 "method": "bdev_nvme_attach_controller" 00:15:44.808 }' 00:15:44.808 15:57:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:15:44.808 15:57:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:15:44.808 "params": { 00:15:44.808 "name": "Nvme1", 00:15:44.808 "trtype": "tcp", 00:15:44.808 "traddr": "10.0.0.2", 00:15:44.808 "adrfam": "ipv4", 00:15:44.808 "trsvcid": "4420", 00:15:44.808 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:44.808 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:44.808 "hdgst": false, 00:15:44.808 "ddgst": false 00:15:44.808 }, 00:15:44.808 "method": "bdev_nvme_attach_controller" 00:15:44.808 }' 00:15:44.808 15:57:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:15:44.808 15:57:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:15:44.808 "params": { 00:15:44.808 "name": "Nvme1", 00:15:44.808 "trtype": "tcp", 00:15:44.808 "traddr": "10.0.0.2", 00:15:44.808 "adrfam": "ipv4", 00:15:44.808 "trsvcid": "4420", 00:15:44.808 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:44.808 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:44.808 "hdgst": false, 00:15:44.808 "ddgst": false 00:15:44.808 }, 00:15:44.808 "method": "bdev_nvme_attach_controller" 00:15:44.808 }' 00:15:44.808 [2024-07-15 15:57:13.686588] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 00:15:44.808 [2024-07-15 15:57:13.686638] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:15:44.808 [2024-07-15 15:57:13.688713] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 00:15:44.808 [2024-07-15 15:57:13.688752] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:15:44.808 [2024-07-15 15:57:13.689476] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 00:15:44.808 [2024-07-15 15:57:13.689513] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:15:44.808 [2024-07-15 15:57:13.692010] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 
00:15:44.808 [2024-07-15 15:57:13.692048] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:15:44.808 EAL: No free 2048 kB hugepages reported on node 1 00:15:45.066 EAL: No free 2048 kB hugepages reported on node 1 00:15:45.066 [2024-07-15 15:57:13.856855] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:45.066 EAL: No free 2048 kB hugepages reported on node 1 00:15:45.066 [2024-07-15 15:57:13.934877] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:15:45.066 [2024-07-15 15:57:13.948380] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:45.323 EAL: No free 2048 kB hugepages reported on node 1 00:15:45.323 [2024-07-15 15:57:14.026434] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7 00:15:45.323 [2024-07-15 15:57:14.046252] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:45.323 [2024-07-15 15:57:14.122099] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:15:45.323 [2024-07-15 15:57:14.145900] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:45.323 [2024-07-15 15:57:14.234464] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:15:45.580 Running I/O for 1 seconds... 00:15:45.580 Running I/O for 1 seconds... 00:15:45.580 Running I/O for 1 seconds... 00:15:45.837 Running I/O for 1 seconds... 00:15:46.403 00:15:46.403 Latency(us) 00:15:46.403 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:46.403 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:15:46.403 Nvme1n1 : 1.01 8182.15 31.96 0.00 0.00 15572.05 6553.60 23478.98 00:15:46.403 =================================================================================================================== 00:15:46.403 Total : 8182.15 31.96 0.00 0.00 15572.05 6553.60 23478.98 00:15:46.661 00:15:46.661 Latency(us) 00:15:46.661 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:46.661 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:15:46.661 Nvme1n1 : 1.01 10937.39 42.72 0.00 0.00 11655.47 7579.38 24048.86 00:15:46.661 =================================================================================================================== 00:15:46.661 Total : 10937.39 42.72 0.00 0.00 11655.47 7579.38 24048.86 00:15:46.661 00:15:46.661 Latency(us) 00:15:46.661 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:46.661 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:15:46.661 Nvme1n1 : 1.00 8750.48 34.18 0.00 0.00 14601.67 3348.03 38523.77 00:15:46.661 =================================================================================================================== 00:15:46.661 Total : 8750.48 34.18 0.00 0.00 14601.67 3348.03 38523.77 00:15:46.661 15:57:15 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 3725895 00:15:46.661 00:15:46.661 Latency(us) 00:15:46.661 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:46.661 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:15:46.661 Nvme1n1 : 1.00 245542.19 959.15 0.00 0.00 519.46 209.25 658.92 00:15:46.661 =================================================================================================================== 00:15:46.661 Total : 245542.19 
959.15 0.00 0.00 519.46 209.25 658.92 00:15:46.661 15:57:15 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 3725897 00:15:46.920 15:57:15 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 3725900 00:15:46.920 15:57:15 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:46.920 15:57:15 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:46.920 15:57:15 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:15:46.920 15:57:15 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:46.920 15:57:15 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:15:46.920 15:57:15 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:15:46.920 15:57:15 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:46.920 15:57:15 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # sync 00:15:46.920 15:57:15 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:46.920 15:57:15 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@120 -- # set +e 00:15:46.920 15:57:15 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:46.920 15:57:15 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:46.920 rmmod nvme_tcp 00:15:46.920 rmmod nvme_fabrics 00:15:46.920 rmmod nvme_keyring 00:15:46.920 15:57:15 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:46.920 15:57:15 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set -e 00:15:46.920 15:57:15 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # return 0 00:15:46.920 15:57:15 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@489 -- # '[' -n 3725645 ']' 00:15:46.920 15:57:15 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@490 -- # killprocess 3725645 00:15:46.920 15:57:15 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@948 -- # '[' -z 3725645 ']' 00:15:46.920 15:57:15 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@952 -- # kill -0 3725645 00:15:46.920 15:57:15 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@953 -- # uname 00:15:46.920 15:57:15 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:46.920 15:57:15 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3725645 00:15:47.179 15:57:15 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:15:47.179 15:57:15 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:15:47.179 15:57:15 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3725645' 00:15:47.179 killing process with pid 3725645 00:15:47.179 15:57:15 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@967 -- # kill 3725645 00:15:47.179 15:57:15 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # wait 3725645 00:15:47.179 15:57:16 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:47.179 15:57:16 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:47.179 15:57:16 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:47.179 15:57:16 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:47.179 15:57:16 nvmf_tcp.nvmf_bdev_io_wait -- 
nvmf/common.sh@278 -- # remove_spdk_ns 00:15:47.179 15:57:16 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:47.179 15:57:16 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:47.179 15:57:16 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:49.713 15:57:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:15:49.713 00:15:49.713 real 0m10.801s 00:15:49.713 user 0m20.037s 00:15:49.713 sys 0m5.544s 00:15:49.713 15:57:18 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:49.713 15:57:18 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:15:49.713 ************************************ 00:15:49.713 END TEST nvmf_bdev_io_wait 00:15:49.713 ************************************ 00:15:49.713 15:57:18 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:15:49.713 15:57:18 nvmf_tcp -- nvmf/nvmf.sh@51 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:15:49.713 15:57:18 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:15:49.713 15:57:18 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:49.713 15:57:18 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:49.713 ************************************ 00:15:49.713 START TEST nvmf_queue_depth 00:15:49.713 ************************************ 00:15:49.713 15:57:18 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:15:49.713 * Looking for test storage... 
00:15:49.713 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:49.713 15:57:18 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:49.713 15:57:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:15:49.713 15:57:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:49.713 15:57:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:49.713 15:57:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:49.713 15:57:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:49.713 15:57:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:49.713 15:57:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:49.713 15:57:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:49.713 15:57:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:49.713 15:57:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:49.713 15:57:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:49.713 15:57:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:49.713 15:57:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:15:49.713 15:57:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:49.713 15:57:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:49.713 15:57:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:49.713 15:57:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:49.713 15:57:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:49.713 15:57:18 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:49.713 15:57:18 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:49.713 15:57:18 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:49.713 15:57:18 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:49.713 15:57:18 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:49.713 15:57:18 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:49.714 15:57:18 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:15:49.714 15:57:18 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:49.714 15:57:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@47 -- # : 0 00:15:49.714 15:57:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:49.714 15:57:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:49.714 15:57:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:49.714 15:57:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:49.714 15:57:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:49.714 15:57:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:49.714 15:57:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:49.714 15:57:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:49.714 15:57:18 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:15:49.714 15:57:18 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:15:49.714 15:57:18 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:15:49.714 15:57:18 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:15:49.714 15:57:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:49.714 15:57:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:49.714 15:57:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:49.714 15:57:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:49.714 15:57:18 nvmf_tcp.nvmf_queue_depth -- 
nvmf/common.sh@412 -- # remove_spdk_ns 00:15:49.714 15:57:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:49.714 15:57:18 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:49.714 15:57:18 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:49.714 15:57:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:15:49.714 15:57:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:15:49.714 15:57:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@285 -- # xtrace_disable 00:15:49.714 15:57:18 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:15:54.981 15:57:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:54.981 15:57:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@291 -- # pci_devs=() 00:15:54.981 15:57:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:54.981 15:57:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:54.981 15:57:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:54.981 15:57:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:54.981 15:57:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:54.981 15:57:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@295 -- # net_devs=() 00:15:54.981 15:57:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:54.981 15:57:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@296 -- # e810=() 00:15:54.981 15:57:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@296 -- # local -ga e810 00:15:54.981 15:57:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@297 -- # x722=() 00:15:54.981 15:57:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@297 -- # local -ga x722 00:15:54.981 15:57:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@298 -- # mlx=() 00:15:54.981 15:57:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@298 -- # local -ga mlx 00:15:54.981 15:57:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:54.981 15:57:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:54.981 15:57:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:54.981 15:57:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:54.981 15:57:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:54.981 15:57:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:54.981 15:57:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:54.981 15:57:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:54.981 15:57:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:54.981 15:57:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:54.981 15:57:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:54.981 15:57:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:54.981 
15:57:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:15:54.981 15:57:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:15:54.981 15:57:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:15:54.981 15:57:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:15:54.981 15:57:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:54.981 15:57:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:54.981 15:57:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:15:54.981 Found 0000:86:00.0 (0x8086 - 0x159b) 00:15:54.981 15:57:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:54.981 15:57:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:54.981 15:57:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:54.981 15:57:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:54.981 15:57:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:54.981 15:57:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:54.981 15:57:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:15:54.981 Found 0000:86:00.1 (0x8086 - 0x159b) 00:15:54.981 15:57:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:54.981 15:57:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:54.981 15:57:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:54.981 15:57:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:54.981 15:57:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:54.981 15:57:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:54.981 15:57:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:15:54.981 15:57:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:15:54.981 15:57:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:54.981 15:57:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:54.981 15:57:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:54.981 15:57:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:54.981 15:57:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:54.981 15:57:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:54.981 15:57:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:54.981 15:57:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:15:54.981 Found net devices under 0000:86:00.0: cvl_0_0 00:15:54.981 15:57:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:54.981 15:57:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:54.981 15:57:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:54.981 15:57:23 nvmf_tcp.nvmf_queue_depth 
-- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:54.981 15:57:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:54.981 15:57:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:54.981 15:57:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:54.981 15:57:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:54.982 15:57:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:15:54.982 Found net devices under 0000:86:00.1: cvl_0_1 00:15:54.982 15:57:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:54.982 15:57:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:15:54.982 15:57:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # is_hw=yes 00:15:54.982 15:57:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:15:54.982 15:57:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:15:54.982 15:57:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:15:54.982 15:57:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:54.982 15:57:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:54.982 15:57:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:54.982 15:57:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:15:54.982 15:57:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:54.982 15:57:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:54.982 15:57:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:15:54.982 15:57:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:54.982 15:57:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:54.982 15:57:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:15:54.982 15:57:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:15:54.982 15:57:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:15:54.982 15:57:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:54.982 15:57:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:54.982 15:57:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:54.982 15:57:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:15:54.982 15:57:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:54.982 15:57:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:54.982 15:57:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:54.982 15:57:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:15:54.982 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:15:54.982 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.154 ms 00:15:54.982 00:15:54.982 --- 10.0.0.2 ping statistics --- 00:15:54.982 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:54.982 rtt min/avg/max/mdev = 0.154/0.154/0.154/0.000 ms 00:15:54.982 15:57:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:54.982 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:54.982 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.179 ms 00:15:54.982 00:15:54.982 --- 10.0.0.1 ping statistics --- 00:15:54.982 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:54.982 rtt min/avg/max/mdev = 0.179/0.179/0.179/0.000 ms 00:15:54.982 15:57:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:54.982 15:57:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@422 -- # return 0 00:15:54.982 15:57:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:54.982 15:57:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:54.982 15:57:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:54.982 15:57:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:54.982 15:57:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:54.982 15:57:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:54.982 15:57:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:54.982 15:57:23 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:15:54.982 15:57:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:54.982 15:57:23 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:54.982 15:57:23 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:15:54.982 15:57:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@481 -- # nvmfpid=3729680 00:15:54.982 15:57:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@482 -- # waitforlisten 3729680 00:15:54.982 15:57:23 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@829 -- # '[' -z 3729680 ']' 00:15:54.982 15:57:23 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:54.982 15:57:23 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:54.982 15:57:23 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:54.982 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:54.982 15:57:23 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:54.982 15:57:23 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:15:54.982 15:57:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:15:54.982 [2024-07-15 15:57:23.716808] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 
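The trace above is nvmf_tcp_init building the loopback topology these phy tests run on: one port of the NIC (cvl_0_0) is moved into a private network namespace to act as the target side, while its sibling port (cvl_0_1) stays in the root namespace as the initiator, so the NVMe/TCP traffic has to cross the physical link between the two ports instead of being short-circuited through the local stack. A minimal sketch of the same bring-up, condensed from the commands traced above (the device names and 10.0.0.0/24 addressing are simply what this run used):

  ip -4 addr flush cvl_0_0                             # start both ports from a clean slate
  ip -4 addr flush cvl_0_1
  ip netns add cvl_0_0_ns_spdk                         # private namespace for the target side
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # move the target-facing port into it
  ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator address, root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP port
  ping -c 1 10.0.0.2                                   # reachability check, both directions
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

The target application is then launched inside that namespace, which is the startup banner beginning above: ip netns exec cvl_0_0_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2.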
00:15:54.982 [2024-07-15 15:57:23.716853] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:54.982 EAL: No free 2048 kB hugepages reported on node 1 00:15:54.982 [2024-07-15 15:57:23.774585] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:54.982 [2024-07-15 15:57:23.852812] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:54.982 [2024-07-15 15:57:23.852844] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:54.982 [2024-07-15 15:57:23.852851] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:54.982 [2024-07-15 15:57:23.852857] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:54.982 [2024-07-15 15:57:23.852862] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:54.982 [2024-07-15 15:57:23.852878] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:55.919 15:57:24 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:55.920 15:57:24 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@862 -- # return 0 00:15:55.920 15:57:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:55.920 15:57:24 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:55.920 15:57:24 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:15:55.920 15:57:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:55.920 15:57:24 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:55.920 15:57:24 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:55.920 15:57:24 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:15:55.920 [2024-07-15 15:57:24.539159] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:55.920 15:57:24 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:55.920 15:57:24 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:15:55.920 15:57:24 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:55.920 15:57:24 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:15:55.920 Malloc0 00:15:55.920 15:57:24 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:55.920 15:57:24 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:15:55.920 15:57:24 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:55.920 15:57:24 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:15:55.920 15:57:24 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:55.920 15:57:24 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:55.920 15:57:24 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:55.920 
15:57:24 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:15:55.920 15:57:24 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:55.920 15:57:24 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:55.920 15:57:24 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:55.920 15:57:24 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:15:55.920 [2024-07-15 15:57:24.591283] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:55.920 15:57:24 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:55.920 15:57:24 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=3729919 00:15:55.920 15:57:24 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:15:55.920 15:57:24 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 3729919 /var/tmp/bdevperf.sock 00:15:55.920 15:57:24 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@829 -- # '[' -z 3729919 ']' 00:15:55.920 15:57:24 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:55.920 15:57:24 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:55.920 15:57:24 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:55.920 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:55.920 15:57:24 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:55.920 15:57:24 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:15:55.920 15:57:24 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:15:55.920 [2024-07-15 15:57:24.642050] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 
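With the target now listening on 10.0.0.2:4420, everything the queue_depth test assembled over the RPC socket can be read straight out of the rpc_cmd lines above. Condensed into the assumed-equivalent scripts/rpc.py invocations (rpc_cmd in the trace is the harness's wrapper around SPDK's JSON-RPC client; paths are relative to the spdk checkout):

  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192           # TCP transport, flags as traced
  scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0              # 64 MiB RAM bdev, 512 B blocks
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  # initiator side: bdevperf (whose banner begins above) attaches over NVMe/TCP and
  # drives a 10 s verify workload of 4 KiB I/Os at queue depth 1024
  build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &
  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
      -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests

The attach and perform_tests steps are the next two rpc_cmd lines in the trace below, and the results table that follows (12360.66 IOPS over 10.06 s) is bdevperf's verify-mode summary for that controller.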
00:15:55.920 [2024-07-15 15:57:24.642093] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3729919 ]
00:15:55.920 EAL: No free 2048 kB hugepages reported on node 1
00:15:55.920 [2024-07-15 15:57:24.697195] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:15:55.920 [2024-07-15 15:57:24.771268] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:15:56.857 15:57:25 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:15:56.857 15:57:25 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@862 -- # return 0
00:15:56.857 15:57:25 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:15:56.857 15:57:25 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable
00:15:56.857 15:57:25 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x
00:15:56.857 NVMe0n1
00:15:56.857 15:57:25 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:15:56.857 15:57:25 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:15:56.857 Running I/O for 10 seconds...
00:16:06.867
00:16:06.867 Latency(us)
00:16:06.867 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:16:06.867 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096)
00:16:06.867 Verification LBA range: start 0x0 length 0x4000
00:16:06.867 NVMe0n1 : 10.06 12360.66 48.28 0.00 0.00 82550.78 19489.84 55392.17
00:16:06.867 ===================================================================================================================
00:16:06.867 Total : 12360.66 48.28 0.00 0.00 82550.78 19489.84 55392.17
00:16:06.867 0
00:16:06.867 15:57:35 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 3729919
00:16:06.867 15:57:35 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@948 -- # '[' -z 3729919 ']'
00:16:06.867 15:57:35 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # kill -0 3729919
00:16:06.867 15:57:35 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # uname
00:16:06.867 15:57:35 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:16:06.867 15:57:35 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3729919
00:16:06.867 15:57:35 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:16:06.867 15:57:35 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:16:06.867 15:57:35 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3729919'
00:16:06.867 killing process with pid 3729919
00:16:06.867 15:57:35 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@967 -- # kill 3729919
00:16:06.867 Received shutdown signal, test time was about 10.000000 seconds
00:16:06.867
00:16:06.867 Latency(us)
00:16:06.867 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:16:06.867 ===================================================================================================================
00:16:06.867 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:16:06.867 15:57:35 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@972 -- # wait 3729919
00:16:07.126 15:57:35 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT
00:16:07.126 15:57:35 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini
00:16:07.126 15:57:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@488 -- # nvmfcleanup
00:16:07.126 15:57:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@117 -- # sync
00:16:07.126 15:57:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:16:07.126 15:57:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@120 -- # set +e
00:16:07.126 15:57:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@121 -- # for i in {1..20}
00:16:07.126 15:57:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:16:07.126 rmmod nvme_tcp
00:16:07.126 rmmod nvme_fabrics
00:16:07.126 rmmod nvme_keyring
00:16:07.126 15:57:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:16:07.126 15:57:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@124 -- # set -e
00:16:07.126 15:57:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@125 -- # return 0
00:16:07.126 15:57:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@489 -- # '[' -n 3729680 ']'
00:16:07.126 15:57:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@490 -- # killprocess 3729680
00:16:07.126 15:57:35 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@948 -- # '[' -z 3729680 ']'
00:16:07.126 15:57:35 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # kill -0 3729680
00:16:07.126 15:57:35 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # uname
00:16:07.126 15:57:35 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:16:07.126 15:57:35 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3729680
00:16:07.126 15:57:35 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # process_name=reactor_1
00:16:07.126 15:57:35 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
00:16:07.126 15:57:35 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3729680'
00:16:07.126 killing process with pid 3729680
00:16:07.126 15:57:35 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@967 -- # kill 3729680
00:16:07.127 15:57:35 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@972 -- # wait 3729680
00:16:07.386 15:57:36 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:16:07.386 15:57:36 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:16:07.386 15:57:36 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:16:07.386 15:57:36 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:16:07.386 15:57:36 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@278 -- # remove_spdk_ns
00:16:07.386 15:57:36 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:16:07.386 15:57:36 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:16:07.386 15:57:36 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:16:09.922 15:57:38 nvmf_tcp.nvmf_queue_depth -- 
nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:16:09.922 00:16:09.922 real 0m20.055s 00:16:09.922 user 0m24.613s 00:16:09.922 sys 0m5.504s 00:16:09.922 15:57:38 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:09.922 15:57:38 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:16:09.922 ************************************ 00:16:09.922 END TEST nvmf_queue_depth 00:16:09.922 ************************************ 00:16:09.922 15:57:38 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:16:09.922 15:57:38 nvmf_tcp -- nvmf/nvmf.sh@52 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:16:09.922 15:57:38 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:16:09.922 15:57:38 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:09.922 15:57:38 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:09.922 ************************************ 00:16:09.922 START TEST nvmf_target_multipath 00:16:09.922 ************************************ 00:16:09.922 15:57:38 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:16:09.922 * Looking for test storage... 00:16:09.922 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:09.922 15:57:38 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:09.922 15:57:38 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:16:09.922 15:57:38 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:09.922 15:57:38 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:09.922 15:57:38 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:09.922 15:57:38 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:09.922 15:57:38 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:09.922 15:57:38 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:09.922 15:57:38 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:09.922 15:57:38 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:09.922 15:57:38 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:09.922 15:57:38 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:09.922 15:57:38 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:09.922 15:57:38 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:16:09.922 15:57:38 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:09.922 15:57:38 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:09.922 15:57:38 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:09.922 15:57:38 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:09.922 15:57:38 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@45 -- 
# source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:09.922 15:57:38 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:09.922 15:57:38 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:09.922 15:57:38 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:09.922 15:57:38 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:09.922 15:57:38 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:09.922 15:57:38 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:09.922 15:57:38 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:16:09.922 15:57:38 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:09.922 15:57:38 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@47 -- # : 0 00:16:09.922 15:57:38 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:09.922 15:57:38 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:09.922 15:57:38 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:09.922 15:57:38 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:09.922 15:57:38 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:16:09.922 15:57:38 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:09.922 15:57:38 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:09.922 15:57:38 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:09.922 15:57:38 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:09.922 15:57:38 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:09.922 15:57:38 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:16:09.922 15:57:38 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:09.922 15:57:38 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:16:09.922 15:57:38 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:09.922 15:57:38 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:09.922 15:57:38 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:09.922 15:57:38 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:09.922 15:57:38 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:09.922 15:57:38 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:09.922 15:57:38 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:09.922 15:57:38 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:09.922 15:57:38 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:16:09.922 15:57:38 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:16:09.922 15:57:38 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@285 -- # xtrace_disable 00:16:09.922 15:57:38 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:16:15.190 15:57:43 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:15.190 15:57:43 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@291 -- # pci_devs=() 00:16:15.190 15:57:43 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:15.190 15:57:43 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:15.190 15:57:43 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:15.190 15:57:43 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:15.191 15:57:43 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:15.191 15:57:43 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@295 -- # net_devs=() 00:16:15.191 15:57:43 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:15.191 15:57:43 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@296 -- # e810=() 00:16:15.191 15:57:43 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@296 -- # local -ga e810 00:16:15.191 15:57:43 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@297 -- # x722=() 00:16:15.191 15:57:43 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@297 -- # local -ga x722 00:16:15.191 15:57:43 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@298 -- # mlx=() 00:16:15.191 15:57:43 nvmf_tcp.nvmf_target_multipath 
-- nvmf/common.sh@298 -- # local -ga mlx 00:16:15.191 15:57:43 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:15.191 15:57:43 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:15.191 15:57:43 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:15.191 15:57:43 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:15.191 15:57:43 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:15.191 15:57:43 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:15.191 15:57:43 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:15.191 15:57:43 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:15.191 15:57:43 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:15.191 15:57:43 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:15.191 15:57:43 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:15.191 15:57:43 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:15.191 15:57:43 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:15.191 15:57:43 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:15.191 15:57:43 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:15.191 15:57:43 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:15.191 15:57:43 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:15.191 15:57:43 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:15.191 15:57:43 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:16:15.191 Found 0000:86:00.0 (0x8086 - 0x159b) 00:16:15.191 15:57:43 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:15.191 15:57:43 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:15.191 15:57:43 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:15.191 15:57:43 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:15.191 15:57:43 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:15.191 15:57:43 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:15.191 15:57:43 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:16:15.191 Found 0000:86:00.1 (0x8086 - 0x159b) 00:16:15.191 15:57:43 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:15.191 15:57:43 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:15.191 15:57:43 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:15.191 15:57:43 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:15.191 15:57:43 nvmf_tcp.nvmf_target_multipath -- 
nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:15.191 15:57:43 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:15.191 15:57:43 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:15.191 15:57:43 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:16:15.191 15:57:43 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:15.191 15:57:43 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:15.191 15:57:43 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:15.191 15:57:43 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:15.191 15:57:43 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:15.191 15:57:43 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:15.191 15:57:43 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:15.191 15:57:43 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:16:15.191 Found net devices under 0000:86:00.0: cvl_0_0 00:16:15.191 15:57:43 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:15.191 15:57:43 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:15.191 15:57:43 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:15.191 15:57:43 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:15.191 15:57:43 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:15.191 15:57:43 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:15.191 15:57:43 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:15.191 15:57:43 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:15.191 15:57:43 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:16:15.191 Found net devices under 0000:86:00.1: cvl_0_1 00:16:15.191 15:57:43 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:15.191 15:57:43 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:16:15.191 15:57:43 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # is_hw=yes 00:16:15.191 15:57:43 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:16:15.191 15:57:43 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:16:15.191 15:57:43 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:16:15.191 15:57:43 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:15.191 15:57:43 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:15.191 15:57:43 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:15.191 15:57:43 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:15.191 15:57:43 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:15.191 15:57:43 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@237 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:15.191 15:57:43 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:15.191 15:57:43 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:15.191 15:57:43 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:15.191 15:57:43 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:15.191 15:57:43 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:15.191 15:57:43 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:15.191 15:57:43 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:15.191 15:57:43 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:15.191 15:57:43 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:15.191 15:57:43 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:15.191 15:57:43 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:15.191 15:57:43 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:15.191 15:57:43 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:15.191 15:57:43 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:15.191 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:15.191 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.159 ms 00:16:15.191 00:16:15.191 --- 10.0.0.2 ping statistics --- 00:16:15.191 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:15.191 rtt min/avg/max/mdev = 0.159/0.159/0.159/0.000 ms 00:16:15.191 15:57:43 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:15.191 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:15.191 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.212 ms 00:16:15.191 00:16:15.191 --- 10.0.0.1 ping statistics --- 00:16:15.191 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:15.191 rtt min/avg/max/mdev = 0.212/0.212/0.212/0.000 ms 00:16:15.191 15:57:43 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:15.191 15:57:43 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@422 -- # return 0 00:16:15.191 15:57:43 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:15.191 15:57:43 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:15.191 15:57:43 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:15.191 15:57:43 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:15.191 15:57:43 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:15.191 15:57:43 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:15.191 15:57:43 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:15.191 15:57:43 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:16:15.191 15:57:43 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:16:15.191 only one NIC for nvmf test 00:16:15.191 15:57:43 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:16:15.191 15:57:43 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:15.191 15:57:43 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:16:15.191 15:57:43 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:15.191 15:57:43 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:16:15.191 15:57:43 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:15.191 15:57:43 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:15.191 rmmod nvme_tcp 00:16:15.191 rmmod nvme_fabrics 00:16:15.191 rmmod nvme_keyring 00:16:15.191 15:57:43 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:15.191 15:57:43 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:16:15.191 15:57:43 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:16:15.191 15:57:43 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:16:15.191 15:57:43 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:15.191 15:57:43 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:15.191 15:57:43 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:15.191 15:57:43 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:15.191 15:57:43 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:15.191 15:57:43 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:15.191 15:57:43 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:15.191 15:57:43 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:17.095 15:57:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush 
cvl_0_1 00:16:17.095 15:57:45 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:16:17.095 15:57:45 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:16:17.095 15:57:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:17.095 15:57:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:16:17.095 15:57:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:17.095 15:57:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:16:17.095 15:57:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:17.095 15:57:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:17.095 15:57:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:17.095 15:57:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:16:17.095 15:57:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:16:17.095 15:57:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:16:17.095 15:57:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:17.095 15:57:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:17.095 15:57:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:17.095 15:57:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:17.095 15:57:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:17.095 15:57:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:17.095 15:57:45 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:17.095 15:57:45 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:17.095 15:57:46 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:16:17.095 00:16:17.095 real 0m7.708s 00:16:17.095 user 0m1.535s 00:16:17.095 sys 0m4.145s 00:16:17.095 15:57:46 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:17.095 15:57:46 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:16:17.095 ************************************ 00:16:17.095 END TEST nvmf_target_multipath 00:16:17.095 ************************************ 00:16:17.355 15:57:46 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:16:17.355 15:57:46 nvmf_tcp -- nvmf/nvmf.sh@53 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:16:17.355 15:57:46 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:16:17.355 15:57:46 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:17.355 15:57:46 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:17.355 ************************************ 00:16:17.355 START TEST nvmf_zcopy 00:16:17.355 ************************************ 00:16:17.355 15:57:46 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:16:17.355 * Looking for test storage... 
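A note on the nvmf_target_multipath run that just finished above: it exits at target/multipath.sh@48 without exercising any multipath logic because this rig wires up only one usable NIC pair, so nvmf_tcp_init left NVMF_SECOND_TARGET_IP empty (nvmf/common.sh@240 in the trace). The guard traced at multipath.sh@45-48 amounts to the following paraphrase; the variable name is inferred from the common.sh trace, not quoted from the script:

  # the bare '[' -z ']' seen at multipath.sh@45 above, with its empty expansion
  if [ -z "$NVMF_SECOND_TARGET_IP" ]; then
          echo "only one NIC for nvmf test"
          nvmftestfini        # tear down the netns and unload nvme-tcp, as traced
          exit 0
  fi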
00:16:17.355 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:17.355 15:57:46 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:17.355 15:57:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:16:17.355 15:57:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:17.355 15:57:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:17.355 15:57:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:17.355 15:57:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:17.355 15:57:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:17.355 15:57:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:17.355 15:57:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:17.355 15:57:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:17.355 15:57:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:17.355 15:57:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:17.355 15:57:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:17.355 15:57:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:16:17.355 15:57:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:17.355 15:57:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:17.355 15:57:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:17.355 15:57:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:17.355 15:57:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:17.355 15:57:46 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:17.355 15:57:46 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:17.355 15:57:46 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:17.355 15:57:46 nvmf_tcp.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:17.355 15:57:46 nvmf_tcp.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
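One quirk worth flagging in these paths/export.sh entries (also visible at the start of the two earlier tests): each re-source prepends the go/protoc/golangci directories to PATH again without checking for duplicates, so the same prefix accumulates with every test in the run. Harmless, but if deduplication were ever wanted, a hypothetical one-liner, not something export.sh does today:

  # keep the first occurrence of each PATH component, preserving order
  PATH=$(printf '%s' "$PATH" | awk -v RS=: -v ORS=: '!seen[$0]++' | sed 's/:$//')
  export PATH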
00:16:17.355 15:57:46 nvmf_tcp.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:17.355 15:57:46 nvmf_tcp.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:16:17.355 15:57:46 nvmf_tcp.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:17.355 15:57:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@47 -- # : 0 00:16:17.355 15:57:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:17.355 15:57:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:17.355 15:57:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:17.355 15:57:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:17.355 15:57:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:17.355 15:57:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:17.355 15:57:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:17.355 15:57:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:17.355 15:57:46 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:16:17.355 15:57:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:17.355 15:57:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:17.355 15:57:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:17.355 15:57:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:17.355 15:57:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:17.355 15:57:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:17.355 15:57:46 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:17.355 15:57:46 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:17.355 15:57:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:16:17.355 15:57:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:16:17.355 15:57:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@285 -- # xtrace_disable 00:16:17.355 15:57:46 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:16:22.627 15:57:51 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:22.627 15:57:51 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@291 -- # pci_devs=() 00:16:22.627 15:57:51 nvmf_tcp.nvmf_zcopy -- 
nvmf/common.sh@291 -- # local -a pci_devs 00:16:22.627 15:57:51 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:22.627 15:57:51 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:22.627 15:57:51 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:22.627 15:57:51 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:22.627 15:57:51 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@295 -- # net_devs=() 00:16:22.627 15:57:51 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:22.627 15:57:51 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@296 -- # e810=() 00:16:22.627 15:57:51 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@296 -- # local -ga e810 00:16:22.627 15:57:51 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@297 -- # x722=() 00:16:22.627 15:57:51 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@297 -- # local -ga x722 00:16:22.627 15:57:51 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@298 -- # mlx=() 00:16:22.627 15:57:51 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@298 -- # local -ga mlx 00:16:22.627 15:57:51 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:22.627 15:57:51 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:22.627 15:57:51 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:22.627 15:57:51 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:22.627 15:57:51 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:22.627 15:57:51 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:22.627 15:57:51 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:22.627 15:57:51 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:22.627 15:57:51 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:22.627 15:57:51 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:22.627 15:57:51 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:22.627 15:57:51 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:22.627 15:57:51 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:22.627 15:57:51 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:22.627 15:57:51 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:22.627 15:57:51 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:22.627 15:57:51 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:22.627 15:57:51 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:22.627 15:57:51 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:16:22.627 Found 0000:86:00.0 (0x8086 - 0x159b) 00:16:22.627 15:57:51 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:22.627 15:57:51 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:22.627 15:57:51 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:22.627 15:57:51 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:22.627 15:57:51 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:22.627 
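This device scan buckets NICs purely by PCI vendor:device ID before touching any driver: 0x8086:0x1592 and 0x8086:0x159b are Intel E810 parts, 0x8086:0x37d2 is X722, and the 0x15b3 entries are Mellanox ConnectX variants. A minimal standalone sketch of the same sysfs walk (the matched ID and the "Found ..." wording come from this log; the loop itself is illustrative, not the harness code):

# Walk PCI functions in sysfs and report Ethernet-class devices whose
# vendor:device pair matches the E810 ID (0x8086:0x159b) found above.
for dev in /sys/bus/pci/devices/*; do
    read -r class < "$dev/class"
    [[ $class == 0x0200* ]] || continue     # 0x0200xx = Ethernet controller
    read -r vendor < "$dev/vendor"
    read -r device < "$dev/device"
    [[ $vendor == 0x8086 && $device == 0x159b ]] && \
        echo "Found ${dev##*/} ($vendor - $device)"
done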
15:57:51 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:22.627 15:57:51 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:16:22.627 Found 0000:86:00.1 (0x8086 - 0x159b) 00:16:22.627 15:57:51 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:22.627 15:57:51 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:22.627 15:57:51 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:22.627 15:57:51 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:22.627 15:57:51 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:22.627 15:57:51 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:22.627 15:57:51 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:22.627 15:57:51 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:16:22.627 15:57:51 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:22.627 15:57:51 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:22.627 15:57:51 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:22.627 15:57:51 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:22.627 15:57:51 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:22.627 15:57:51 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:22.627 15:57:51 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:22.627 15:57:51 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:16:22.627 Found net devices under 0000:86:00.0: cvl_0_0 00:16:22.627 15:57:51 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:22.627 15:57:51 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:22.627 15:57:51 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:22.627 15:57:51 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:22.627 15:57:51 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:22.627 15:57:51 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:22.627 15:57:51 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:22.627 15:57:51 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:22.627 15:57:51 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:16:22.627 Found net devices under 0000:86:00.1: cvl_0_1 00:16:22.628 15:57:51 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:22.628 15:57:51 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:16:22.628 15:57:51 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # is_hw=yes 00:16:22.628 15:57:51 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:16:22.628 15:57:51 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:16:22.628 15:57:51 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:16:22.628 15:57:51 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:22.628 15:57:51 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:22.628 15:57:51 nvmf_tcp.nvmf_zcopy -- 
nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:22.628 15:57:51 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:22.888 15:57:51 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:22.888 15:57:51 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:22.888 15:57:51 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:22.888 15:57:51 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:22.888 15:57:51 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:22.888 15:57:51 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:22.888 15:57:51 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:22.888 15:57:51 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:22.888 15:57:51 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:22.888 15:57:51 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:22.888 15:57:51 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:22.888 15:57:51 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:22.888 15:57:51 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:22.888 15:57:51 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:22.888 15:57:51 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:22.888 15:57:51 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:22.888 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:22.888 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.191 ms 00:16:22.888 00:16:22.888 --- 10.0.0.2 ping statistics --- 00:16:22.888 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:22.888 rtt min/avg/max/mdev = 0.191/0.191/0.191/0.000 ms 00:16:22.888 15:57:51 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:22.888 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
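Condensing the plumbing traced above: the first E810 port (cvl_0_0) is moved into a private network namespace, cvl_0_0_ns_spdk, and addressed as the target side at 10.0.0.2, while cvl_0_1 stays in the root namespace as the initiator at 10.0.0.1; an iptables rule opens the NVMe/TCP listener port, and a ping in each direction (the second completes just below) verifies the path. The same sequence as a standalone sketch, using this run's interface and namespace names:

# Target NIC in a private netns, initiator NIC left in the root namespace.
ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target side
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT       # NVMe/TCP port
ping -c 1 10.0.0.2                                 # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # target -> initiator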
00:16:22.888 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.216 ms 00:16:22.888 00:16:22.888 --- 10.0.0.1 ping statistics --- 00:16:22.888 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:22.888 rtt min/avg/max/mdev = 0.216/0.216/0.216/0.000 ms 00:16:22.888 15:57:51 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:22.888 15:57:51 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@422 -- # return 0 00:16:22.888 15:57:51 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:22.888 15:57:51 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:22.888 15:57:51 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:22.888 15:57:51 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:22.888 15:57:51 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:22.888 15:57:51 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:22.888 15:57:51 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:22.888 15:57:51 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:16:22.888 15:57:51 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:22.888 15:57:51 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:22.888 15:57:51 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:16:22.888 15:57:51 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@481 -- # nvmfpid=3738569 00:16:22.888 15:57:51 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@482 -- # waitforlisten 3738569 00:16:22.888 15:57:51 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:16:22.888 15:57:51 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@829 -- # '[' -z 3738569 ']' 00:16:22.888 15:57:51 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:22.888 15:57:51 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:22.888 15:57:51 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:22.888 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:22.888 15:57:51 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:22.888 15:57:51 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:16:23.148 [2024-07-15 15:57:51.861236] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 00:16:23.148 [2024-07-15 15:57:51.861293] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:23.148 EAL: No free 2048 kB hugepages reported on node 1 00:16:23.148 [2024-07-15 15:57:51.917642] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:23.148 [2024-07-15 15:57:51.993751] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:23.148 [2024-07-15 15:57:51.993791] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:16:23.148 [2024-07-15 15:57:51.993798] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:23.148 [2024-07-15 15:57:51.993805] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:23.148 [2024-07-15 15:57:51.993810] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:23.148 [2024-07-15 15:57:51.993827] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:24.086 15:57:52 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:24.086 15:57:52 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@862 -- # return 0 00:16:24.086 15:57:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:24.086 15:57:52 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:24.086 15:57:52 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:16:24.086 15:57:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:24.086 15:57:52 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:16:24.086 15:57:52 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:16:24.086 15:57:52 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:24.086 15:57:52 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:16:24.086 [2024-07-15 15:57:52.697728] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:24.086 15:57:52 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:24.086 15:57:52 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:16:24.086 15:57:52 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:24.086 15:57:52 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:16:24.086 15:57:52 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:24.086 15:57:52 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:24.086 15:57:52 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:24.086 15:57:52 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:16:24.086 [2024-07-15 15:57:52.713850] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:24.086 15:57:52 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:24.086 15:57:52 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:16:24.086 15:57:52 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:24.086 15:57:52 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:16:24.086 15:57:52 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:24.086 15:57:52 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:16:24.086 15:57:52 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:24.086 15:57:52 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:16:24.086 malloc0 00:16:24.086 15:57:52 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:24.086 
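The rpc_cmd calls above are the harness wrapper around scripts/rpc.py talking to the target over /var/tmp/spdk.sock: they build a zero-copy-enabled TCP transport, a subsystem with room for 10 namespaces, a listener on the target address, and a 32 MiB malloc bdev. The same setup expressed as direct rpc.py calls, condensed from the trace (paths assume the SPDK source tree; the final add_ns is the step that follows immediately below):

# Zcopy target setup, condensed from the rpc_cmd trace above.
./scripts/rpc.py nvmf_create_transport -t tcp -o -c 0 --zcopy
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
./scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0      # 32 MiB bdev, 4096-byte blocks
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1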
15:57:52 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:16:24.086 15:57:52 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:24.086 15:57:52 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:16:24.086 15:57:52 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:24.086 15:57:52 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:16:24.086 15:57:52 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:16:24.086 15:57:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:16:24.086 15:57:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:16:24.086 15:57:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:16:24.086 15:57:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:16:24.086 { 00:16:24.086 "params": { 00:16:24.086 "name": "Nvme$subsystem", 00:16:24.086 "trtype": "$TEST_TRANSPORT", 00:16:24.086 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:24.086 "adrfam": "ipv4", 00:16:24.086 "trsvcid": "$NVMF_PORT", 00:16:24.086 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:24.086 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:24.086 "hdgst": ${hdgst:-false}, 00:16:24.086 "ddgst": ${ddgst:-false} 00:16:24.086 }, 00:16:24.086 "method": "bdev_nvme_attach_controller" 00:16:24.086 } 00:16:24.086 EOF 00:16:24.086 )") 00:16:24.086 15:57:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:16:24.086 15:57:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 00:16:24.086 15:57:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:16:24.086 15:57:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:16:24.086 "params": { 00:16:24.086 "name": "Nvme1", 00:16:24.086 "trtype": "tcp", 00:16:24.086 "traddr": "10.0.0.2", 00:16:24.086 "adrfam": "ipv4", 00:16:24.086 "trsvcid": "4420", 00:16:24.086 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:24.086 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:24.086 "hdgst": false, 00:16:24.086 "ddgst": false 00:16:24.086 }, 00:16:24.086 "method": "bdev_nvme_attach_controller" 00:16:24.086 }' 00:16:24.086 [2024-07-15 15:57:52.789913] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 00:16:24.086 [2024-07-15 15:57:52.789957] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3738805 ] 00:16:24.086 EAL: No free 2048 kB hugepages reported on node 1 00:16:24.086 [2024-07-15 15:57:52.840010] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:24.086 [2024-07-15 15:57:52.913639] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:24.345 Running I/O for 10 seconds... 
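bdevperf receives its bdev configuration as JSON on an inherited descriptor (--json /dev/fd/62), generated by gen_nvmf_target_json from the attach-controller fragment printed above. A file-based equivalent sketch, assuming the fragment sits inside SPDK's standard subsystems/bdev/config envelope (the envelope itself is not shown in this trace):

# Hypothetical file-based version of the fd-passing invocation above.
cat > /tmp/bdevperf.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme1",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF
./build/examples/bdevperf --json /tmp/bdevperf.json -t 10 -q 128 -w verify -o 8192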
00:16:34.325 00:16:34.325 Latency(us) 00:16:34.326 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:34.326 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:16:34.326 Verification LBA range: start 0x0 length 0x1000 00:16:34.326 Nvme1n1 : 10.01 8619.82 67.34 0.00 0.00 14806.06 1809.36 25188.62 00:16:34.326 =================================================================================================================== 00:16:34.326 Total : 8619.82 67.34 0.00 0.00 14806.06 1809.36 25188.62 00:16:34.584 15:58:03 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=3740437 00:16:34.584 15:58:03 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:16:34.584 15:58:03 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:16:34.584 15:58:03 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:16:34.584 15:58:03 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:16:34.584 [2024-07-15 15:58:03.414125] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.584 [2024-07-15 15:58:03.414156] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.584 15:58:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:16:34.584 15:58:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:16:34.584 15:58:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:16:34.584 15:58:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:16:34.584 { 00:16:34.584 "params": { 00:16:34.584 "name": "Nvme$subsystem", 00:16:34.584 "trtype": "$TEST_TRANSPORT", 00:16:34.584 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:34.584 "adrfam": "ipv4", 00:16:34.584 "trsvcid": "$NVMF_PORT", 00:16:34.584 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:34.584 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:34.584 "hdgst": ${hdgst:-false}, 00:16:34.584 "ddgst": ${ddgst:-false} 00:16:34.584 }, 00:16:34.584 "method": "bdev_nvme_attach_controller" 00:16:34.584 } 00:16:34.584 EOF 00:16:34.584 )") 00:16:34.584 15:58:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:16:34.584 15:58:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 
00:16:34.584 [2024-07-15 15:58:03.422106] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.584 [2024-07-15 15:58:03.422117] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.584 15:58:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:16:34.584 15:58:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:16:34.584 "params": { 00:16:34.584 "name": "Nvme1", 00:16:34.584 "trtype": "tcp", 00:16:34.584 "traddr": "10.0.0.2", 00:16:34.584 "adrfam": "ipv4", 00:16:34.584 "trsvcid": "4420", 00:16:34.584 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:34.584 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:34.584 "hdgst": false, 00:16:34.584 "ddgst": false 00:16:34.584 }, 00:16:34.584 "method": "bdev_nvme_attach_controller" 00:16:34.584 }' 00:16:34.584 [2024-07-15 15:58:03.430124] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.584 [2024-07-15 15:58:03.430135] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.584 [2024-07-15 15:58:03.438143] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.584 [2024-07-15 15:58:03.438153] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.584 [2024-07-15 15:58:03.446165] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.584 [2024-07-15 15:58:03.446175] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.584 [2024-07-15 15:58:03.454185] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.584 [2024-07-15 15:58:03.454194] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.584 [2024-07-15 15:58:03.454471] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 
00:16:34.584 [2024-07-15 15:58:03.454513] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3740437 ] 00:16:34.584 [2024-07-15 15:58:03.462205] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.584 [2024-07-15 15:58:03.462215] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.584 [2024-07-15 15:58:03.470233] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.584 [2024-07-15 15:58:03.470243] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.584 EAL: No free 2048 kB hugepages reported on node 1 00:16:34.584 [2024-07-15 15:58:03.478256] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.584 [2024-07-15 15:58:03.478265] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.584 [2024-07-15 15:58:03.486275] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.584 [2024-07-15 15:58:03.486285] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.584 [2024-07-15 15:58:03.494294] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.584 [2024-07-15 15:58:03.494303] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.584 [2024-07-15 15:58:03.502315] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.584 [2024-07-15 15:58:03.502325] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.584 [2024-07-15 15:58:03.508570] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:34.584 [2024-07-15 15:58:03.510336] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.584 [2024-07-15 15:58:03.510345] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.843 [2024-07-15 15:58:03.518362] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.843 [2024-07-15 15:58:03.518373] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.843 [2024-07-15 15:58:03.526379] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.843 [2024-07-15 15:58:03.526389] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.843 [2024-07-15 15:58:03.534403] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.843 [2024-07-15 15:58:03.534413] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.843 [2024-07-15 15:58:03.542424] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.843 [2024-07-15 15:58:03.542434] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.843 [2024-07-15 15:58:03.550448] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.843 [2024-07-15 15:58:03.550467] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.843 [2024-07-15 15:58:03.558470] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.843 [2024-07-15 
15:58:03.558482] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.843 [2024-07-15 15:58:03.566489] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.843 [2024-07-15 15:58:03.566498] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.843 [2024-07-15 15:58:03.574509] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.843 [2024-07-15 15:58:03.574519] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.843 [2024-07-15 15:58:03.582530] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.843 [2024-07-15 15:58:03.582541] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.843 [2024-07-15 15:58:03.585363] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:34.843 [2024-07-15 15:58:03.590553] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.843 [2024-07-15 15:58:03.590579] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.843 [2024-07-15 15:58:03.598585] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.843 [2024-07-15 15:58:03.598602] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.843 [2024-07-15 15:58:03.606602] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.843 [2024-07-15 15:58:03.606618] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.843 [2024-07-15 15:58:03.614621] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.843 [2024-07-15 15:58:03.614634] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.843 [2024-07-15 15:58:03.622640] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.843 [2024-07-15 15:58:03.622652] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.843 [2024-07-15 15:58:03.630661] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.843 [2024-07-15 15:58:03.630673] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.843 [2024-07-15 15:58:03.638681] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.843 [2024-07-15 15:58:03.638692] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.843 [2024-07-15 15:58:03.646709] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.843 [2024-07-15 15:58:03.646722] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.843 [2024-07-15 15:58:03.654726] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.843 [2024-07-15 15:58:03.654736] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.843 [2024-07-15 15:58:03.662748] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.843 [2024-07-15 15:58:03.662758] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.843 [2024-07-15 15:58:03.670768] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.843 [2024-07-15 15:58:03.670777] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.843 [2024-07-15 15:58:03.678808] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.843 [2024-07-15 15:58:03.678828] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.843 [2024-07-15 15:58:03.686824] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.843 [2024-07-15 15:58:03.686838] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.843 [2024-07-15 15:58:03.694843] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.843 [2024-07-15 15:58:03.694856] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.843 [2024-07-15 15:58:03.702863] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.843 [2024-07-15 15:58:03.702877] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.843 [2024-07-15 15:58:03.710887] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.843 [2024-07-15 15:58:03.710900] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.843 [2024-07-15 15:58:03.718909] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.843 [2024-07-15 15:58:03.718919] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.843 [2024-07-15 15:58:03.726929] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.843 [2024-07-15 15:58:03.726940] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.843 [2024-07-15 15:58:03.734950] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.843 [2024-07-15 15:58:03.734961] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.843 [2024-07-15 15:58:03.742972] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.843 [2024-07-15 15:58:03.742987] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.843 [2024-07-15 15:58:03.750995] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.843 [2024-07-15 15:58:03.751008] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.843 [2024-07-15 15:58:03.759017] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.843 [2024-07-15 15:58:03.759031] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:34.843 [2024-07-15 15:58:03.767038] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:34.843 [2024-07-15 15:58:03.767047] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:35.106 [2024-07-15 15:58:03.813415] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:35.106 [2024-07-15 15:58:03.813431] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:35.106 [2024-07-15 15:58:03.819183] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:35.106 [2024-07-15 15:58:03.819193] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:35.106 Running I/O for 5 seconds... 
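The run of repeated errors that follows is the point of this phase: while the 5-second randrw job is in flight, the test keeps re-issuing nvmf_subsystem_add_ns for NSID 1, which malloc0 already occupies, so spdk_nvmf_subsystem_add_ns_ext rejects each attempt and the RPC layer reports "Unable to add namespace", exercising the rejected-management-operation path under live I/O. One such rejection can be reproduced by hand (sketch; subsystem and bdev names taken from this run):

# A second add with an NSID that is already attached must fail cleanly.
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 \
    || echo "rejected as expected: Requested NSID 1 already in use"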
00:16:35.106 [2024-07-15 15:58:03.827203] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:35.106 [2024-07-15 15:58:03.827212] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[the identical two-line add_ns rejection repeats with advancing timestamps, 15:58:03.839 through 15:58:04.860, for the remainder of the 5-second run]
00:16:36.143 [2024-07-15 15:58:04.869639] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:36.143 [2024-07-15 15:58:04.869657] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:36.143 [2024-07-15 15:58:04.878297]
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:36.143 [2024-07-15 15:58:04.878314] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:36.143 [2024-07-15 15:58:04.885646] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:36.143 [2024-07-15 15:58:04.885664] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:36.143 [2024-07-15 15:58:04.896085] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:36.143 [2024-07-15 15:58:04.896103] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:36.143 [2024-07-15 15:58:04.903373] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:36.143 [2024-07-15 15:58:04.903391] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:36.143 [2024-07-15 15:58:04.913755] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:36.143 [2024-07-15 15:58:04.913773] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:36.143 [2024-07-15 15:58:04.922884] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:36.143 [2024-07-15 15:58:04.922903] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:36.143 [2024-07-15 15:58:04.931583] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:36.143 [2024-07-15 15:58:04.931601] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:36.143 [2024-07-15 15:58:04.940737] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:36.143 [2024-07-15 15:58:04.940754] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:36.143 [2024-07-15 15:58:04.950340] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:36.143 [2024-07-15 15:58:04.950358] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:36.143 [2024-07-15 15:58:04.959676] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:36.143 [2024-07-15 15:58:04.959694] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:36.143 [2024-07-15 15:58:04.968375] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:36.144 [2024-07-15 15:58:04.968396] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:36.144 [2024-07-15 15:58:04.976986] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:36.144 [2024-07-15 15:58:04.977005] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:36.144 [2024-07-15 15:58:04.986354] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:36.144 [2024-07-15 15:58:04.986372] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:36.144 [2024-07-15 15:58:04.995452] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:36.144 [2024-07-15 15:58:04.995470] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:36.144 [2024-07-15 15:58:05.004662] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:36.144 [2024-07-15 15:58:05.004680] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:36.144 [2024-07-15 15:58:05.013202] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:36.144 [2024-07-15 15:58:05.013219] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:36.144 [2024-07-15 15:58:05.022594] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:36.144 [2024-07-15 15:58:05.022611] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:36.144 [2024-07-15 15:58:05.029526] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:36.144 [2024-07-15 15:58:05.029545] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:36.144 [2024-07-15 15:58:05.039800] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:36.144 [2024-07-15 15:58:05.039818] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:36.144 [2024-07-15 15:58:05.048366] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:36.144 [2024-07-15 15:58:05.048384] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:36.144 [2024-07-15 15:58:05.057148] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:36.144 [2024-07-15 15:58:05.057166] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:36.144 [2024-07-15 15:58:05.066495] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:36.144 [2024-07-15 15:58:05.066513] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:36.144 [2024-07-15 15:58:05.075772] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:36.144 [2024-07-15 15:58:05.075790] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:36.403 [2024-07-15 15:58:05.082678] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:36.403 [2024-07-15 15:58:05.082695] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:36.403 [2024-07-15 15:58:05.092865] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:36.403 [2024-07-15 15:58:05.092883] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:36.403 [2024-07-15 15:58:05.102343] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:36.403 [2024-07-15 15:58:05.102362] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:36.403 [2024-07-15 15:58:05.111302] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:36.403 [2024-07-15 15:58:05.111320] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:36.403 [2024-07-15 15:58:05.119690] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:36.403 [2024-07-15 15:58:05.119708] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:36.403 [2024-07-15 15:58:05.128392] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:36.403 [2024-07-15 15:58:05.128410] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:36.403 [2024-07-15 15:58:05.137978] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:36.403 [2024-07-15 15:58:05.137996] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:36.403 [2024-07-15 15:58:05.147377] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:36.403 [2024-07-15 15:58:05.147395] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:36.403 [2024-07-15 15:58:05.155898] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:36.403 [2024-07-15 15:58:05.155915] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:36.403 [2024-07-15 15:58:05.165042] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:36.403 [2024-07-15 15:58:05.165060] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:36.403 [2024-07-15 15:58:05.174337] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:36.403 [2024-07-15 15:58:05.174354] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:36.403 [2024-07-15 15:58:05.183797] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:36.403 [2024-07-15 15:58:05.183815] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:36.403 [2024-07-15 15:58:05.192572] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:36.403 [2024-07-15 15:58:05.192589] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:36.403 [2024-07-15 15:58:05.199442] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:36.403 [2024-07-15 15:58:05.199459] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:36.403 [2024-07-15 15:58:05.210573] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:36.403 [2024-07-15 15:58:05.210590] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:36.403 [2024-07-15 15:58:05.219429] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:36.403 [2024-07-15 15:58:05.219447] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:36.403 [2024-07-15 15:58:05.227903] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:36.403 [2024-07-15 15:58:05.227920] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:36.403 [2024-07-15 15:58:05.237003] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:36.403 [2024-07-15 15:58:05.237021] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:36.403 [2024-07-15 15:58:05.246367] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:36.403 [2024-07-15 15:58:05.246385] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:36.403 [2024-07-15 15:58:05.255677] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:36.403 [2024-07-15 15:58:05.255695] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:36.403 [2024-07-15 15:58:05.264386] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:36.403 [2024-07-15 15:58:05.264403] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:36.403 [2024-07-15 15:58:05.273706] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:36.403 [2024-07-15 15:58:05.273723] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:36.403 [2024-07-15 15:58:05.282778] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:36.403 [2024-07-15 15:58:05.282795] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:36.403 [2024-07-15 15:58:05.292189] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:36.403 [2024-07-15 15:58:05.292206] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:36.403 [2024-07-15 15:58:05.301494] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:36.403 [2024-07-15 15:58:05.301512] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:36.403 [2024-07-15 15:58:05.310856] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:36.403 [2024-07-15 15:58:05.310874] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:36.403 [2024-07-15 15:58:05.319992] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:36.403 [2024-07-15 15:58:05.320009] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:36.403 [2024-07-15 15:58:05.328846] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:36.403 [2024-07-15 15:58:05.328864] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:36.662 [2024-07-15 15:58:05.338251] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:36.662 [2024-07-15 15:58:05.338268] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:36.662 [2024-07-15 15:58:05.347689] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:36.662 [2024-07-15 15:58:05.347707] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:36.662 [2024-07-15 15:58:05.356534] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:36.662 [2024-07-15 15:58:05.356551] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:36.662 [2024-07-15 15:58:05.366087] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:36.662 [2024-07-15 15:58:05.366105] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:36.662 [2024-07-15 15:58:05.375394] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:36.662 [2024-07-15 15:58:05.375411] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:36.662 [2024-07-15 15:58:05.384702] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:36.662 [2024-07-15 15:58:05.384720] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:36.662 [2024-07-15 15:58:05.394071] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:36.662 [2024-07-15 15:58:05.394088] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:36.662 [2024-07-15 15:58:05.402744] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:36.662 [2024-07-15 15:58:05.402760] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:36.662 [2024-07-15 15:58:05.411385] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:36.662 [2024-07-15 15:58:05.411402] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:36.662 [2024-07-15 15:58:05.420552] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:36.662 [2024-07-15 15:58:05.420569] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:36.662 [2024-07-15 15:58:05.430107] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:36.662 [2024-07-15 15:58:05.430125] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:36.662 [2024-07-15 15:58:05.438767] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:36.662 [2024-07-15 15:58:05.438786] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:36.662 [2024-07-15 15:58:05.448073] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:36.662 [2024-07-15 15:58:05.448090] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:36.662 [2024-07-15 15:58:05.457430] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:36.662 [2024-07-15 15:58:05.457448] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:36.662 [2024-07-15 15:58:05.466787] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:36.662 [2024-07-15 15:58:05.466804] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:36.662 [2024-07-15 15:58:05.476030] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:36.662 [2024-07-15 15:58:05.476048] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:36.662 [2024-07-15 15:58:05.485022] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:36.662 [2024-07-15 15:58:05.485040] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:36.663 [2024-07-15 15:58:05.494307] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:36.663 [2024-07-15 15:58:05.494324] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:36.663 [2024-07-15 15:58:05.503657] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:36.663 [2024-07-15 15:58:05.503675] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:36.663 [2024-07-15 15:58:05.512281] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:36.663 [2024-07-15 15:58:05.512299] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:36.663 [2024-07-15 15:58:05.521090] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:36.663 [2024-07-15 15:58:05.521107] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:36.663 [2024-07-15 15:58:05.529832] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:36.663 [2024-07-15 15:58:05.529849] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:36.663 [2024-07-15 15:58:05.539088] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:36.663 [2024-07-15 15:58:05.539105] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:36.663 [2024-07-15 15:58:05.547749] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:36.663 [2024-07-15 15:58:05.547766] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:36.663 [2024-07-15 15:58:05.556968] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:36.663 [2024-07-15 15:58:05.556986] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:36.663 [2024-07-15 15:58:05.565678] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:36.663 [2024-07-15 15:58:05.565695] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:36.663 [2024-07-15 15:58:05.574316] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:36.663 [2024-07-15 15:58:05.574333] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:36.663 [2024-07-15 15:58:05.583001] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:36.663 [2024-07-15 15:58:05.583019] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:36.663 [2024-07-15 15:58:05.591690] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:36.663 [2024-07-15 15:58:05.591708] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:36.922 [2024-07-15 15:58:05.600985] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:36.922 [2024-07-15 15:58:05.601002] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:36.922 [2024-07-15 15:58:05.610212] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:36.922 [2024-07-15 15:58:05.610237] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:36.922 [2024-07-15 15:58:05.619508] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:36.922 [2024-07-15 15:58:05.619525] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:36.922 [2024-07-15 15:58:05.628606] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:36.922 [2024-07-15 15:58:05.628623] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:36.922 [2024-07-15 15:58:05.637943] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:36.922 [2024-07-15 15:58:05.637959] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:36.922 [2024-07-15 15:58:05.644896] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:36.922 [2024-07-15 15:58:05.644917] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:36.922 [2024-07-15 15:58:05.655259] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:36.922 [2024-07-15 15:58:05.655276] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:36.922 [2024-07-15 15:58:05.663912] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:36.922 [2024-07-15 15:58:05.663932] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:36.922 [2024-07-15 15:58:05.673407] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:36.922 [2024-07-15 15:58:05.673424] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:36.922 [2024-07-15 15:58:05.682160] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:36.922 [2024-07-15 15:58:05.682177] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:36.922 [2024-07-15 15:58:05.691468] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:36.922 [2024-07-15 15:58:05.691496] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:36.922 [2024-07-15 15:58:05.700815] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:36.922 [2024-07-15 15:58:05.700832] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:36.922 [2024-07-15 15:58:05.709833] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:36.922 [2024-07-15 15:58:05.709850] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:36.922 [2024-07-15 15:58:05.716850] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:36.922 [2024-07-15 15:58:05.716867] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:36.922 [2024-07-15 15:58:05.727765] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:36.922 [2024-07-15 15:58:05.727782] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:36.922 [2024-07-15 15:58:05.737101] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:36.922 [2024-07-15 15:58:05.737118] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:36.922 [2024-07-15 15:58:05.745674] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:36.922 [2024-07-15 15:58:05.745692] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:36.922 [2024-07-15 15:58:05.754783] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:36.922 [2024-07-15 15:58:05.754801] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:36.922 [2024-07-15 15:58:05.764216] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:36.922 [2024-07-15 15:58:05.764239] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:36.922 [2024-07-15 15:58:05.773658] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:36.922 [2024-07-15 15:58:05.773674] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:36.922 [2024-07-15 15:58:05.782670] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:36.922 [2024-07-15 15:58:05.782686] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:36.922 [2024-07-15 15:58:05.791803] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:36.922 [2024-07-15 15:58:05.791820] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:36.922 [2024-07-15 15:58:05.801157] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:36.922 [2024-07-15 15:58:05.801175] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:36.922 [2024-07-15 15:58:05.809703] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:36.922 [2024-07-15 15:58:05.809720] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:36.922 [2024-07-15 15:58:05.818969] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:36.922 [2024-07-15 15:58:05.818989] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:36.922 [2024-07-15 15:58:05.827444] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:36.922 [2024-07-15 15:58:05.827461] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:36.922 [2024-07-15 15:58:05.836124] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:36.922 [2024-07-15 15:58:05.836142] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:36.922 [2024-07-15 15:58:05.844823] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:36.922 [2024-07-15 15:58:05.844840] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:36.922 [2024-07-15 15:58:05.853508] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:36.922 [2024-07-15 15:58:05.853525] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:37.182 [2024-07-15 15:58:05.862951] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:37.182 [2024-07-15 15:58:05.862968] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:37.182 [2024-07-15 15:58:05.872439] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:37.182 [2024-07-15 15:58:05.872468] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:37.182 [2024-07-15 15:58:05.881743] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:37.182 [2024-07-15 15:58:05.881761] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:37.182 [2024-07-15 15:58:05.890787] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:37.182 [2024-07-15 15:58:05.890805] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:37.182 [2024-07-15 15:58:05.899962] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:37.182 [2024-07-15 15:58:05.899980] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:37.182 [2024-07-15 15:58:05.908791] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:37.182 [2024-07-15 15:58:05.908809] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:37.182 [2024-07-15 15:58:05.918059] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:37.182 [2024-07-15 15:58:05.918076] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:37.182 [2024-07-15 15:58:05.926718] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:37.182 [2024-07-15 15:58:05.926735] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:37.182 [2024-07-15 15:58:05.935783] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:37.182 [2024-07-15 15:58:05.935800] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:37.182 [2024-07-15 15:58:05.944758] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:37.182 [2024-07-15 15:58:05.944776] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:37.182 [2024-07-15 15:58:05.953872] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:37.182 [2024-07-15 15:58:05.953889] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:37.182 [2024-07-15 15:58:05.963094] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:37.182 [2024-07-15 15:58:05.963111] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:37.182 [2024-07-15 15:58:05.972244] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:37.182 [2024-07-15 15:58:05.972262] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:37.182 [2024-07-15 15:58:05.980928] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:37.182 [2024-07-15 15:58:05.980945] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:37.182 [2024-07-15 15:58:05.989974] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:37.182 [2024-07-15 15:58:05.989995] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:37.182 [2024-07-15 15:58:05.998633] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:37.182 [2024-07-15 15:58:05.998650] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:37.182 [2024-07-15 15:58:06.007969] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:37.182 [2024-07-15 15:58:06.007987] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:37.182 [2024-07-15 15:58:06.016675] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:37.182 [2024-07-15 15:58:06.016693] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:37.182 [2024-07-15 15:58:06.025890] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:37.182 [2024-07-15 15:58:06.025907] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:37.182 [2024-07-15 15:58:06.035097] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:37.182 [2024-07-15 15:58:06.035114] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:37.182 [2024-07-15 15:58:06.044163] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:37.182 [2024-07-15 15:58:06.044182] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:37.182 [2024-07-15 15:58:06.053195] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:37.182 [2024-07-15 15:58:06.053214] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:37.182 [2024-07-15 15:58:06.062566] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:37.182 [2024-07-15 15:58:06.062583] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:37.182 [2024-07-15 15:58:06.071832] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:37.182 [2024-07-15 15:58:06.071849] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:37.182 [2024-07-15 15:58:06.081260] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:37.182 [2024-07-15 15:58:06.081277] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:37.182 [2024-07-15 15:58:06.090537] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:37.182 [2024-07-15 15:58:06.090554] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:37.182 [2024-07-15 15:58:06.100357] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:37.182 [2024-07-15 15:58:06.100375] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:37.182 [2024-07-15 15:58:06.109349] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:37.182 [2024-07-15 15:58:06.109367] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:37.442 [2024-07-15 15:58:06.117990] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:37.442 [2024-07-15 15:58:06.118007] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:37.442 [2024-07-15 15:58:06.127335] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:37.442 [2024-07-15 15:58:06.127353] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:37.442 [2024-07-15 15:58:06.136708] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:37.442 [2024-07-15 15:58:06.136725] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:37.442 [2024-07-15 15:58:06.145363] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:37.442 [2024-07-15 15:58:06.145380] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:37.442 [2024-07-15 15:58:06.153956] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:37.442 [2024-07-15 15:58:06.153974] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:37.442 [2024-07-15 15:58:06.162595] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:37.442 [2024-07-15 15:58:06.162618] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:37.442 [2024-07-15 15:58:06.171402] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:37.442 [2024-07-15 15:58:06.171419] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:37.442 [2024-07-15 15:58:06.180102] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:37.442 [2024-07-15 15:58:06.180120] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:37.442 [2024-07-15 15:58:06.189392] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:37.442 [2024-07-15 15:58:06.189410] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:37.442 [2024-07-15 15:58:06.198160] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:37.442 [2024-07-15 15:58:06.198178] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:37.442 [2024-07-15 15:58:06.207709] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:37.442 [2024-07-15 15:58:06.207729] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:37.442 [2024-07-15 15:58:06.217014] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:37.442 [2024-07-15 15:58:06.217032] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:37.442 [2024-07-15 15:58:06.225740] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:37.442 [2024-07-15 15:58:06.225758] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:37.442 [2024-07-15 15:58:06.235138] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:37.442 [2024-07-15 15:58:06.235156] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:37.442 [2024-07-15 15:58:06.243793] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:37.442 [2024-07-15 15:58:06.243811] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:37.442 [2024-07-15 15:58:06.253024] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:37.442 [2024-07-15 15:58:06.253042] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:37.442 [2024-07-15 15:58:06.262322] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:37.442 [2024-07-15 15:58:06.262339] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:37.442 [2024-07-15 15:58:06.271319] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:37.442 [2024-07-15 15:58:06.271338] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:37.442 [2024-07-15 15:58:06.280661] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:37.442 [2024-07-15 15:58:06.280679] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:37.442 [2024-07-15 15:58:06.290069] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:37.442 [2024-07-15 15:58:06.290088] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:37.442 [2024-07-15 15:58:06.299138] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:37.442 [2024-07-15 15:58:06.299156] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:37.442 [2024-07-15 15:58:06.307744] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:37.442 [2024-07-15 15:58:06.307763] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:37.442 [2024-07-15 15:58:06.316426] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:37.442 [2024-07-15 15:58:06.316444] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:37.442 [2024-07-15 15:58:06.325023] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:37.442 [2024-07-15 15:58:06.325041] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:37.442 [2024-07-15 15:58:06.333582] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:37.442 [2024-07-15 15:58:06.333603] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:37.442 [2024-07-15 15:58:06.342871] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:37.442 [2024-07-15 15:58:06.342889] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:37.442 [2024-07-15 15:58:06.351952] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:37.442 [2024-07-15 15:58:06.351970] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:37.442 [2024-07-15 15:58:06.361205] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:37.442 [2024-07-15 15:58:06.361223] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:37.442 [2024-07-15 15:58:06.370507] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:37.442 [2024-07-15 15:58:06.370525] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:37.702 [2024-07-15 15:58:06.379279] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:37.702 [2024-07-15 15:58:06.379297] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:37.702 [2024-07-15 15:58:06.388663] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:37.702 [2024-07-15 15:58:06.388682] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:37.702 [2024-07-15 15:58:06.397744] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:37.702 [2024-07-15 15:58:06.397762] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:37.702 [2024-07-15 15:58:06.406374] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:37.702 [2024-07-15 15:58:06.406391] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:37.702 [2024-07-15 15:58:06.415026] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:37.702 [2024-07-15 15:58:06.415044] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:37.702 [2024-07-15 15:58:06.424320] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:37.702 [2024-07-15 15:58:06.424338] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:37.702 [2024-07-15 15:58:06.432525] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:37.702 [2024-07-15 15:58:06.432542] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:37.702 [2024-07-15 15:58:06.441657] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:37.702 [2024-07-15 15:58:06.441675] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:37.702 [2024-07-15 15:58:06.450971] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:37.702 [2024-07-15 15:58:06.450988] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:37.702 [2024-07-15 15:58:06.460153] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:37.702 [2024-07-15 15:58:06.460171] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:37.702 [2024-07-15 15:58:06.469053] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:37.702 [2024-07-15 15:58:06.469071] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:37.702 [2024-07-15 15:58:06.478216] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:37.702 [2024-07-15 15:58:06.478240] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:37.702 [2024-07-15 15:58:06.487064] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:37.702 [2024-07-15 15:58:06.487082] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:37.702 [2024-07-15 15:58:06.493930] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:37.702 [2024-07-15 15:58:06.493947] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:37.702 [2024-07-15 15:58:06.504566] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:37.702 [2024-07-15 15:58:06.504583] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:37.702 [2024-07-15 15:58:06.513852] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:37.702 [2024-07-15 15:58:06.513869] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:37.702 [2024-07-15 15:58:06.523071] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:37.702 [2024-07-15 15:58:06.523089] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:37.702 [2024-07-15 15:58:06.532373] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:37.702 [2024-07-15 15:58:06.532391] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:37.702 [2024-07-15 15:58:06.541354] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:37.702 [2024-07-15 15:58:06.541372] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:37.702 [2024-07-15 15:58:06.550044] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:37.702 [2024-07-15 15:58:06.550061] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:37.702 [2024-07-15 15:58:06.559398] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:37.702 [2024-07-15 15:58:06.559416] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:37.702 [2024-07-15 15:58:06.568639] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:37.702 [2024-07-15 15:58:06.568657] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:37.702 [2024-07-15 15:58:06.577250] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:37.702 [2024-07-15 15:58:06.577267] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:37.702 [2024-07-15 15:58:06.587172] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:37.702 [2024-07-15 15:58:06.587190] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... the same two-line pair -- "Requested NSID 1 already in use" from subsystem.c:2058 followed by "Unable to add namespace" from nvmf_rpc.c:1553 -- repeats on every retry at roughly 8-9 ms intervals through 15:58:08.840; some 250 intermediate repetitions omitted ...]
00:16:40.041 [2024-07-15 15:58:08.840082] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:40.041 [2024-07-15 15:58:08.840099] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:40.041
00:16:40.041 Latency(us)
00:16:40.041 Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:16:40.041 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:16:40.041          Nvme1n1            :       5.01   16617.90     129.83       0.00     0.00    7694.74    3348.03   19717.79
00:16:40.041 ===================================================================================================================
00:16:40.041 Total                       :              16617.90     129.83       0.00     0.00    7694.74    3348.03   19717.79
00:16:40.041 [2024-07-15 15:58:08.846374] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:40.041 [2024-07-15 15:58:08.846390] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... after the latency summary the same error pair continues at roughly 8 ms intervals through 15:58:09.022; about 20 further repetitions omitted ...]
00:16:40.302 [2024-07-15 15:58:09.022844] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:40.302 [2024-07-15 15:58:09.022855] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
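The flood above is the namespace-management half of the test hammering an NSID that is already attached: a second nvmf_subsystem_add_ns with the same -n value makes spdk_nvmf_subsystem_add_ns_ext fail with exactly this pair of messages. A minimal sketch of provoking it by hand with SPDK's rpc.py -- assuming a running target on the default RPC socket; the bdev name and size here are illustrative, not taken from this run's setup:

  sudo scripts/rpc.py bdev_malloc_create -b malloc0 32 512                            # backing bdev: 32 MB, 512 B blocks (illustrative sizes)
  sudo scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1   # first add of NSID 1 succeeds
  sudo scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1   # repeat: "Requested NSID 1 already in use"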
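Two quick consistency checks on the latency summary above: 16617.90 IOPS at the 8192-byte I/O size works out to 16617.90 x 8192 / 1048576 ~ 129.83 MiB/s, matching the MiB/s column; and by Little's law a queue depth of 128 at 16617.90 IOPS implies 128 / 16617.90 ~ 7.70 ms per I/O, in line with the reported 7694.74 us average. The zero Fail/s and TO/s columns also suggest the duplicate-NSID noise stayed at the RPC level and never touched the data path.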
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (3740437) - No such process 00:16:40.302 15:58:09 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 3740437 00:16:40.302 15:58:09 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:40.302 15:58:09 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:40.302 15:58:09 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:16:40.302 15:58:09 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:40.302 15:58:09 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:16:40.302 15:58:09 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:40.302 15:58:09 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:16:40.302 delay0 00:16:40.302 15:58:09 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:40.302 15:58:09 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:16:40.302 15:58:09 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:40.302 15:58:09 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:16:40.302 15:58:09 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:40.302 15:58:09 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:16:40.302 EAL: No free 2048 kB hugepages reported on node 1 00:16:40.302 [2024-07-15 15:58:09.144626] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:16:48.419 Initializing NVMe Controllers 00:16:48.419 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:16:48.419 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:16:48.419 Initialization complete. Launching workers. 
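The long run of "Requested NSID 1 already in use" / "Unable to add namespace" pairs above is the expected output of target/zcopy.sh: the test re-issues nvmf_subsystem_add_ns for an NSID that already exists while I/O is queued, and every attempt is rejected. The abort run launched just above then drives I/O through a deliberately slow delay bdev so that the abort example has in-flight commands to cancel. A condensed sketch of those steps, using the rpc_cmd invocations visible in the trace (the rpc.py path and all arguments are taken from this log; running this standalone assumes a target that already exposes malloc0 and the 10.0.0.2:4420 listener):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    # wrap malloc0 in a delay bdev; the four values are latencies in microseconds,
    # i.e. roughly 1 s of added delay per read and write
    $rpc bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
    # expose the slow bdev as NSID 1 of cnode1
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
    # abort example: 5 s of randrw at queue depth 64 against the TCP listener,
    # submitting aborts for outstanding I/O as it runs
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 \
        -w randrw -M 50 -l warning \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'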
00:16:48.419 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 5583 00:16:48.419 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 5853, failed to submit 50 00:16:48.419 success 5683, unsuccess 170, failed 0 00:16:48.419 15:58:15 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:16:48.419 15:58:15 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:16:48.419 15:58:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:48.419 15:58:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@117 -- # sync 00:16:48.419 15:58:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:48.419 15:58:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@120 -- # set +e 00:16:48.419 15:58:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:48.419 15:58:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:48.419 rmmod nvme_tcp 00:16:48.419 rmmod nvme_fabrics 00:16:48.419 rmmod nvme_keyring 00:16:48.419 15:58:16 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:48.419 15:58:16 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@124 -- # set -e 00:16:48.419 15:58:16 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@125 -- # return 0 00:16:48.419 15:58:16 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@489 -- # '[' -n 3738569 ']' 00:16:48.419 15:58:16 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@490 -- # killprocess 3738569 00:16:48.419 15:58:16 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@948 -- # '[' -z 3738569 ']' 00:16:48.419 15:58:16 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@952 -- # kill -0 3738569 00:16:48.419 15:58:16 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@953 -- # uname 00:16:48.419 15:58:16 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:48.419 15:58:16 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3738569 00:16:48.419 15:58:16 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:16:48.419 15:58:16 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:16:48.419 15:58:16 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3738569' 00:16:48.419 killing process with pid 3738569 00:16:48.419 15:58:16 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@967 -- # kill 3738569 00:16:48.419 15:58:16 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@972 -- # wait 3738569 00:16:48.419 15:58:16 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:48.419 15:58:16 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:48.419 15:58:16 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:48.419 15:58:16 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:48.419 15:58:16 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:48.419 15:58:16 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:48.419 15:58:16 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:48.419 15:58:16 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:49.798 15:58:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:16:49.798 00:16:49.798 real 0m32.245s 00:16:49.798 user 0m43.326s 00:16:49.798 sys 0m10.976s 00:16:49.798 15:58:18 nvmf_tcp.nvmf_zcopy -- 
common/autotest_common.sh@1124 -- # xtrace_disable 00:16:49.798 15:58:18 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:16:49.798 ************************************ 00:16:49.798 END TEST nvmf_zcopy 00:16:49.798 ************************************ 00:16:49.798 15:58:18 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:16:49.798 15:58:18 nvmf_tcp -- nvmf/nvmf.sh@54 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:16:49.798 15:58:18 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:16:49.798 15:58:18 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:49.798 15:58:18 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:49.798 ************************************ 00:16:49.798 START TEST nvmf_nmic 00:16:49.798 ************************************ 00:16:49.798 15:58:18 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:16:49.798 * Looking for test storage... 00:16:49.798 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:49.798 15:58:18 nvmf_tcp.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:49.798 15:58:18 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:16:49.798 15:58:18 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:49.798 15:58:18 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:49.798 15:58:18 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:49.798 15:58:18 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:49.798 15:58:18 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:49.798 15:58:18 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:49.798 15:58:18 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:49.798 15:58:18 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:49.798 15:58:18 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:49.798 15:58:18 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:49.798 15:58:18 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:49.798 15:58:18 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:16:49.798 15:58:18 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:49.798 15:58:18 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:49.798 15:58:18 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:49.798 15:58:18 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:49.798 15:58:18 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:49.798 15:58:18 nvmf_tcp.nvmf_nmic -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:49.798 15:58:18 nvmf_tcp.nvmf_nmic -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:49.798 15:58:18 nvmf_tcp.nvmf_nmic -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:49.798 15:58:18 nvmf_tcp.nvmf_nmic -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:49.798 15:58:18 nvmf_tcp.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:49.798 15:58:18 nvmf_tcp.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:49.798 15:58:18 nvmf_tcp.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:16:49.798 15:58:18 nvmf_tcp.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:49.798 15:58:18 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@47 -- # : 0 00:16:49.798 15:58:18 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:49.798 15:58:18 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:49.798 15:58:18 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:49.798 15:58:18 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:49.798 15:58:18 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:49.798 15:58:18 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:49.798 15:58:18 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:49.798 15:58:18 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:49.798 15:58:18 nvmf_tcp.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:49.798 15:58:18 nvmf_tcp.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:49.798 15:58:18 nvmf_tcp.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:16:49.798 15:58:18 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:49.798 15:58:18 nvmf_tcp.nvmf_nmic -- 
nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:49.798 15:58:18 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:49.798 15:58:18 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:49.798 15:58:18 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:49.798 15:58:18 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:49.798 15:58:18 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:49.798 15:58:18 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:49.798 15:58:18 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:16:49.798 15:58:18 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:16:49.798 15:58:18 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@285 -- # xtrace_disable 00:16:49.798 15:58:18 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:16:55.067 15:58:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:55.067 15:58:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@291 -- # pci_devs=() 00:16:55.067 15:58:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:55.067 15:58:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:55.067 15:58:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:55.067 15:58:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:55.067 15:58:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:55.067 15:58:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@295 -- # net_devs=() 00:16:55.067 15:58:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:55.067 15:58:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@296 -- # e810=() 00:16:55.067 15:58:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@296 -- # local -ga e810 00:16:55.067 15:58:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@297 -- # x722=() 00:16:55.067 15:58:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@297 -- # local -ga x722 00:16:55.067 15:58:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@298 -- # mlx=() 00:16:55.067 15:58:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@298 -- # local -ga mlx 00:16:55.067 15:58:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:55.067 15:58:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:55.067 15:58:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:55.067 15:58:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:55.067 15:58:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:55.067 15:58:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:55.067 15:58:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:55.067 15:58:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:55.067 15:58:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:55.067 15:58:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:55.067 15:58:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:55.067 15:58:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@320 -- # 
pci_devs+=("${e810[@]}") 00:16:55.067 15:58:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:55.067 15:58:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:55.067 15:58:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:55.067 15:58:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:55.067 15:58:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:55.067 15:58:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:55.067 15:58:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:16:55.067 Found 0000:86:00.0 (0x8086 - 0x159b) 00:16:55.067 15:58:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:55.067 15:58:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:55.067 15:58:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:55.067 15:58:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:55.067 15:58:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:55.067 15:58:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:55.067 15:58:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:16:55.067 Found 0000:86:00.1 (0x8086 - 0x159b) 00:16:55.067 15:58:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:55.067 15:58:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:55.067 15:58:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:55.067 15:58:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:55.067 15:58:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:55.067 15:58:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:55.067 15:58:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:55.067 15:58:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:16:55.067 15:58:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:55.067 15:58:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:55.067 15:58:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:55.067 15:58:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:55.067 15:58:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:55.067 15:58:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:55.067 15:58:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:55.067 15:58:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:16:55.067 Found net devices under 0000:86:00.0: cvl_0_0 00:16:55.067 15:58:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:55.067 15:58:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:55.067 15:58:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:55.067 15:58:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:55.067 15:58:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:55.067 15:58:23 nvmf_tcp.nvmf_nmic -- 
nvmf/common.sh@390 -- # [[ up == up ]] 00:16:55.067 15:58:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:55.067 15:58:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:55.067 15:58:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:16:55.067 Found net devices under 0000:86:00.1: cvl_0_1 00:16:55.067 15:58:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:55.067 15:58:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:16:55.067 15:58:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # is_hw=yes 00:16:55.067 15:58:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:16:55.067 15:58:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:16:55.067 15:58:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:16:55.067 15:58:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:55.067 15:58:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:55.067 15:58:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:55.067 15:58:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:55.067 15:58:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:55.067 15:58:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:55.067 15:58:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:55.067 15:58:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:55.067 15:58:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:55.067 15:58:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:55.067 15:58:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:55.067 15:58:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:55.067 15:58:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:55.067 15:58:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:55.067 15:58:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:55.067 15:58:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:55.067 15:58:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:55.067 15:58:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:55.067 15:58:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:55.067 15:58:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:55.067 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:55.067 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.207 ms 00:16:55.067 00:16:55.067 --- 10.0.0.2 ping statistics --- 00:16:55.067 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:55.067 rtt min/avg/max/mdev = 0.207/0.207/0.207/0.000 ms 00:16:55.067 15:58:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:55.067 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:55.067 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.225 ms 00:16:55.067 00:16:55.067 --- 10.0.0.1 ping statistics --- 00:16:55.067 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:55.067 rtt min/avg/max/mdev = 0.225/0.225/0.225/0.000 ms 00:16:55.067 15:58:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:55.067 15:58:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@422 -- # return 0 00:16:55.067 15:58:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:55.067 15:58:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:55.067 15:58:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:55.067 15:58:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:55.067 15:58:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:55.067 15:58:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:55.067 15:58:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:55.067 15:58:23 nvmf_tcp.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:16:55.067 15:58:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:55.067 15:58:23 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:55.067 15:58:23 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:16:55.067 15:58:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@481 -- # nvmfpid=3745997 00:16:55.067 15:58:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@482 -- # waitforlisten 3745997 00:16:55.067 15:58:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:55.067 15:58:23 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@829 -- # '[' -z 3745997 ']' 00:16:55.067 15:58:23 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:55.067 15:58:23 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:55.067 15:58:23 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:55.067 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:55.067 15:58:23 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:55.067 15:58:23 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:16:55.067 [2024-07-15 15:58:23.600955] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 00:16:55.067 [2024-07-15 15:58:23.601000] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:55.067 EAL: No free 2048 kB hugepages reported on node 1 00:16:55.067 [2024-07-15 15:58:23.655217] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:55.067 [2024-07-15 15:58:23.737239] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:55.067 [2024-07-15 15:58:23.737272] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:16:55.067 [2024-07-15 15:58:23.737279] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:55.067 [2024-07-15 15:58:23.737285] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:55.067 [2024-07-15 15:58:23.737290] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:55.067 [2024-07-15 15:58:23.737328] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:55.067 [2024-07-15 15:58:23.737421] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:16:55.067 [2024-07-15 15:58:23.737514] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:16:55.067 [2024-07-15 15:58:23.737515] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:55.633 15:58:24 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:55.633 15:58:24 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@862 -- # return 0 00:16:55.633 15:58:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:55.633 15:58:24 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:55.633 15:58:24 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:16:55.633 15:58:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:55.633 15:58:24 nvmf_tcp.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:55.633 15:58:24 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:55.633 15:58:24 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:16:55.633 [2024-07-15 15:58:24.472245] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:55.633 15:58:24 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:55.633 15:58:24 nvmf_tcp.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:16:55.633 15:58:24 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:55.633 15:58:24 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:16:55.633 Malloc0 00:16:55.633 15:58:24 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:55.633 15:58:24 nvmf_tcp.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:16:55.633 15:58:24 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:55.633 15:58:24 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:16:55.633 15:58:24 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:55.633 15:58:24 nvmf_tcp.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:55.633 15:58:24 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:55.633 15:58:24 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:16:55.633 15:58:24 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:55.633 15:58:24 nvmf_tcp.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:55.633 15:58:24 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:55.633 15:58:24 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:16:55.633 [2024-07-15 15:58:24.523948] tcp.c: 981:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:55.633 15:58:24 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:55.633 15:58:24 nvmf_tcp.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:16:55.633 test case1: single bdev can't be used in multiple subsystems 00:16:55.633 15:58:24 nvmf_tcp.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:16:55.633 15:58:24 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:55.633 15:58:24 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:16:55.633 15:58:24 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:55.633 15:58:24 nvmf_tcp.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:16:55.633 15:58:24 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:55.633 15:58:24 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:16:55.633 15:58:24 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:55.633 15:58:24 nvmf_tcp.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:16:55.633 15:58:24 nvmf_tcp.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:16:55.633 15:58:24 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:55.633 15:58:24 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:16:55.633 [2024-07-15 15:58:24.547873] bdev.c:8078:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:16:55.633 [2024-07-15 15:58:24.547892] subsystem.c:2087:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:16:55.633 [2024-07-15 15:58:24.547899] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:55.633 request: 00:16:55.633 { 00:16:55.633 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:16:55.633 "namespace": { 00:16:55.633 "bdev_name": "Malloc0", 00:16:55.633 "no_auto_visible": false 00:16:55.633 }, 00:16:55.633 "method": "nvmf_subsystem_add_ns", 00:16:55.633 "req_id": 1 00:16:55.633 } 00:16:55.633 Got JSON-RPC error response 00:16:55.633 response: 00:16:55.633 { 00:16:55.633 "code": -32602, 00:16:55.633 "message": "Invalid parameters" 00:16:55.633 } 00:16:55.633 15:58:24 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:16:55.633 15:58:24 nvmf_tcp.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:16:55.633 15:58:24 nvmf_tcp.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:16:55.633 15:58:24 nvmf_tcp.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:16:55.633 Adding namespace failed - expected result. 
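Test case1 above demonstrates that a single bdev cannot be added to multiple subsystems: Malloc0 is already claimed (type exclusive_write) by cnode1, so the nvmf_subsystem_add_ns call against cnode2 returns the JSON-RPC "Invalid parameters" error shown, and the test treats that failure as the expected result. A sketch of reproducing the same check by hand (the commands mirror the rpc_cmd calls in the trace; it assumes a running target where Malloc0 is already a namespace of cnode1):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    # second subsystem is created fine
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2
    # expected to fail: Malloc0 is already claimed exclusive_write by cnode1,
    # so the bdev cannot be opened for cnode2 (error=-1, "Invalid parameters")
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0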
00:16:55.633 15:58:24 nvmf_tcp.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:16:55.633 test case2: host connect to nvmf target in multiple paths 00:16:55.633 15:58:24 nvmf_tcp.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:16:55.633 15:58:24 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:55.633 15:58:24 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:16:55.633 [2024-07-15 15:58:24.559988] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:16:55.633 15:58:24 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:55.633 15:58:24 nvmf_tcp.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:57.007 15:58:25 nvmf_tcp.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:16:57.994 15:58:26 nvmf_tcp.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:16:57.994 15:58:26 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1198 -- # local i=0 00:16:57.994 15:58:26 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:16:57.994 15:58:26 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:16:57.994 15:58:26 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1205 -- # sleep 2 00:16:59.898 15:58:28 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:16:59.898 15:58:28 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:16:59.898 15:58:28 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:16:59.898 15:58:28 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:16:59.898 15:58:28 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:16:59.898 15:58:28 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1208 -- # return 0 00:16:59.898 15:58:28 nvmf_tcp.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:16:59.898 [global] 00:16:59.898 thread=1 00:16:59.898 invalidate=1 00:16:59.898 rw=write 00:16:59.898 time_based=1 00:16:59.898 runtime=1 00:16:59.898 ioengine=libaio 00:16:59.898 direct=1 00:16:59.898 bs=4096 00:16:59.898 iodepth=1 00:16:59.898 norandommap=0 00:16:59.898 numjobs=1 00:16:59.898 00:16:59.898 verify_dump=1 00:16:59.898 verify_backlog=512 00:16:59.898 verify_state_save=0 00:16:59.899 do_verify=1 00:16:59.899 verify=crc32c-intel 00:16:59.899 [job0] 00:16:59.899 filename=/dev/nvme0n1 00:16:59.899 Could not set queue depth (nvme0n1) 00:17:00.157 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:00.157 fio-3.35 00:17:00.157 Starting 1 thread 00:17:01.535 00:17:01.535 job0: (groupid=0, jobs=1): err= 0: pid=3747077: Mon Jul 15 15:58:30 2024 00:17:01.535 read: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec) 00:17:01.535 slat (nsec): min=3243, max=75006, avg=7182.45, stdev=2192.14 
00:17:01.535 clat (usec): min=226, max=2177, avg=255.03, stdev=52.48 00:17:01.536 lat (usec): min=234, max=2184, avg=262.21, stdev=52.76 00:17:01.536 clat percentiles (usec): 00:17:01.536 | 1.00th=[ 235], 5.00th=[ 239], 10.00th=[ 241], 20.00th=[ 243], 00:17:01.536 | 30.00th=[ 243], 40.00th=[ 245], 50.00th=[ 247], 60.00th=[ 247], 00:17:01.536 | 70.00th=[ 251], 80.00th=[ 255], 90.00th=[ 289], 95.00th=[ 297], 00:17:01.536 | 99.00th=[ 343], 99.50th=[ 359], 99.90th=[ 635], 99.95th=[ 1205], 00:17:01.536 | 99.99th=[ 2180] 00:17:01.536 write: IOPS=2264, BW=9059KiB/s (9276kB/s)(9068KiB/1001msec); 0 zone resets 00:17:01.536 slat (usec): min=6, max=23617, avg=20.71, stdev=495.82 00:17:01.536 clat (usec): min=144, max=392, avg=179.39, stdev=33.33 00:17:01.536 lat (usec): min=154, max=24010, avg=200.10, stdev=501.38 00:17:01.536 clat percentiles (usec): 00:17:01.536 | 1.00th=[ 151], 5.00th=[ 153], 10.00th=[ 155], 20.00th=[ 157], 00:17:01.536 | 30.00th=[ 159], 40.00th=[ 161], 50.00th=[ 163], 60.00th=[ 172], 00:17:01.536 | 70.00th=[ 194], 80.00th=[ 200], 90.00th=[ 208], 95.00th=[ 269], 00:17:01.536 | 99.00th=[ 293], 99.50th=[ 302], 99.90th=[ 351], 99.95th=[ 379], 00:17:01.536 | 99.99th=[ 392] 00:17:01.536 bw ( KiB/s): min= 9336, max= 9336, per=100.00%, avg=9336.00, stdev= 0.00, samples=1 00:17:01.536 iops : min= 2334, max= 2334, avg=2334.00, stdev= 0.00, samples=1 00:17:01.536 lat (usec) : 250=83.01%, 500=16.92%, 750=0.02% 00:17:01.536 lat (msec) : 2=0.02%, 4=0.02% 00:17:01.536 cpu : usr=2.40%, sys=3.60%, ctx=4319, majf=0, minf=2 00:17:01.536 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:01.536 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:01.536 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:01.536 issued rwts: total=2048,2267,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:01.536 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:01.536 00:17:01.536 Run status group 0 (all jobs): 00:17:01.536 READ: bw=8184KiB/s (8380kB/s), 8184KiB/s-8184KiB/s (8380kB/s-8380kB/s), io=8192KiB (8389kB), run=1001-1001msec 00:17:01.536 WRITE: bw=9059KiB/s (9276kB/s), 9059KiB/s-9059KiB/s (9276kB/s-9276kB/s), io=9068KiB (9286kB), run=1001-1001msec 00:17:01.536 00:17:01.536 Disk stats (read/write): 00:17:01.536 nvme0n1: ios=1917/2048, merge=0/0, ticks=1448/343, in_queue=1791, util=98.50% 00:17:01.536 15:58:30 nvmf_tcp.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:01.536 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:17:01.536 15:58:30 nvmf_tcp.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:17:01.536 15:58:30 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1219 -- # local i=0 00:17:01.536 15:58:30 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:17:01.536 15:58:30 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:01.536 15:58:30 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:17:01.536 15:58:30 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:01.536 15:58:30 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1231 -- # return 0 00:17:01.536 15:58:30 nvmf_tcp.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:17:01.536 15:58:30 nvmf_tcp.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:17:01.536 15:58:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@488 -- # 
nvmfcleanup 00:17:01.536 15:58:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@117 -- # sync 00:17:01.536 15:58:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:01.536 15:58:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@120 -- # set +e 00:17:01.536 15:58:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:01.536 15:58:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:01.536 rmmod nvme_tcp 00:17:01.536 rmmod nvme_fabrics 00:17:01.536 rmmod nvme_keyring 00:17:01.536 15:58:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:01.536 15:58:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@124 -- # set -e 00:17:01.536 15:58:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@125 -- # return 0 00:17:01.536 15:58:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@489 -- # '[' -n 3745997 ']' 00:17:01.536 15:58:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@490 -- # killprocess 3745997 00:17:01.536 15:58:30 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@948 -- # '[' -z 3745997 ']' 00:17:01.536 15:58:30 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@952 -- # kill -0 3745997 00:17:01.536 15:58:30 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@953 -- # uname 00:17:01.796 15:58:30 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:01.796 15:58:30 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3745997 00:17:01.796 15:58:30 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:17:01.796 15:58:30 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:17:01.796 15:58:30 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3745997' 00:17:01.796 killing process with pid 3745997 00:17:01.796 15:58:30 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@967 -- # kill 3745997 00:17:01.796 15:58:30 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@972 -- # wait 3745997 00:17:01.796 15:58:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:01.796 15:58:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:01.796 15:58:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:01.796 15:58:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:01.796 15:58:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:01.796 15:58:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:01.796 15:58:30 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:01.796 15:58:30 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:04.332 15:58:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:04.332 00:17:04.332 real 0m14.399s 00:17:04.332 user 0m34.593s 00:17:04.332 sys 0m4.675s 00:17:04.332 15:58:32 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:04.332 15:58:32 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:17:04.332 ************************************ 00:17:04.332 END TEST nvmf_nmic 00:17:04.332 ************************************ 00:17:04.332 15:58:32 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:17:04.332 15:58:32 nvmf_tcp -- nvmf/nvmf.sh@55 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:17:04.332 15:58:32 nvmf_tcp -- 
common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:17:04.332 15:58:32 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:04.332 15:58:32 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:04.332 ************************************ 00:17:04.332 START TEST nvmf_fio_target 00:17:04.332 ************************************ 00:17:04.332 15:58:32 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:17:04.332 * Looking for test storage... 00:17:04.332 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:04.332 15:58:32 nvmf_tcp.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:04.332 15:58:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:17:04.332 15:58:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:04.332 15:58:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:04.332 15:58:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:04.332 15:58:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:04.332 15:58:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:04.332 15:58:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:04.332 15:58:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:04.332 15:58:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:04.332 15:58:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:04.332 15:58:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:04.332 15:58:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:04.332 15:58:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:17:04.332 15:58:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:04.332 15:58:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:04.332 15:58:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:04.332 15:58:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:04.332 15:58:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:04.332 15:58:32 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:04.332 15:58:32 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:04.332 15:58:32 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:04.332 15:58:32 nvmf_tcp.nvmf_fio_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:04.332 15:58:32 nvmf_tcp.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:04.332 15:58:32 nvmf_tcp.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:04.332 15:58:32 nvmf_tcp.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:17:04.332 15:58:32 nvmf_tcp.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:04.332 15:58:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@47 -- # : 0 00:17:04.332 15:58:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:04.332 15:58:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:04.332 15:58:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:04.332 15:58:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:04.332 15:58:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:04.332 15:58:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:04.332 15:58:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:04.332 15:58:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:04.332 15:58:32 nvmf_tcp.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:04.332 15:58:32 nvmf_tcp.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:04.332 15:58:32 nvmf_tcp.nvmf_fio_target -- target/fio.sh@14 -- # 
rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:17:04.332 15:58:32 nvmf_tcp.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit
00:17:04.332 15:58:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@441 -- # '[' -z tcp ']'
00:17:04.332 15:58:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:17:04.332 15:58:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@448 -- # prepare_net_devs
00:17:04.332 15:58:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@410 -- # local -g is_hw=no
00:17:04.332 15:58:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@412 -- # remove_spdk_ns
00:17:04.332 15:58:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:17:04.332 15:58:32 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:17:04.332 15:58:32 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:17:04.332 15:58:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ phy != virt ]]
00:17:04.332 15:58:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs
00:17:04.332 15:58:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@285 -- # xtrace_disable
00:17:04.332 15:58:32 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x
00:17:09.608 15:58:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:17:09.608 15:58:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@291 -- # pci_devs=()
00:17:09.608 15:58:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@291 -- # local -a pci_devs
00:17:09.608 15:58:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@292 -- # pci_net_devs=()
00:17:09.608 15:58:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@292 -- # local -a pci_net_devs
00:17:09.608 15:58:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@293 -- # pci_drivers=()
00:17:09.608 15:58:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@293 -- # local -A pci_drivers
00:17:09.608 15:58:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@295 -- # net_devs=()
00:17:09.608 15:58:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@295 -- # local -ga net_devs
00:17:09.608 15:58:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@296 -- # e810=()
00:17:09.608 15:58:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@296 -- # local -ga e810
00:17:09.608 15:58:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@297 -- # x722=()
00:17:09.608 15:58:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@297 -- # local -ga x722
00:17:09.608 15:58:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@298 -- # mlx=()
00:17:09.608 15:58:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@298 -- # local -ga mlx
00:17:09.608 15:58:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:17:09.608 15:58:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:17:09.608 15:58:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:17:09.608 15:58:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:17:09.608 15:58:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:17:09.608 15:58:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:17:09.608 15:58:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:17:09.608 15:58:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:17:09.608 15:58:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:17:09.608 15:58:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:17:09.608 15:58:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:17:09.608 15:58:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}")
00:17:09.608 15:58:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]]
00:17:09.608 15:58:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]]
00:17:09.608 15:58:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]]
00:17:09.608 15:58:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}")
00:17:09.608 15:58:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@335 -- # (( 2 == 0 ))
00:17:09.608 15:58:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}"
00:17:09.608 15:58:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)'
00:17:09.608 Found 0000:86:00.0 (0x8086 - 0x159b)
00:17:09.608 15:58:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]]
00:17:09.608 15:58:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]]
00:17:09.608 15:58:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:17:09.608 15:58:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:17:09.608 15:58:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]]
00:17:09.608 15:58:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}"
00:17:09.608 15:58:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)'
00:17:09.608 Found 0000:86:00.1 (0x8086 - 0x159b)
00:17:09.608 15:58:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]]
00:17:09.608 15:58:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]]
00:17:09.608 15:58:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:17:09.608 15:58:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:17:09.608 15:58:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]]
00:17:09.608 15:58:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@366 -- # (( 0 > 0 ))
00:17:09.608 15:58:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]]
00:17:09.608 15:58:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]]
00:17:09.608 15:58:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}"
00:17:09.608 15:58:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:17:09.608 15:58:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]]
00:17:09.608 15:58:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}"
00:17:09.608 15:58:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@390 -- # [[ up == up ]]
00:17:09.608 15:58:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@394 -- # (( 1 == 0 ))
00:17:09.608 15:58:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:17:09.608 15:58:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0'
00:17:09.608 Found net devices under 0000:86:00.0: cvl_0_0
00:17:09.608 15:58:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}")
00:17:09.608 15:58:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}"
00:17:09.608 15:58:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:17:09.608 15:58:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]]
00:17:09.608 15:58:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}"
00:17:09.608 15:58:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@390 -- # [[ up == up ]]
00:17:09.608 15:58:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@394 -- # (( 1 == 0 ))
00:17:09.608 15:58:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:17:09.608 15:58:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1'
00:17:09.608 Found net devices under 0000:86:00.1: cvl_0_1
00:17:09.608 15:58:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}")
00:17:09.608 15:58:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@404 -- # (( 2 == 0 ))
00:17:09.608 15:58:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # is_hw=yes
00:17:09.608 15:58:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ yes == yes ]]
00:17:09.608 15:58:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]]
00:17:09.608 15:58:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@418 -- # nvmf_tcp_init
00:17:09.609 15:58:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1
00:17:09.609 15:58:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:17:09.609 15:58:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:17:09.609 15:58:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@234 -- # (( 2 > 1 ))
00:17:09.609 15:58:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:17:09.609 15:58:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:17:09.609 15:58:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP=
00:17:09.609 15:58:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:17:09.609 15:58:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:17:09.609 15:58:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0
00:17:09.609 15:58:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1
00:17:09.609 15:58:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk
00:17:09.609 15:58:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:17:09.609 15:58:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:17:09.609 15:58:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:17:09.609 15:58:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up
00:17:09.609 15:58:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:17:09.609 15:58:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:17:09.609 15:58:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:17:09.890 15:58:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:17:09.890 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:17:09.890 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.248 ms
00:17:09.890
00:17:09.890 --- 10.0.0.2 ping statistics ---
00:17:09.890 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:17:09.890 rtt min/avg/max/mdev = 0.248/0.248/0.248/0.000 ms
00:17:09.890 15:58:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:17:09.890 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:17:09.890 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.091 ms
00:17:09.890
00:17:09.890 --- 10.0.0.1 ping statistics ---
00:17:09.890 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:17:09.890 rtt min/avg/max/mdev = 0.091/0.091/0.091/0.000 ms
00:17:09.890 15:58:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:17:09.890 15:58:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@422 -- # return 0
00:17:09.890 15:58:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:17:09.890 15:58:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:17:09.890 15:58:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:17:09.890 15:58:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:17:09.890 15:58:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:17:09.890 15:58:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:17:09.890 15:58:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:17:09.890 15:58:38 nvmf_tcp.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF
00:17:09.890 15:58:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:17:09.890 15:58:38 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@722 -- # xtrace_disable
00:17:09.890 15:58:38 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x
00:17:09.890 15:58:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@481 -- # nvmfpid=3750822
00:17:09.890 15:58:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
00:17:09.890 15:58:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@482 -- # waitforlisten 3750822
00:17:09.890 15:58:38 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@829 -- # '[' -z 3750822 ']'
00:17:09.890 15:58:38 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:17:09.890 15:58:38 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@834 -- # local max_retries=100
00:17:09.890 15:58:38 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:17:09.890 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
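The nvmf_tcp_init trace above reduces to a small, reproducible recipe: one port of the dual-port NIC (cvl_0_0) is moved into a private network namespace for the SPDK target, the peer port (cvl_0_1) stays in the root namespace for the kernel initiator, and reachability is proven with one ping in each direction. A minimal sketch, assuming this run's interface names and that the two ports are connected back-to-back (the real logic lives in nvmf/common.sh):

  # Target side lives in its own netns so initiator and target can share one host.
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator IP
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target IP
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  # Open the default NVMe/TCP port, then verify reachability both ways.
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

With the namespace in place, nvmf_tgt is launched under 'ip netns exec cvl_0_0_ns_spdk', which is why NVMF_APP gets prefixed with NVMF_TARGET_NS_CMD in the trace that follows.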
00:17:09.890 15:58:38 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@838 -- # xtrace_disable
00:17:09.890 15:58:38 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x
00:17:09.890 [2024-07-15 15:58:38.648453] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization...
00:17:09.890 [2024-07-15 15:58:38.648496] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:17:09.890 EAL: No free 2048 kB hugepages reported on node 1
00:17:09.890 [2024-07-15 15:58:38.705786] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4
00:17:09.890 [2024-07-15 15:58:38.778291] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:17:09.890 [2024-07-15 15:58:38.778331] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:17:09.890 [2024-07-15 15:58:38.778337] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:17:09.890 [2024-07-15 15:58:38.778343] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running.
00:17:09.890 [2024-07-15 15:58:38.778348] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:17:09.890 [2024-07-15 15:58:38.778452] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:17:09.890 [2024-07-15 15:58:38.778570] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2
00:17:09.890 [2024-07-15 15:58:38.778635] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3
00:17:09.890 [2024-07-15 15:58:38.778637] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:17:10.827 15:58:39 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:17:10.827 15:58:39 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@862 -- # return 0
00:17:10.827 15:58:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt
00:17:10.827 15:58:39 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@728 -- # xtrace_disable
00:17:10.827 15:58:39 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x
00:17:10.827 15:58:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:17:10.827 15:58:39 nvmf_tcp.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
00:17:10.827 [2024-07-15 15:58:39.647709] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:17:10.827 15:58:39 nvmf_tcp.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512
00:17:11.086 15:58:39 nvmf_tcp.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 '
00:17:11.086 15:58:39 nvmf_tcp.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512
00:17:11.345 15:58:40 nvmf_tcp.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1
00:17:11.345 15:58:40 nvmf_tcp.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512
00:17:11.603 15:58:40 nvmf_tcp.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 '
00:17:11.603 15:58:40 nvmf_tcp.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512
00:17:11.603 15:58:40 nvmf_tcp.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3
00:17:11.603 15:58:40 nvmf_tcp.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3'
00:17:11.862 15:58:40 nvmf_tcp.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512
00:17:12.120 15:58:40 nvmf_tcp.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 '
00:17:12.120 15:58:40 nvmf_tcp.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512
00:17:12.120 15:58:41 nvmf_tcp.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 '
00:17:12.379 15:58:41 nvmf_tcp.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512
00:17:12.379 15:58:41 nvmf_tcp.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6
00:17:12.637 15:58:41 nvmf_tcp.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6'
00:17:12.896 15:58:41 nvmf_tcp.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
00:17:12.896 15:58:41 nvmf_tcp.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs
00:17:12.896 15:58:41 nvmf_tcp.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:17:13.154 15:58:41 nvmf_tcp.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs
00:17:13.154 15:58:41 nvmf_tcp.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
00:17:13.154 15:58:41 nvmf_tcp.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:17:13.412 [2024-07-15 15:58:42.143025] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:17:13.412 15:58:42 nvmf_tcp.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0
00:17:13.669 15:58:42 nvmf_tcp.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0
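Stripped of the xtrace noise, the target-side configuration issued through rpc.py comes down to the sequence below (a condensed sketch: $rpc abbreviates the full scripts/rpc.py path shown above, and the script's interleaved create/assign ordering is compressed into loops):

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o -u 8192                      # TCP transport
  for i in 0 1 2 3 4 5 6; do $rpc bdev_malloc_create 64 512; done   # Malloc0..Malloc6, 64 MB each, 512 B blocks
  $rpc bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3'    # striped raid0
  $rpc bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6'
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  for b in Malloc0 Malloc1 raid0 concat0; do                        # namespaces 1-4
      $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 $b
  done
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

Those four namespaces are what surface as /dev/nvme0n1 through /dev/nvme0n4 after the initiator-side 'nvme connect' that follows.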
00:17:13.669 15:58:42 nvmf_tcp.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
00:17:15.041 15:58:43 nvmf_tcp.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4
00:17:15.041 15:58:43 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1198 -- # local i=0
00:17:15.041 15:58:43 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0
00:17:15.041 15:58:43 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1200 -- # [[ -n 4 ]]
00:17:15.041 15:58:43 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1201 -- # nvme_device_counter=4
00:17:15.041 15:58:43 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1205 -- # sleep 2
00:17:16.938 15:58:45 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1206 -- # (( i++ <= 15 ))
00:17:16.938 15:58:45 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL
00:17:16.938 15:58:45 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME
00:17:16.938 15:58:45 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1207 -- # nvme_devices=4
00:17:16.938 15:58:45 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter ))
00:17:16.938 15:58:45 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1208 -- # return 0
00:17:16.938 15:58:45 nvmf_tcp.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v
00:17:16.938 [global]
00:17:16.938 thread=1
00:17:16.938 invalidate=1
00:17:16.938 rw=write
00:17:16.938 time_based=1
00:17:16.938 runtime=1
00:17:16.938 ioengine=libaio
00:17:16.938 direct=1
00:17:16.938 bs=4096
00:17:16.938 iodepth=1
00:17:16.938 norandommap=0
00:17:16.938 numjobs=1
00:17:16.938
00:17:16.938 verify_dump=1
00:17:16.938 verify_backlog=512
00:17:16.938 verify_state_save=0
00:17:16.938 do_verify=1
00:17:16.938 verify=crc32c-intel
00:17:16.938 [job0]
00:17:16.938 filename=/dev/nvme0n1
00:17:16.938 [job1]
00:17:16.938 filename=/dev/nvme0n2
00:17:16.938 [job2]
00:17:16.938 filename=/dev/nvme0n3
00:17:16.938 [job3]
00:17:16.938 filename=/dev/nvme0n4
00:17:16.938 Could not set queue depth (nvme0n1)
00:17:16.938 Could not set queue depth (nvme0n2)
00:17:16.938 Could not set queue depth (nvme0n3)
00:17:16.938 Could not set queue depth (nvme0n4)
00:17:17.194 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:17:17.194 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:17:17.194 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:17:17.194 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:17:17.194 fio-3.35
00:17:17.194 Starting 4 threads
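The [global]/[job0..3] file above is generated by fio-wrapper from its -p/-i/-d/-t/-r flags; an approximately equivalent stand-alone invocation for a single device (a sketch, verify options abridged) would be:

  fio --name=job0 --filename=/dev/nvme0n1 --ioengine=libaio --direct=1 \
      --rw=write --bs=4096 --iodepth=1 --numjobs=1 --thread \
      --time_based --runtime=1 \
      --do_verify=1 --verify=crc32c-intel --verify_backlog=512 --verify_dump=1

The 'Could not set queue depth' lines are fio warnings that it could not tune the block device's queue settings for these NVMe-oF devices; the jobs still run, since iodepth is enforced by fio itself at the libaio level.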
00:17:18.640
00:17:18.640 job0: (groupid=0, jobs=1): err= 0: pid=3752175: Mon Jul 15 15:58:47 2024
00:17:18.640 read: IOPS=1792, BW=7169KiB/s (7341kB/s)(7176KiB/1001msec)
00:17:18.640 slat (nsec): min=6354, max=28355, avg=7845.49, stdev=1537.47
00:17:18.640 clat (usec): min=255, max=40521, avg=323.81, stdev=949.75
00:17:18.640 lat (usec): min=261, max=40530, avg=331.66, stdev=949.77
00:17:18.640 clat percentiles (usec):
00:17:18.640 | 1.00th=[ 273], 5.00th=[ 281], 10.00th=[ 285], 20.00th=[ 289],
00:17:18.640 | 30.00th=[ 293], 40.00th=[ 297], 50.00th=[ 302], 60.00th=[ 306],
00:17:18.640 | 70.00th=[ 306], 80.00th=[ 310], 90.00th=[ 318], 95.00th=[ 322],
00:17:18.640 | 99.00th=[ 371], 99.50th=[ 408], 99.90th=[ 562], 99.95th=[40633],
00:17:18.640 | 99.99th=[40633]
00:17:18.640 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets
00:17:18.640 slat (nsec): min=9431, max=46887, avg=11096.08, stdev=1874.47
00:17:18.640 clat (usec): min=143, max=725, avg=182.11, stdev=30.79
00:17:18.640 lat (usec): min=153, max=736, avg=193.20, stdev=30.97
00:17:18.640 clat percentiles (usec):
00:17:18.640 | 1.00th=[ 151], 5.00th=[ 157], 10.00th=[ 161], 20.00th=[ 165],
00:17:18.640 | 30.00th=[ 169], 40.00th=[ 172], 50.00th=[ 176], 60.00th=[ 180],
00:17:18.640 | 70.00th=[ 186], 80.00th=[ 192], 90.00th=[ 208], 95.00th=[ 231],
00:17:18.640 | 99.00th=[ 281], 99.50th=[ 318], 99.90th=[ 498], 99.95th=[ 537],
00:17:18.640 | 99.99th=[ 725]
00:17:18.640 bw ( KiB/s): min= 8192, max= 8192, per=41.08%, avg=8192.00, stdev= 0.00, samples=1
00:17:18.640 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1
00:17:18.640 lat (usec) : 250=51.85%, 500=48.05%, 750=0.08%
00:17:18.640 lat (msec) : 50=0.03%
00:17:18.640 cpu : usr=2.30%, sys=3.50%, ctx=3844, majf=0, minf=1
00:17:18.640 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:17:18.640 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:17:18.640 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:17:18.640 issued rwts: total=1794,2048,0,0 short=0,0,0,0 dropped=0,0,0,0
00:17:18.640 latency : target=0, window=0, percentile=100.00%, depth=1
00:17:18.640 job1: (groupid=0, jobs=1): err= 0: pid=3752176: Mon Jul 15 15:58:47 2024
00:17:18.640 read: IOPS=22, BW=90.5KiB/s (92.6kB/s)(92.0KiB/1017msec)
00:17:18.640 slat (nsec): min=10345, max=25854, avg=21497.83, stdev=3774.17
00:17:18.640 clat (usec): min=300, max=41077, avg=39198.08, stdev=8479.75
00:17:18.640 lat (usec): min=321, max=41087, avg=39219.58, stdev=8479.72
00:17:18.640 clat percentiles (usec):
00:17:18.640 | 1.00th=[ 302], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157],
00:17:18.640 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157],
00:17:18.640 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157],
00:17:18.640 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157],
00:17:18.640 | 99.99th=[41157]
00:17:18.640 write: IOPS=503, BW=2014KiB/s (2062kB/s)(2048KiB/1017msec); 0 zone resets
00:17:18.640 slat (usec): min=8, max=427, avg=13.38, stdev=19.47
00:17:18.640 clat (usec): min=148, max=461, avg=208.26, stdev=30.34
00:17:18.640 lat (usec): min=158, max=609, avg=221.65, stdev=36.37
00:17:18.640 clat percentiles (usec):
00:17:18.640 | 1.00th=[ 161], 5.00th=[ 169], 10.00th=[ 174], 20.00th=[ 182],
00:17:18.640 | 30.00th=[ 190], 40.00th=[ 198], 50.00th=[ 208], 60.00th=[ 217],
00:17:18.640 | 70.00th=[ 223], 80.00th=[ 229], 90.00th=[ 241], 95.00th=[ 255],
00:17:18.640 | 99.00th=[ 289], 99.50th=[ 306], 99.90th=[ 461], 99.95th=[ 461],
00:17:18.640 | 99.99th=[ 461]
00:17:18.640 bw ( KiB/s): min= 4096, max= 4096, per=20.54%, avg=4096.00, stdev= 0.00, samples=1
00:17:18.640 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1
00:17:18.640 lat (usec) : 250=89.72%, 500=6.17%
00:17:18.640 lat (msec) : 50=4.11%
00:17:18.640 cpu : usr=0.30%, sys=0.89%, ctx=538, majf=0, minf=1
00:17:18.640 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:17:18.640 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:17:18.640 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:17:18.640 issued rwts: total=23,512,0,0 short=0,0,0,0 dropped=0,0,0,0
00:17:18.640 latency : target=0, window=0, percentile=100.00%, depth=1
00:17:18.640 job2: (groupid=0, jobs=1): err= 0: pid=3752177: Mon Jul 15 15:58:47 2024
00:17:18.640 read: IOPS=147, BW=588KiB/s (602kB/s)(604KiB/1027msec)
00:17:18.640 slat (nsec): min=4803, max=38544, avg=12424.77, stdev=4697.62
00:17:18.640 clat (usec): min=253, max=41986, avg=6045.99, stdev=14109.28
00:17:18.640 lat (usec): min=261, max=42008, avg=6058.42, stdev=14110.40
00:17:18.640 clat percentiles (usec):
00:17:18.640 | 1.00th=[ 273], 5.00th=[ 293], 10.00th=[ 306], 20.00th=[ 322],
00:17:18.640 | 30.00th=[ 343], 40.00th=[ 363], 50.00th=[ 379], 60.00th=[ 449],
00:17:18.640 | 70.00th=[ 478], 80.00th=[ 545], 90.00th=[41157], 95.00th=[41157],
00:17:18.641 | 99.00th=[41157], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206],
00:17:18.641 | 99.99th=[42206]
00:17:18.641 write: IOPS=498, BW=1994KiB/s (2042kB/s)(2048KiB/1027msec); 0 zone resets
00:17:18.641 slat (nsec): min=4934, max=31372, avg=12300.09, stdev=2444.40
00:17:18.641 clat (usec): min=156, max=603, avg=200.86, stdev=38.68
00:17:18.641 lat (usec): min=169, max=619, avg=213.16, stdev=38.64
00:17:18.641 clat percentiles (usec):
00:17:18.641 | 1.00th=[ 163], 5.00th=[ 172], 10.00th=[ 178], 20.00th=[ 180],
00:17:18.641 | 30.00th=[ 186], 40.00th=[ 190], 50.00th=[ 194], 60.00th=[ 198],
00:17:18.641 | 70.00th=[ 204], 80.00th=[ 215], 90.00th=[ 233], 95.00th=[ 247],
00:17:18.641 | 99.00th=[ 347], 99.50th=[ 529], 99.90th=[ 603], 99.95th=[ 603],
00:17:18.641 | 99.99th=[ 603]
00:17:18.641 bw ( KiB/s): min= 4096, max= 4096, per=20.54%, avg=4096.00, stdev= 0.00, samples=1
00:17:18.641 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1
00:17:18.641 lat (usec) : 250=73.91%, 500=19.91%, 750=2.87%, 1000=0.15%
00:17:18.641 lat (msec) : 50=3.17%
00:17:18.641 cpu : usr=0.39%, sys=0.78%, ctx=666, majf=0, minf=2
00:17:18.641 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:17:18.641 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:17:18.641 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:17:18.641 issued rwts: total=151,512,0,0 short=0,0,0,0 dropped=0,0,0,0
00:17:18.641 latency : target=0, window=0, percentile=100.00%, depth=1
00:17:18.641 job3: (groupid=0, jobs=1): err= 0: pid=3752178: Mon Jul 15 15:58:47 2024
00:17:18.641 read: IOPS=1669, BW=6677KiB/s (6838kB/s)(6684KiB/1001msec)
00:17:18.641 slat (nsec): min=7412, max=34793, avg=8712.93, stdev=1375.16
00:17:18.641 clat (usec): min=284, max=440, avg=323.49, stdev=19.95
00:17:18.641 lat (usec): min=293, max=449, avg=332.20, stdev=20.11
00:17:18.641 clat percentiles (usec):
00:17:18.641 | 1.00th=[ 293], 5.00th=[ 297], 10.00th=[ 302], 20.00th=[ 306],
00:17:18.641 | 30.00th=[ 314], 40.00th=[ 318], 50.00th=[ 322], 60.00th=[ 326],
00:17:18.641 | 70.00th=[ 334], 80.00th=[ 338], 90.00th=[ 347], 95.00th=[ 355],
00:17:18.641 | 99.00th=[ 400], 99.50th=[ 416], 99.90th=[ 433], 99.95th=[ 441],
00:17:18.641 | 99.99th=[ 441]
00:17:18.641 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets
00:17:18.641 slat (usec): min=6, max=618, avg=11.92, stdev=16.84
00:17:18.641 clat (usec): min=158, max=556, avg=199.94, stdev=33.64
00:17:18.641 lat (usec): min=170, max=823, avg=211.86, stdev=38.20
00:17:18.641 clat percentiles (usec):
00:17:18.641 | 1.00th=[ 165], 5.00th=[ 172], 10.00th=[ 176], 20.00th=[ 180],
00:17:18.641 | 30.00th=[ 184], 40.00th=[ 188], 50.00th=[ 190], 60.00th=[ 194],
00:17:18.641 | 70.00th=[ 200], 80.00th=[ 208], 90.00th=[ 239], 95.00th=[ 289],
00:17:18.641 | 99.00th=[ 306], 99.50th=[ 310], 99.90th=[ 486], 99.95th=[ 523],
00:17:18.641 | 99.99th=[ 553]
00:17:18.641 bw ( KiB/s): min= 8192, max= 8192, per=41.08%, avg=8192.00, stdev= 0.00, samples=1
00:17:18.641 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1
00:17:18.641 lat (usec) : 250=50.31%, 500=49.64%, 750=0.05%
00:17:18.641 cpu : usr=3.40%, sys=5.50%, ctx=3722, majf=0, minf=1
00:17:18.641 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:17:18.641 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:17:18.641 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:17:18.641 issued rwts: total=1671,2048,0,0 short=0,0,0,0 dropped=0,0,0,0
00:17:18.641 latency : target=0, window=0, percentile=100.00%, depth=1
00:17:18.641
00:17:18.641 Run status group 0 (all jobs):
00:17:18.641 READ: bw=13.8MiB/s (14.5MB/s), 90.5KiB/s-7169KiB/s (92.6kB/s-7341kB/s), io=14.2MiB (14.9MB), run=1001-1027msec
00:17:18.641 WRITE: bw=19.5MiB/s (20.4MB/s), 1994KiB/s-8184KiB/s (2042kB/s-8380kB/s), io=20.0MiB (21.0MB), run=1001-1027msec
00:17:18.641
00:17:18.641 Disk stats (read/write):
00:17:18.641 nvme0n1: ios=1591/1704, merge=0/0, ticks=531/306, in_queue=837, util=87.07%
00:17:18.641 nvme0n2: ios=64/512, merge=0/0, ticks=811/98, in_queue=909, util=91.26%
00:17:18.641 nvme0n3: ios=203/512, merge=0/0, ticks=1392/94, in_queue=1486, util=93.56%
00:17:18.641 nvme0n4: ios=1590/1629, merge=0/0, ticks=568/303, in_queue=871, util=95.39%
00:17:18.641 15:58:47 nvmf_tcp.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v
00:17:18.641 [global]
00:17:18.641 thread=1
00:17:18.641 invalidate=1
00:17:18.641 rw=randwrite
00:17:18.641 time_based=1
00:17:18.641 runtime=1
00:17:18.641 ioengine=libaio
00:17:18.641 direct=1
00:17:18.641 bs=4096
00:17:18.641 iodepth=1
00:17:18.641 norandommap=0
00:17:18.641 numjobs=1
00:17:18.641
00:17:18.641 verify_dump=1
00:17:18.641 verify_backlog=512
00:17:18.641 verify_state_save=0
00:17:18.641 do_verify=1
00:17:18.641 verify=crc32c-intel
00:17:18.641 [job0]
00:17:18.641 filename=/dev/nvme0n1
00:17:18.641 [job1]
00:17:18.641 filename=/dev/nvme0n2
00:17:18.641 [job2]
00:17:18.641 filename=/dev/nvme0n3
00:17:18.641 [job3]
00:17:18.641 filename=/dev/nvme0n4
00:17:18.641 Could not set queue depth (nvme0n1)
00:17:18.641 Could not set queue depth (nvme0n2)
00:17:18.641 Could not set queue depth (nvme0n3)
00:17:18.641 Could not set queue depth (nvme0n4)
00:17:18.898 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:17:18.898 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:17:18.898 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:17:18.898 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:17:18.898 fio-3.35
00:17:18.898 Starting 4 threads
00:17:20.264
00:17:20.264 job0: (groupid=0, jobs=1): err= 0: pid=3752548: Mon Jul 15 15:58:48 2024
00:17:20.264 read: IOPS=519, BW=2077KiB/s (2127kB/s)(2100KiB/1011msec)
00:17:20.264 slat (nsec): min=6168, max=27766, avg=7599.32, stdev=2855.47
00:17:20.264 clat (usec): min=208, max=41963, avg=1547.35, stdev=7020.12
00:17:20.264 lat (usec): min=215, max=41986, avg=1554.95, stdev=7022.43
00:17:20.264 clat percentiles (usec):
00:17:20.264 | 1.00th=[ 212], 5.00th=[ 219], 10.00th=[ 225], 20.00th=[ 231],
00:17:20.264 | 30.00th=[ 243], 40.00th=[ 277], 50.00th=[ 285], 60.00th=[ 289],
00:17:20.264 | 70.00th=[ 306], 80.00th=[ 424], 90.00th=[ 445], 95.00th=[ 474],
00:17:20.264 | 99.00th=[41157], 99.50th=[41681], 99.90th=[42206], 99.95th=[42206],
00:17:20.264 | 99.99th=[42206]
00:17:20.264 write: IOPS=1012, BW=4051KiB/s (4149kB/s)(4096KiB/1011msec); 0 zone resets
00:17:20.264 slat (nsec): min=8627, max=82547, avg=9843.89, stdev=2591.85
00:17:20.264 clat (usec): min=131, max=336, avg=176.08, stdev=27.75
00:17:20.264 lat (usec): min=141, max=419, avg=185.93, stdev=28.28
00:17:20.264 clat percentiles (usec):
00:17:20.264 | 1.00th=[ 137], 5.00th=[ 141], 10.00th=[ 143], 20.00th=[ 149],
00:17:20.264 | 30.00th=[ 153], 40.00th=[ 161], 50.00th=[ 178], 60.00th=[ 186],
00:17:20.264 | 70.00th=[ 192], 80.00th=[ 200], 90.00th=[ 212], 95.00th=[ 223],
00:17:20.264 | 99.00th=[ 243], 99.50th=[ 245], 99.90th=[ 297], 99.95th=[ 338],
00:17:20.264 | 99.99th=[ 338]
00:17:20.264 bw ( KiB/s): min= 8192, max= 8192, per=57.77%, avg=8192.00, stdev= 0.00, samples=1
00:17:20.264 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1
00:17:20.264 lat (usec) : 250=77.02%, 500=21.76%, 750=0.19%
00:17:20.264 lat (msec) : 50=1.03%
00:17:20.264 cpu : usr=0.99%, sys=1.19%, ctx=1551, majf=0, minf=2
00:17:20.264 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:17:20.264 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:17:20.264 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:17:20.264 issued rwts: total=525,1024,0,0 short=0,0,0,0 dropped=0,0,0,0
00:17:20.264 latency : target=0, window=0, percentile=100.00%, depth=1
00:17:20.264 job1: (groupid=0, jobs=1): err= 0: pid=3752549: Mon Jul 15 15:58:48 2024
00:17:20.264 read: IOPS=451, BW=1804KiB/s (1848kB/s)(1808KiB/1002msec)
00:17:20.264 slat (nsec): min=6318, max=26114, avg=7853.35, stdev=3329.71
00:17:20.264 clat (usec): min=207, max=42048, avg=1980.34, stdev=8238.29
00:17:20.264 lat (usec): min=215, max=42070, avg=1988.19, stdev=8240.90
00:17:20.264 clat percentiles (usec):
00:17:20.264 | 1.00th=[ 215], 5.00th=[ 219], 10.00th=[ 223], 20.00th=[ 227],
00:17:20.264 | 30.00th=[ 233], 40.00th=[ 237], 50.00th=[ 241], 60.00th=[ 245],
00:17:20.264 | 70.00th=[ 247], 80.00th=[ 253], 90.00th=[ 265], 95.00th=[ 367],
00:17:20.264 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206],
00:17:20.264 | 99.99th=[42206]
00:17:20.264 write: IOPS=510, BW=2044KiB/s (2093kB/s)(2048KiB/1002msec); 0 zone resets
00:17:20.264 slat (nsec): min=8892, max=41543, avg=10134.86, stdev=1889.93
00:17:20.264 clat (usec): min=132, max=286, avg=184.60, stdev=23.04
00:17:20.264 lat (usec): min=142, max=296, avg=194.73, stdev=23.14
00:17:20.264 clat percentiles (usec):
00:17:20.264 | 1.00th=[ 145], 5.00th=[ 155], 10.00th=[ 159], 20.00th=[ 165],
00:17:20.264 | 30.00th=[ 172], 40.00th=[ 178], 50.00th=[ 182], 60.00th=[ 188],
00:17:20.264 | 70.00th=[ 194], 80.00th=[ 202], 90.00th=[ 215], 95.00th=[ 231],
00:17:20.264 | 99.00th=[ 251], 99.50th=[ 277], 99.90th=[ 289], 99.95th=[ 289],
00:17:20.264 | 99.99th=[ 289]
00:17:20.264 bw ( KiB/s): min= 4096, max= 4096, per=28.89%, avg=4096.00, stdev= 0.00, samples=1
00:17:20.264 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1
00:17:20.264 lat (usec) : 250=87.24%, 500=10.58%, 750=0.10%
00:17:20.264 lat (msec) : 10=0.10%, 50=1.97%
00:17:20.264 cpu : usr=0.20%, sys=1.10%, ctx=966, majf=0, minf=1
00:17:20.264 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:17:20.264 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:17:20.264 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:17:20.264 issued rwts: total=452,512,0,0 short=0,0,0,0 dropped=0,0,0,0
00:17:20.264 latency : target=0, window=0, percentile=100.00%, depth=1
00:17:20.264 job2: (groupid=0, jobs=1): err= 0: pid=3752550: Mon Jul 15 15:58:48 2024
00:17:20.264 read: IOPS=1228, BW=4915KiB/s (5033kB/s)(4920KiB/1001msec)
00:17:20.264 slat (nsec): min=6774, max=26372, avg=7758.82, stdev=1609.31
00:17:20.264 clat (usec): min=209, max=41962, avg=547.94, stdev=3499.99
00:17:20.264 lat (usec): min=216, max=41982, avg=555.70, stdev=3501.04
00:17:20.264 clat percentiles (usec):
00:17:20.264 | 1.00th=[ 217], 5.00th=[ 221], 10.00th=[ 225], 20.00th=[ 229],
00:17:20.264 | 30.00th=[ 233], 40.00th=[ 235], 50.00th=[ 239], 60.00th=[ 241],
00:17:20.264 | 70.00th=[ 245], 80.00th=[ 249], 90.00th=[ 253], 95.00th=[ 260],
00:17:20.264 | 99.00th=[ 388], 99.50th=[41157], 99.90th=[41681], 99.95th=[42206],
00:17:20.264 | 99.99th=[42206]
00:17:20.264 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets
00:17:20.264 slat (usec): min=9, max=26919, avg=28.70, stdev=686.58
00:17:20.264 clat (usec): min=137, max=780, avg=171.89, stdev=35.20
00:17:20.264 lat (usec): min=148, max=27456, avg=200.59, stdev=696.74
00:17:20.264 clat percentiles (usec):
00:17:20.264 | 1.00th=[ 141], 5.00th=[ 145], 10.00th=[ 149], 20.00th=[ 153],
00:17:20.264 | 30.00th=[ 155], 40.00th=[ 159], 50.00th=[ 163], 60.00th=[ 169],
00:17:20.264 | 70.00th=[ 180], 80.00th=[ 190], 90.00th=[ 202], 95.00th=[ 212],
00:17:20.264 | 99.00th=[ 265], 99.50th=[ 306], 99.90th=[ 668], 99.95th=[ 783],
00:17:20.264 | 99.99th=[ 783]
00:17:20.264 bw ( KiB/s): min= 4096, max= 4096, per=28.89%, avg=4096.00, stdev= 0.00, samples=1
00:17:20.264 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1
00:17:20.264 lat (usec) : 250=91.97%, 500=7.48%, 750=0.11%, 1000=0.04%
00:17:20.264 lat (msec) : 4=0.04%, 20=0.04%, 50=0.33%
00:17:20.264 cpu : usr=1.80%, sys=2.40%, ctx=2768, majf=0, minf=1
00:17:20.264 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:17:20.264 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:17:20.264 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:17:20.264 issued rwts: total=1230,1536,0,0 short=0,0,0,0 dropped=0,0,0,0
00:17:20.264 latency : target=0, window=0, percentile=100.00%, depth=1
00:17:20.264 job3: (groupid=0, jobs=1): err= 0: pid=3752551: Mon Jul 15 15:58:48 2024
00:17:20.264 read: IOPS=426, BW=1705KiB/s (1746kB/s)(1720KiB/1009msec)
00:17:20.264 slat (nsec): min=6309, max=30394, avg=7845.20, stdev=3092.41
00:17:20.264 clat (usec): min=225, max=42056, avg=2084.56, stdev=8431.75
00:17:20.264 lat (usec): min=232, max=42078, avg=2092.40, stdev=8434.53
00:17:20.264 clat percentiles (usec):
00:17:20.264 | 1.00th=[ 231], 5.00th=[ 237], 10.00th=[ 239], 20.00th=[ 245],
00:17:20.264 | 30.00th=[ 249], 40.00th=[ 251], 50.00th=[ 255], 60.00th=[ 260],
00:17:20.264 | 70.00th=[ 265], 80.00th=[ 273], 90.00th=[ 453], 95.00th=[ 469],
00:17:20.264 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206],
00:17:20.264 | 99.99th=[42206]
00:17:20.264 write: IOPS=507, BW=2030KiB/s (2078kB/s)(2048KiB/1009msec); 0 zone resets
00:17:20.264 slat (nsec): min=8731, max=35160, avg=10171.06, stdev=1636.06
00:17:20.264 clat (usec): min=142, max=670, avg=197.59, stdev=43.86
00:17:20.264 lat (usec): min=152, max=680, avg=207.77, stdev=44.14
00:17:20.264 clat percentiles (usec):
00:17:20.265 | 1.00th=[ 155], 5.00th=[ 165], 10.00th=[ 172], 20.00th=[ 178],
00:17:20.265 | 30.00th=[ 184], 40.00th=[ 186], 50.00th=[ 192], 60.00th=[ 196],
00:17:20.265 | 70.00th=[ 200], 80.00th=[ 206], 90.00th=[ 223], 95.00th=[ 243],
00:17:20.265 | 99.00th=[ 338], 99.50th=[ 594], 99.90th=[ 668], 99.95th=[ 668],
00:17:20.265 | 99.99th=[ 668]
00:17:20.265 bw ( KiB/s): min= 4096, max= 4096, per=28.89%, avg=4096.00, stdev= 0.00, samples=1
00:17:20.265 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1
00:17:20.265 lat (usec) : 250=69.11%, 500=28.45%, 750=0.42%
00:17:20.265 lat (msec) : 50=2.02%
00:17:20.265 cpu : usr=0.50%, sys=0.79%, ctx=942, majf=0, minf=1
00:17:20.265 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:17:20.265 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:17:20.265 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:17:20.265 issued rwts: total=430,512,0,0 short=0,0,0,0 dropped=0,0,0,0
00:17:20.265 latency : target=0, window=0, percentile=100.00%, depth=1
00:17:20.265
00:17:20.265 Run status group 0 (all jobs):
00:17:20.265 READ: bw=10.2MiB/s (10.7MB/s), 1705KiB/s-4915KiB/s (1746kB/s-5033kB/s), io=10.3MiB (10.8MB), run=1001-1011msec
00:17:20.265 WRITE: bw=13.8MiB/s (14.5MB/s), 2030KiB/s-6138KiB/s (2078kB/s-6285kB/s), io=14.0MiB (14.7MB), run=1001-1011msec
00:17:20.265
00:17:20.265 Disk stats (read/write):
00:17:20.265 nvme0n1: ios=571/1024, merge=0/0, ticks=668/181, in_queue=849, util=86.57%
00:17:20.265 nvme0n2: ios=470/512, merge=0/0, ticks=1630/92, in_queue=1722, util=89.39%
00:17:20.265 nvme0n3: ios=1042/1024, merge=0/0, ticks=936/166, in_queue=1102, util=94.46%
00:17:20.265 nvme0n4: ios=483/512, merge=0/0, ticks=804/102, in_queue=906, util=95.79%
00:17:20.265 15:58:48 nvmf_tcp.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v
00:17:20.265 [global]
00:17:20.265 thread=1
00:17:20.265 invalidate=1
00:17:20.265 rw=write
00:17:20.265 time_based=1
00:17:20.265 runtime=1
00:17:20.265 ioengine=libaio
00:17:20.265 direct=1
00:17:20.265 bs=4096
00:17:20.265 iodepth=128
00:17:20.265 norandommap=0
00:17:20.265 numjobs=1
00:17:20.265
00:17:20.265 verify_dump=1
00:17:20.265 verify_backlog=512
00:17:20.265 verify_state_save=0
00:17:20.265 do_verify=1
00:17:20.265 verify=crc32c-intel
00:17:20.265 [job0]
00:17:20.265 filename=/dev/nvme0n1
00:17:20.265 [job1]
00:17:20.265 filename=/dev/nvme0n2
00:17:20.265 [job2]
00:17:20.265 filename=/dev/nvme0n3
00:17:20.265 [job3]
00:17:20.265 filename=/dev/nvme0n4
00:17:20.265 Could not set queue depth (nvme0n1)
00:17:20.265 Could not set queue depth (nvme0n2)
00:17:20.265 Could not set queue depth (nvme0n3)
00:17:20.265 Could not set queue depth (nvme0n4)
00:17:20.265 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128
00:17:20.265 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128
00:17:20.265 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128
00:17:20.265 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128
00:17:20.265 fio-3.35
00:17:20.265 Starting 4 threads
00:17:21.633
00:17:21.633 job0: (groupid=0, jobs=1): err= 0: pid=3752925: Mon Jul 15 15:58:50 2024
00:17:21.633 read: IOPS=2565, BW=10.0MiB/s (10.5MB/s)(10.5MiB/1047msec)
00:17:21.633 slat (nsec): min=1025, max=17473k, avg=144290.59, stdev=1010707.47
00:17:21.633 clat (msec): min=3, max=102, avg=17.56, stdev=16.21
00:17:21.633 lat (msec): min=3, max=102, avg=17.71, stdev=16.31
00:17:21.633 clat percentiles (msec):
00:17:21.633 | 1.00th=[ 6], 5.00th=[ 8], 10.00th=[ 9], 20.00th=[ 10],
00:17:21.633 | 30.00th=[ 11], 40.00th=[ 12], 50.00th=[ 14], 60.00th=[ 16],
00:17:21.633 | 70.00th=[ 17], 80.00th=[ 18], 90.00th=[ 24], 95.00th=[ 48],
00:17:21.633 | 99.00th=[ 97], 99.50th=[ 100], 99.90th=[ 103], 99.95th=[ 103],
00:17:21.633 | 99.99th=[ 103]
00:17:21.633 write: IOPS=4005, BW=15.6MiB/s (16.4MB/s)(15.8MiB/1012msec); 0 zone resets
00:17:21.633 slat (nsec): min=1861, max=10592k, avg=105298.54, stdev=559342.17
00:17:21.633 clat (usec): min=372, max=63857, avg=16093.21, stdev=10548.11
00:17:21.633 lat (usec): min=376, max=63865, avg=16198.51, stdev=10592.21
00:17:21.633 clat percentiles (usec):
00:17:21.633 | 1.00th=[ 586], 5.00th=[ 2442], 10.00th=[ 6652], 20.00th=[ 9634],
00:17:21.633 | 30.00th=[10159], 40.00th=[11338], 50.00th=[14353], 60.00th=[16319],
00:17:21.633 | 70.00th=[17695], 80.00th=[20579], 90.00th=[28443], 95.00th=[39060],
00:17:21.633 | 99.00th=[57410], 99.50th=[61080], 99.90th=[63701], 99.95th=[63701],
00:17:21.633 | 99.99th=[63701]
00:17:21.633 bw ( KiB/s): min=15328, max=16080, per=23.25%, avg=15704.00, stdev=531.74, samples=2
00:17:21.633 iops : min= 3832, max= 4020, avg=3926.00, stdev=132.94, samples=2
00:17:21.633 lat (usec) : 500=0.10%, 750=0.80%
00:17:21.633 lat (msec) : 2=0.69%, 4=1.91%, 10=21.54%, 20=56.94%, 50=14.61%
00:17:21.633 lat (msec) : 100=3.29%, 250=0.12%
00:17:21.633 cpu : usr=1.88%, sys=3.07%, ctx=514, majf=0, minf=1
00:17:21.633 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2%
00:17:21.633 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:17:21.633 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:17:21.633 issued rwts: total=3584,4054,0,0 short=0,0,0,0 dropped=0,0,0,0
00:17:21.633 latency : target=0, window=0, percentile=100.00%, depth=128
00:17:21.633 job1: (groupid=0, jobs=1): err= 0: pid=3752926: Mon Jul 15 15:58:50 2024
00:17:21.633 read: IOPS=4009, BW=15.7MiB/s (16.4MB/s)(15.7MiB/1004msec)
00:17:21.633 slat (nsec): min=1265, max=15596k, avg=82489.23, stdev=556418.87
00:17:21.633 clat (usec): min=4487, max=56481, avg=10237.13, stdev=5426.21
00:17:21.633 lat (usec): min=4882, max=56506, avg=10319.62, stdev=5481.39
00:17:21.633 clat percentiles (usec):
00:17:21.633 | 1.00th=[ 5538], 5.00th=[ 6587], 10.00th=[ 7111], 20.00th=[ 7635],
00:17:21.633 | 30.00th=[ 7963], 40.00th=[ 8356], 50.00th=[ 8979], 60.00th=[ 9896],
00:17:21.633 | 70.00th=[10421], 80.00th=[11207], 90.00th=[12518], 95.00th=[18220],
00:17:21.633 | 99.00th=[41157], 99.50th=[49546], 99.90th=[49546], 99.95th=[49546],
00:17:21.633 | 99.99th=[56361]
00:17:21.633 write: IOPS=5154, BW=20.1MiB/s (21.1MB/s)(20.2MiB/1002msec); 0 zone resets
00:17:21.633 slat (nsec): min=1984, max=18580k, avg=106762.21, stdev=789600.99
00:17:21.633 clat (usec): min=538, max=54546, avg=14416.11, stdev=10430.32
00:17:21.633 lat (usec): min=3044, max=54568, avg=14522.87, stdev=10506.14
00:17:21.633 clat percentiles (usec):
00:17:21.633 | 1.00th=[ 4424], 5.00th=[ 6718], 10.00th=[ 7439], 20.00th=[ 7767],
00:17:21.633 | 30.00th=[ 7963], 40.00th=[ 8160], 50.00th=[ 9241], 60.00th=[10421],
00:17:21.633 | 70.00th=[15926], 80.00th=[17433], 90.00th=[31589], 95.00th=[39060],
00:17:21.633 | 99.00th=[49546], 99.50th=[50594], 99.90th=[51119], 99.95th=[51119],
00:17:21.633 | 99.99th=[54789]
00:17:21.633 bw ( KiB/s): min=15480, max=25480, per=33.73%, avg=20480.00, stdev=7071.07, samples=2
00:17:21.633 iops : min= 3870, max= 6370, avg=5120.00, stdev=1767.77, samples=2
00:17:21.633 lat (usec) : 750=0.01%
00:17:21.633 lat (msec) : 4=0.05%, 10=58.84%, 20=29.43%, 50=11.28%, 100=0.39%
00:17:21.633 cpu : usr=4.20%, sys=4.70%, ctx=447, majf=0, minf=1
00:17:21.633 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4%
00:17:21.633 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:17:21.633 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:17:21.633 issued rwts: total=5120,5165,0,0 short=0,0,0,0 dropped=0,0,0,0
00:17:21.633 latency : target=0, window=0, percentile=100.00%, depth=128
00:17:21.633 job2: (groupid=0, jobs=1): err= 0: pid=3752928: Mon Jul 15 15:58:50 2024
00:17:21.633 read: IOPS=4072, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1007msec)
00:17:21.633 slat (nsec): min=1127, max=14109k, avg=127475.73, stdev=830775.45
00:17:21.633 clat (usec): min=3536, max=51167, avg=15463.62, stdev=7380.07
00:17:21.633 lat (usec): min=4587, max=51175, avg=15591.09, stdev=7434.03
00:17:21.633 clat percentiles (usec):
00:17:21.633 | 1.00th=[ 8094], 5.00th=[ 9503], 10.00th=[10552], 20.00th=[11469],
00:17:21.633 | 30.00th=[11731], 40.00th=[12387], 50.00th=[12911], 60.00th=[14353],
00:17:21.633 | 70.00th=[16712], 80.00th=[17433], 90.00th=[21365], 95.00th=[28181],
00:17:21.633 | 99.00th=[46924], 99.50th=[49021], 99.90th=[51119], 99.95th=[51119],
00:17:21.633 | 99.99th=[51119]
00:17:21.633 write: IOPS=4575, BW=17.9MiB/s (18.7MB/s)(18.0MiB/1007msec); 0 zone resets
00:17:21.633 slat (nsec): min=1946, max=9034.3k, avg=97369.03, stdev=585198.61
00:17:21.633 clat (usec): min=1411, max=41298, avg=13808.48, stdev=5546.35
00:17:21.633 lat (usec): min=1424, max=41313, avg=13905.85, stdev=5569.99
00:17:21.633 clat percentiles (usec):
00:17:21.633 | 1.00th=[ 4490], 5.00th=[ 6652], 10.00th=[ 8029], 20.00th=[ 9896],
00:17:21.633 | 30.00th=[10945], 40.00th=[11469], 50.00th=[12387], 60.00th=[13042],
00:17:21.633 | 70.00th=[15926], 80.00th=[17171], 90.00th=[22152], 95.00th=[24511],
00:17:21.633 | 99.00th=[31851], 99.50th=[34866], 99.90th=[41157], 99.95th=[41157],
00:17:21.633 | 99.99th=[41157]
00:17:21.633 bw ( KiB/s): min=15400, max=20480, per=26.57%, avg=17940.00, stdev=3592.10, samples=2
00:17:21.633 iops : min= 3850, max= 5120, avg=4485.00, stdev=898.03, samples=2
00:17:21.633 lat (msec) : 2=0.15%, 4=0.17%, 10=14.07%, 20=73.09%, 50=12.39%
00:17:21.633 lat (msec) : 100=0.14%
00:17:21.633 cpu : usr=1.99%, sys=5.17%, ctx=418, majf=0, minf=1
00:17:21.633 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3%
00:17:21.633 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:17:21.633 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:17:21.633 issued rwts: total=4101,4608,0,0 short=0,0,0,0 dropped=0,0,0,0
00:17:21.633 latency : target=0, window=0, percentile=100.00%, depth=128
00:17:21.633 job3: (groupid=0, jobs=1): err= 0: pid=3752929: Mon Jul 15 15:58:50 2024
00:17:21.633 read: IOPS=4059, BW=15.9MiB/s (16.6MB/s)(16.0MiB/1009msec)
00:17:21.633 slat (nsec): min=1188, max=28523k, avg=124854.35, stdev=975771.24
00:17:21.633 clat (usec): min=3823, max=63881, avg=17933.58, stdev=11488.10
00:17:21.633 lat (usec): min=3828, max=73678, avg=18076.47, stdev=11555.51
00:17:21.633 clat percentiles (usec):
00:17:21.633 | 1.00th=[ 5932], 5.00th=[ 8455], 10.00th=[ 8979], 20.00th=[10028],
00:17:21.633 | 30.00th=[11338], 40.00th=[11863], 50.00th=[12780], 60.00th=[15270],
00:17:21.633 | 70.00th=[19530], 80.00th=[23462], 90.00th=[36963], 95.00th=[45876],
00:17:21.633 | 99.00th=[56361], 99.50th=[56361], 99.90th=[58459], 99.95th=[58459],
00:17:21.633 | 99.99th=[63701]
00:17:21.633 write: IOPS=4288, BW=16.8MiB/s (17.6MB/s)(16.9MiB/1009msec); 0 zone resets
00:17:21.633 slat (usec): min=2, max=16345, avg=88.06, stdev=579.43
00:17:21.633 clat (usec): min=940, max=62706, avg=12596.40, stdev=6169.28
00:17:21.633 lat (usec): min=976, max=62709, avg=12684.46, stdev=6196.20
00:17:21.633 clat percentiles (usec):
00:17:21.633 | 1.00th=[ 3097], 5.00th=[ 6194], 10.00th=[ 7504], 20.00th=[ 8356],
00:17:21.633 | 30.00th=[ 9765], 40.00th=[10945], 50.00th=[11338], 60.00th=[11731],
00:17:21.633 | 70.00th=[13304], 80.00th=[15664], 90.00th=[18220], 95.00th=[24511],
00:17:21.633 | 99.00th=[31851], 99.50th=[42730], 99.90th=[45876], 99.95th=[45876],
00:17:21.633 | 99.99th=[45876]
00:17:21.633 bw ( KiB/s): min=13112, max=20480, per=25.97%, avg=16796.00, stdev=5209.96, samples=2
00:17:21.633 iops : min= 3278, max= 5120, avg=4199.00, stdev=1302.49, samples=2
00:17:21.633 lat (usec) : 1000=0.01%
00:17:21.633 lat (msec) : 2=0.25%, 4=1.16%, 10=24.46%, 20=56.05%, 50=16.69%
00:17:21.634 lat (msec) : 100=1.38%
00:17:21.634 cpu : usr=2.98%, sys=5.16%, ctx=358, majf=0, minf=1
00:17:21.634 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3%
00:17:21.634 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:17:21.634 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:17:21.634 issued rwts: total=4096,4327,0,0 short=0,0,0,0 dropped=0,0,0,0
00:17:21.634 latency : target=0, window=0, percentile=100.00%, depth=128
00:17:21.634
00:17:21.634 Run status group 0 (all jobs):
00:17:21.634 READ: bw=61.0MiB/s (64.0MB/s), 13.8MiB/s-15.9MiB/s (14.5MB/s-16.7MB/s), io=61.7MiB (64.7MB), run=1004-1012msec
00:17:21.634 WRITE: bw=65.9MiB/s (69.1MB/s), 15.6MiB/s-17.9MiB/s (16.4MB/s-18.7MB/s), io=66.7MiB (70.0MB), run=1004-1012msec
00:17:21.634
00:17:21.634 Disk stats (read/write):
00:17:21.634 nvme0n1: ios=3103/3469, merge=0/0, ticks=28902/21756, in_queue=50658, util=99.10%
00:17:21.634 nvme0n2: ios=2580/3072, merge=0/0, ticks=21254/41137, in_queue=62391, util=97.84%
00:17:21.634 nvme0n3: ios=3242/3584, merge=0/0, ticks=32760/30610, in_queue=63370, util=98.48%
00:17:21.634 nvme0n4: ios=3324/3584, merge=0/0, ticks=31765/23019, in_queue=54784, util=87.51%
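Note the effect of the wrapper's -d flag, the only parameter that changed between the -d 1 and -d 128 runs: it becomes iodepth in the generated job file, i.e. how many I/Os libaio keeps in flight per device. In job-file terms the entire difference is (a sketch):

  [global]
  iodepth=128    # was iodepth=1 in the earlier runs

Comparing this run with the -d 1 write run above, aggregate write bandwidth roughly tripled (WRITE: 65.9MiB/s vs 19.5MiB/s) while completion latencies moved from microseconds into milliseconds (job0 write clat avg ~16 msec here vs ~182 usec at iodepth=1). That is the usual queue-depth trade-off: more throughput at the cost of per-I/O latency.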
00:17:21.634 15:58:50 nvmf_tcp.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v
00:17:21.634 [global]
00:17:21.634 thread=1
00:17:21.634 invalidate=1
00:17:21.634 rw=randwrite
00:17:21.634 time_based=1
00:17:21.634 runtime=1
00:17:21.634 ioengine=libaio
00:17:21.634 direct=1
00:17:21.634 bs=4096
00:17:21.634 iodepth=128
00:17:21.634 norandommap=0
00:17:21.634 numjobs=1
00:17:21.634
00:17:21.634 verify_dump=1
00:17:21.634 verify_backlog=512
00:17:21.634 verify_state_save=0
00:17:21.634 do_verify=1
00:17:21.634 verify=crc32c-intel
00:17:21.634 [job0]
00:17:21.634 filename=/dev/nvme0n1
00:17:21.634 [job1]
00:17:21.634 filename=/dev/nvme0n2
00:17:21.634 [job2]
00:17:21.634 filename=/dev/nvme0n3
00:17:21.634 [job3]
00:17:21.634 filename=/dev/nvme0n4
00:17:21.634 Could not set queue depth (nvme0n1)
00:17:21.634 Could not set queue depth (nvme0n2)
00:17:21.634 Could not set queue depth (nvme0n3)
00:17:21.634 Could not set queue depth (nvme0n4)
00:17:21.890 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128
00:17:21.890 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128
00:17:21.890 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128
00:17:21.890 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128
00:17:21.890 fio-3.35
00:17:21.890 Starting 4 threads
00:17:23.259
00:17:23.259 job0: (groupid=0, jobs=1): err= 0: pid=3753297: Mon Jul 15 15:58:52 2024
00:17:23.259 read: IOPS=2565, BW=10.0MiB/s (10.5MB/s)(10.5MiB/1047msec)
00:17:23.259 slat (nsec): min=1334, max=20756k, avg=200373.67, stdev=1355756.97
00:17:23.259 clat (msec): min=3, max=100, avg=24.44, stdev=20.03
00:17:23.259 lat (msec): min=3, max=100, avg=24.64, stdev=20.13
00:17:23.259 clat percentiles (msec):
00:17:23.259 | 1.00th=[ 6], 5.00th=[ 9], 10.00th=[ 10], 20.00th=[ 11],
00:17:23.259 | 30.00th=[ 11], 40.00th=[ 11], 50.00th=[ 18], 60.00th=[ 21],
00:17:23.259 | 70.00th=[ 29], 80.00th=[ 36], 90.00th=[ 52], 95.00th=[ 72],
00:17:23.259 | 99.00th=[ 97], 99.50th=[ 99], 99.90th=[ 101], 99.95th=[ 101],
00:17:23.259 | 99.99th=[ 101]
00:17:23.259 write: IOPS=2934, BW=11.5MiB/s (12.0MB/s)(12.0MiB/1047msec); 0 zone resets
00:17:23.259 slat (usec): min=2, max=17289, avg=145.33, stdev=793.31
00:17:23.259 clat (msec): min=2, max=100, avg=21.79, stdev=16.06
00:17:23.259 lat (msec): min=2, max=100, avg=21.93, stdev=16.15
00:17:23.259 clat percentiles (msec):
00:17:23.259 | 1.00th=[ 4], 5.00th=[ 8], 10.00th=[ 12], 20.00th=[ 17],
00:17:23.259 | 30.00th=[ 17], 40.00th=[ 17], 50.00th=[ 18], 60.00th=[ 18],
00:17:23.259 | 70.00th=[ 18], 80.00th=[ 25], 90.00th=[ 33], 95.00th=[ 64],
00:17:23.259 | 99.00th=[ 92], 99.50th=[ 93], 99.90th=[ 99], 99.95th=[ 101],
00:17:23.259 | 99.99th=[ 101]
00:17:23.259 bw ( KiB/s): min= 8384, max=16176, per=20.23%, avg=12280.00, stdev=5509.78, samples=2
00:17:23.259 iops : min= 2096, max= 4044, avg=3070.00, stdev=1377.44, samples=2
00:17:23.259 lat (msec) : 4=1.15%, 10=12.82%, 20=54.22%, 50=23.34%, 100=8.35%
00:17:23.259 lat (msec) : 250=0.12%
00:17:23.259 cpu : usr=2.77%, sys=2.96%, ctx=395, majf=0, minf=1
00:17:23.259 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9%
00:17:23.259 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:17:23.259 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:17:23.259 issued rwts: total=2686,3072,0,0 short=0,0,0,0 dropped=0,0,0,0
00:17:23.259 latency : target=0, window=0, percentile=100.00%, depth=128
00:17:23.259 job1: (groupid=0, jobs=1): err= 0: pid=3753301: Mon Jul 15 15:58:52 2024
00:17:23.259 read: IOPS=5109, BW=20.0MiB/s (20.9MB/s)(20.0MiB/1002msec)
00:17:23.259 slat (nsec): min=1280, max=15596k, avg=82489.23, stdev=556418.87
00:17:23.259 clat (usec): min=4487, max=56481, avg=10237.13, stdev=5426.21
00:17:23.259 lat (usec): min=4882, max=56506, avg=10319.62, stdev=5481.39
00:17:23.259 clat percentiles (usec):
00:17:23.259 | 1.00th=[ 5538], 5.00th=[ 6587], 10.00th=[ 7111], 20.00th=[ 7635],
00:17:23.259 | 30.00th=[ 7963], 40.00th=[ 8356], 50.00th=[ 8979], 60.00th=[ 9896],
00:17:23.259 | 70.00th=[10421], 80.00th=[11207], 90.00th=[12518], 95.00th=[18220],
00:17:23.259 | 99.00th=[41157], 99.50th=[49546], 99.90th=[49546], 99.95th=[49546],
00:17:23.259 | 99.99th=[56361]
00:17:23.259 write: IOPS=5154, BW=20.1MiB/s (21.1MB/s)(20.2MiB/1002msec); 0 zone resets
00:17:23.259 slat (nsec): min=1984, max=18580k, avg=106762.21, stdev=789600.99
00:17:23.259 clat (usec): min=538, max=54546, avg=14416.11, stdev=10430.32
00:17:23.259 lat (usec): min=3044, max=54568, avg=14522.87, stdev=10506.14
00:17:23.259 clat percentiles (usec):
00:17:23.259 | 1.00th=[ 4424], 5.00th=[ 6718], 10.00th=[ 7439], 20.00th=[ 7767],
00:17:23.259 | 30.00th=[ 7963], 40.00th=[ 8160], 50.00th=[ 9241], 60.00th=[10421],
00:17:23.259 | 70.00th=[15926], 80.00th=[17433], 90.00th=[31589], 95.00th=[39060],
00:17:23.259 | 99.00th=[49546], 99.50th=[50594], 99.90th=[51119], 99.95th=[51119],
00:17:23.259 | 99.99th=[54789]
00:17:23.259 bw ( KiB/s): min=15480, max=25480, per=33.73%, avg=20480.00, stdev=7071.07, samples=2
00:17:23.259 iops : min= 3870, max= 6370, avg=5120.00, stdev=1767.77, samples=2
00:17:23.259 lat (usec) : 750=0.01%
00:17:23.259 lat (msec) : 4=0.05%, 10=58.84%, 20=29.43%, 50=11.28%, 100=0.39%
00:17:23.259 cpu : usr=4.20%, sys=4.70%, ctx=447, majf=0, minf=1
00:17:23.259 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4%
00:17:23.259 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:17:23.259 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:17:23.259 issued rwts: total=5120,5165,0,0 short=0,0,0,0 dropped=0,0,0,0
00:17:23.259 latency : target=0, window=0, percentile=100.00%, depth=128
00:17:23.259 job2: (groupid=0, jobs=1): err= 0: pid=3753308: Mon Jul 15 15:58:52 2024
00:17:23.259 read: IOPS=3057, BW=11.9MiB/s (12.5MB/s)(12.5MiB/1046msec)
00:17:23.259 slat (nsec): min=1361, max=18405k, avg=137997.65, stdev=1025974.56
00:17:23.260 clat (usec): min=4784, max=81976, avg=17839.23, stdev=12843.48
00:17:23.260 lat (usec): min=4795, max=81982, avg=17977.23, stdev=12921.32
00:17:23.260 clat percentiles (usec):
00:17:23.260 | 1.00th=[ 9241], 5.00th=[ 9372], 10.00th=[ 9634], 20.00th=[ 9896],
00:17:23.260 | 30.00th=[10421], 40.00th=[12518], 50.00th=[12911], 60.00th=[13829],
00:17:23.260 | 70.00th=[17433], 80.00th=[23200], 90.00th=[31065], 95.00th=[39060],
00:17:23.260 | 99.00th=[76022], 99.50th=[79168], 99.90th=[82314], 99.95th=[82314],
00:17:23.260 | 99.99th=[82314]
00:17:23.260 write: IOPS=3426, BW=13.4MiB/s (14.0MB/s)(14.0MiB/1046msec); 0 zone resets
00:17:23.260 slat (usec): min=2, max=14077, avg=148.82, stdev=788.73
00:17:23.260 clat (usec): min=1554, max=98197, avg=21111.04, stdev=15834.42
00:17:23.260 lat (usec): min=1569, max=98209, avg=21259.86, stdev=15937.69
00:17:23.260 clat percentiles (usec):
00:17:23.260 | 1.00th=[ 3851], 5.00th=[ 7701], 10.00th=[ 8717], 20.00th=[ 9896],
00:17:23.260 | 30.00th=[11994], 40.00th=[15795], 50.00th=[16319], 60.00th=[17433],
00:17:23.260 | 70.00th=[25297], 80.00th=[28967], 90.00th=[34866], 95.00th=[47449],
00:17:23.260 | 99.00th=[92799], 99.50th=[95945], 99.90th=[98042], 99.95th=[98042],
00:17:23.260 | 99.99th=[98042]
00:17:23.260 bw ( KiB/s): min=12272, max=16384, per=23.60%, avg=14328.00, stdev=2907.62, samples=2
00:17:23.260 iops : min= 3068, max= 4096, avg=3582.00, stdev=726.91, samples=2
00:17:23.260 lat (msec) : 2=0.03%, 4=0.62%, 10=20.44%, 20=48.85%, 50=25.64%
00:17:23.260 lat (msec) : 100=4.42%
00:17:23.260 cpu : usr=2.39%, sys=4.50%, ctx=353, majf=0, minf=1
00:17:23.260 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1%
00:17:23.260 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:17:23.260 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:17:23.260 issued rwts: total=3198,3584,0,0 short=0,0,0,0 dropped=0,0,0,0
00:17:23.260 latency : target=0, window=0, percentile=100.00%, depth=128
00:17:23.260 job3: (groupid=0, jobs=1): err= 0: pid=3753311: Mon Jul 15 15:58:52 2024
00:17:23.260 read: IOPS=3541, BW=13.8MiB/s (14.5MB/s)(14.0MiB/1012msec)
00:17:23.260 slat (nsec): min=1188, max=28006k, avg=124854.35, stdev=975771.24
00:17:23.260 clat (usec): min=5869, max=47247, avg=15420.19, stdev=6311.03
00:17:23.260 lat (usec): min=5879, max=73678, avg=15545.04, stdev=6401.84
00:17:23.260 clat percentiles (usec):
00:17:23.260 | 1.00th=[ 8979], 5.00th=[ 9765], 10.00th=[10028], 20.00th=[10552],
00:17:23.260 | 30.00th=[10945], 40.00th=[11600], 50.00th=[13435], 60.00th=[15926],
00:17:23.260 | 70.00th=[16712], 80.00th=[18220], 90.00th=[23725], 95.00th=[27919],
00:17:23.260 | 99.00th=[37487], 99.50th=[45876], 99.90th=[45876], 99.95th=[45876],
00:17:23.260 | 99.99th=[47449]
00:17:23.260 write: IOPS=4021, BW=15.7MiB/s (16.5MB/s)(15.9MiB/1012msec); 0 zone resets
00:17:23.260 slat (usec): min=2, max=16345, avg=130.55, stdev=761.68
00:17:23.260 clat (usec): min=1603, max=46044, avg=17945.79, stdev=6724.45
00:17:23.260 lat (usec): min=1614, max=46050, avg=18076.34, stdev=6767.63
00:17:23.260 clat percentiles (usec):
00:17:23.260 | 1.00th=[ 6390], 5.00th=[ 7963], 10.00th=[ 8586], 20.00th=[12913],
00:17:23.260 | 30.00th=[16188], 40.00th=[16712], 50.00th=[17171], 60.00th=[17695],
00:17:23.260 | 70.00th=[18220], 80.00th=[23200], 90.00th=[26608], 95.00th=[30540],
00:17:23.260 | 99.00th=[38011], 99.50th=[42730], 99.90th=[45876], 99.95th=[45876],
00:17:23.260 | 99.99th=[45876]
00:17:23.260 bw ( KiB/s): min=15152, max=16384, per=25.97%, avg=15768.00, stdev=871.16, samples=2
00:17:23.260 iops : min= 3788, max= 4096, avg=3942.00, stdev=217.79, samples=2
00:17:23.260 lat (msec) : 2=0.12%, 4=0.08%, 10=10.48%, 20=69.30%, 50=20.03%
00:17:23.260 cpu : usr=3.66%, sys=4.45%, ctx=402, majf=0, minf=1
00:17:23.260 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2%
00:17:23.260 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:17:23.260 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:17:23.260 issued rwts: total=3584,4070,0,0 short=0,0,0,0 dropped=0,0,0,0
00:17:23.260 latency : target=0, window=0, percentile=100.00%, depth=128
00:17:23.260
00:17:23.260 Run status group 0 (all jobs):
00:17:23.260 READ: bw=54.4MiB/s (57.1MB/s), 10.0MiB/s-20.0MiB/s (10.5MB/s-20.9MB/s), io=57.0MiB (59.8MB), run=1002-1047msec
00:17:23.260 WRITE: bw=59.3MiB/s (62.2MB/s), 11.5MiB/s-20.1MiB/s (12.0MB/s-21.1MB/s), io=62.1MiB (65.1MB), run=1002-1047msec
00:17:23.260
00:17:23.260 Disk stats (read/write):
00:17:23.260 nvme0n1: ios=2596/2631, merge=0/0, ticks=56569/45599, in_queue=102168, util=96.69%
00:17:23.260 nvme0n2: ios=3823/4096, merge=0/0, ticks=21480/30992, in_queue=52472, util=97.84%
00:17:23.260 nvme0n3: ios=3072/3151, merge=0/0, ticks=46635/54336, in_queue=100971, util=98.48%
00:17:23.260 nvme0n4: ios=3072/3345, merge=0/0, ticks=48173/57591, in_queue=105764, util=89.72%
00:17:23.260 15:58:52 nvmf_tcp.nvmf_fio_target -- target/fio.sh@55 -- # sync
00:17:23.260 15:58:52 nvmf_tcp.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=3753535
00:17:23.260 15:58:52 nvmf_tcp.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10
nvmf_tcp.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:17:23.260 [global] 00:17:23.260 thread=1 00:17:23.260 invalidate=1 00:17:23.260 rw=read 00:17:23.260 time_based=1 00:17:23.260 runtime=10 00:17:23.260 ioengine=libaio 00:17:23.260 direct=1 00:17:23.260 bs=4096 00:17:23.260 iodepth=1 00:17:23.260 norandommap=1 00:17:23.260 numjobs=1 00:17:23.260 00:17:23.260 [job0] 00:17:23.260 filename=/dev/nvme0n1 00:17:23.260 [job1] 00:17:23.260 filename=/dev/nvme0n2 00:17:23.260 [job2] 00:17:23.260 filename=/dev/nvme0n3 00:17:23.260 [job3] 00:17:23.260 filename=/dev/nvme0n4 00:17:23.260 Could not set queue depth (nvme0n1) 00:17:23.260 Could not set queue depth (nvme0n2) 00:17:23.260 Could not set queue depth (nvme0n3) 00:17:23.260 Could not set queue depth (nvme0n4) 00:17:23.517 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:23.517 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:23.517 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:23.517 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:23.517 fio-3.35 00:17:23.517 Starting 4 threads 00:17:26.789 15:58:55 nvmf_tcp.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:17:26.789 15:58:55 nvmf_tcp.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:17:26.789 fio: io_u error on file /dev/nvme0n4: Remote I/O error: read offset=1273856, buflen=4096 00:17:26.789 fio: pid=3753773, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:17:26.789 15:58:55 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:17:26.789 15:58:55 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:17:26.789 fio: io_u error on file /dev/nvme0n3: Remote I/O error: read offset=290816, buflen=4096 00:17:26.789 fio: pid=3753767, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:17:26.789 15:58:55 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:17:26.789 15:58:55 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:17:26.789 fio: io_u error on file /dev/nvme0n1: Remote I/O error: read offset=1662976, buflen=4096 00:17:26.789 fio: pid=3753730, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:17:27.046 fio: io_u error on file /dev/nvme0n2: Remote I/O error: read offset=48087040, buflen=4096 00:17:27.046 fio: pid=3753746, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:17:27.046 15:58:55 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:17:27.046 15:58:55 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:17:27.046 00:17:27.046 job0: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=3753730: Mon Jul 15 15:58:55 2024 00:17:27.046 read: IOPS=130, BW=520KiB/s 
(532kB/s)(1624KiB/3126msec) 00:17:27.046 slat (nsec): min=6176, max=77811, avg=9896.57, stdev=7335.91 00:17:27.046 clat (usec): min=265, max=41931, avg=7632.25, stdev=15639.75 00:17:27.046 lat (usec): min=272, max=41953, avg=7642.12, stdev=15645.38 00:17:27.046 clat percentiles (usec): 00:17:27.046 | 1.00th=[ 269], 5.00th=[ 302], 10.00th=[ 306], 20.00th=[ 310], 00:17:27.046 | 30.00th=[ 314], 40.00th=[ 314], 50.00th=[ 318], 60.00th=[ 322], 00:17:27.046 | 70.00th=[ 334], 80.00th=[ 355], 90.00th=[41157], 95.00th=[41157], 00:17:27.046 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41681], 99.95th=[41681], 00:17:27.046 | 99.99th=[41681] 00:17:27.046 bw ( KiB/s): min= 96, max= 2728, per=3.54%, avg=538.00, stdev=1072.88, samples=6 00:17:27.046 iops : min= 24, max= 682, avg=134.50, stdev=268.22, samples=6 00:17:27.046 lat (usec) : 500=81.57%, 750=0.25% 00:17:27.046 lat (msec) : 50=17.94% 00:17:27.046 cpu : usr=0.03%, sys=0.13%, ctx=409, majf=0, minf=1 00:17:27.046 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:27.046 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:27.046 complete : 0=0.2%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:27.046 issued rwts: total=407,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:27.046 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:27.046 job1: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=3753746: Mon Jul 15 15:58:55 2024 00:17:27.046 read: IOPS=3561, BW=13.9MiB/s (14.6MB/s)(45.9MiB/3297msec) 00:17:27.046 slat (usec): min=6, max=15785, avg=11.69, stdev=251.97 00:17:27.046 clat (usec): min=219, max=3231, avg=266.18, stdev=51.46 00:17:27.046 lat (usec): min=226, max=16229, avg=277.87, stdev=260.09 00:17:27.046 clat percentiles (usec): 00:17:27.046 | 1.00th=[ 229], 5.00th=[ 235], 10.00th=[ 239], 20.00th=[ 243], 00:17:27.046 | 30.00th=[ 247], 40.00th=[ 251], 50.00th=[ 255], 60.00th=[ 260], 00:17:27.046 | 70.00th=[ 265], 80.00th=[ 273], 90.00th=[ 297], 95.00th=[ 396], 00:17:27.046 | 99.00th=[ 433], 99.50th=[ 437], 99.90th=[ 465], 99.95th=[ 510], 00:17:27.046 | 99.99th=[ 1647] 00:17:27.046 bw ( KiB/s): min=12672, max=15512, per=94.87%, avg=14419.33, stdev=1364.18, samples=6 00:17:27.046 iops : min= 3168, max= 3878, avg=3604.83, stdev=341.04, samples=6 00:17:27.046 lat (usec) : 250=39.99%, 500=59.94%, 750=0.05% 00:17:27.046 lat (msec) : 2=0.01%, 4=0.01% 00:17:27.046 cpu : usr=1.00%, sys=3.03%, ctx=11745, majf=0, minf=1 00:17:27.046 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:27.046 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:27.046 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:27.046 issued rwts: total=11741,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:27.046 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:27.046 job2: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=3753767: Mon Jul 15 15:58:55 2024 00:17:27.046 read: IOPS=24, BW=97.5KiB/s (99.8kB/s)(284KiB/2913msec) 00:17:27.046 slat (usec): min=9, max=15929, avg=242.87, stdev=1874.68 00:17:27.046 clat (usec): min=438, max=41994, avg=40481.92, stdev=4827.98 00:17:27.046 lat (usec): min=473, max=57040, avg=40727.97, stdev=5210.05 00:17:27.046 clat percentiles (usec): 00:17:27.046 | 1.00th=[ 437], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:17:27.046 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 
00:17:27.046 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41681], 00:17:27.046 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:17:27.046 | 99.99th=[42206] 00:17:27.046 bw ( KiB/s): min= 96, max= 104, per=0.64%, avg=97.60, stdev= 3.58, samples=5 00:17:27.046 iops : min= 24, max= 26, avg=24.40, stdev= 0.89, samples=5 00:17:27.046 lat (usec) : 500=1.39% 00:17:27.046 lat (msec) : 50=97.22% 00:17:27.046 cpu : usr=0.00%, sys=0.14%, ctx=74, majf=0, minf=1 00:17:27.046 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:27.046 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:27.046 complete : 0=1.4%, 4=98.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:27.046 issued rwts: total=72,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:27.046 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:27.046 job3: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=3753773: Mon Jul 15 15:58:55 2024 00:17:27.046 read: IOPS=114, BW=455KiB/s (466kB/s)(1244KiB/2733msec) 00:17:27.046 slat (nsec): min=6071, max=33362, avg=10272.91, stdev=6375.38 00:17:27.046 clat (usec): min=280, max=41982, avg=8708.43, stdev=16490.50 00:17:27.046 lat (usec): min=287, max=42006, avg=8718.66, stdev=16496.37 00:17:27.046 clat percentiles (usec): 00:17:27.046 | 1.00th=[ 289], 5.00th=[ 306], 10.00th=[ 310], 20.00th=[ 314], 00:17:27.046 | 30.00th=[ 318], 40.00th=[ 322], 50.00th=[ 326], 60.00th=[ 330], 00:17:27.046 | 70.00th=[ 347], 80.00th=[40633], 90.00th=[41157], 95.00th=[41157], 00:17:27.046 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:17:27.046 | 99.99th=[42206] 00:17:27.046 bw ( KiB/s): min= 96, max= 2048, per=3.21%, avg=488.00, stdev=872.07, samples=5 00:17:27.046 iops : min= 24, max= 512, avg=122.00, stdev=218.02, samples=5 00:17:27.046 lat (usec) : 500=78.53%, 750=0.64% 00:17:27.046 lat (msec) : 50=20.51% 00:17:27.046 cpu : usr=0.00%, sys=0.22%, ctx=312, majf=0, minf=2 00:17:27.046 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:27.046 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:27.046 complete : 0=0.3%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:27.046 issued rwts: total=312,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:27.046 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:27.046 00:17:27.046 Run status group 0 (all jobs): 00:17:27.046 READ: bw=14.8MiB/s (15.6MB/s), 97.5KiB/s-13.9MiB/s (99.8kB/s-14.6MB/s), io=48.9MiB (51.3MB), run=2733-3297msec 00:17:27.046 00:17:27.046 Disk stats (read/write): 00:17:27.046 nvme0n1: ios=406/0, merge=0/0, ticks=3097/0, in_queue=3097, util=95.69% 00:17:27.046 nvme0n2: ios=11183/0, merge=0/0, ticks=2938/0, in_queue=2938, util=95.27% 00:17:27.046 nvme0n3: ios=70/0, merge=0/0, ticks=2835/0, in_queue=2835, util=96.01% 00:17:27.046 nvme0n4: ios=308/0, merge=0/0, ticks=2587/0, in_queue=2587, util=96.41% 00:17:27.303 15:58:56 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:17:27.303 15:58:56 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:17:27.303 15:58:56 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:17:27.303 15:58:56 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:17:27.559 15:58:56 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:17:27.559 15:58:56 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:17:27.815 15:58:56 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:17:27.815 15:58:56 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:17:28.071 15:58:56 nvmf_tcp.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:17:28.071 15:58:56 nvmf_tcp.nvmf_fio_target -- target/fio.sh@70 -- # wait 3753535 00:17:28.071 15:58:56 nvmf_tcp.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:17:28.071 15:58:56 nvmf_tcp.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:28.071 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:28.071 15:58:56 nvmf_tcp.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:17:28.071 15:58:56 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1219 -- # local i=0 00:17:28.071 15:58:56 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:17:28.071 15:58:56 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:28.071 15:58:56 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:17:28.071 15:58:56 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:28.071 15:58:56 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1231 -- # return 0 00:17:28.071 15:58:56 nvmf_tcp.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:17:28.071 15:58:56 nvmf_tcp.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:17:28.071 nvmf hotplug test: fio failed as expected 00:17:28.071 15:58:56 nvmf_tcp.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:28.326 15:58:57 nvmf_tcp.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:17:28.326 15:58:57 nvmf_tcp.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:17:28.326 15:58:57 nvmf_tcp.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:17:28.326 15:58:57 nvmf_tcp.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:17:28.326 15:58:57 nvmf_tcp.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:17:28.326 15:58:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:28.326 15:58:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@117 -- # sync 00:17:28.326 15:58:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:28.326 15:58:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@120 -- # set +e 00:17:28.326 15:58:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:28.326 15:58:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:28.326 rmmod nvme_tcp 00:17:28.326 rmmod nvme_fabrics 00:17:28.326 rmmod nvme_keyring 00:17:28.326 15:58:57 nvmf_tcp.nvmf_fio_target -- 
nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:28.326 15:58:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@124 -- # set -e 00:17:28.326 15:58:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@125 -- # return 0 00:17:28.326 15:58:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@489 -- # '[' -n 3750822 ']' 00:17:28.326 15:58:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@490 -- # killprocess 3750822 00:17:28.326 15:58:57 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@948 -- # '[' -z 3750822 ']' 00:17:28.326 15:58:57 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@952 -- # kill -0 3750822 00:17:28.326 15:58:57 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@953 -- # uname 00:17:28.326 15:58:57 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:28.326 15:58:57 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3750822 00:17:28.326 15:58:57 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:17:28.327 15:58:57 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:17:28.327 15:58:57 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3750822' 00:17:28.327 killing process with pid 3750822 00:17:28.327 15:58:57 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@967 -- # kill 3750822 00:17:28.327 15:58:57 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@972 -- # wait 3750822 00:17:28.583 15:58:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:28.583 15:58:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:28.583 15:58:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:28.583 15:58:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:28.583 15:58:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:28.583 15:58:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:28.583 15:58:57 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:28.583 15:58:57 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:31.109 15:58:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:31.109 00:17:31.109 real 0m26.620s 00:17:31.109 user 1m47.220s 00:17:31.109 sys 0m7.850s 00:17:31.109 15:58:59 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:31.109 15:58:59 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:17:31.109 ************************************ 00:17:31.109 END TEST nvmf_fio_target 00:17:31.109 ************************************ 00:17:31.109 15:58:59 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:17:31.109 15:58:59 nvmf_tcp -- nvmf/nvmf.sh@56 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:17:31.109 15:58:59 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:17:31.109 15:58:59 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:31.109 15:58:59 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:31.109 ************************************ 00:17:31.109 START TEST nvmf_bdevio 00:17:31.109 ************************************ 00:17:31.109 15:58:59 nvmf_tcp.nvmf_bdevio -- 
common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:17:31.109 * Looking for test storage... 00:17:31.109 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:31.109 15:58:59 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:31.109 15:58:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:17:31.109 15:58:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:31.109 15:58:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:31.109 15:58:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:31.109 15:58:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:31.109 15:58:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:31.109 15:58:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:31.109 15:58:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:31.109 15:58:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:31.109 15:58:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:31.109 15:58:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:31.109 15:58:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:31.109 15:58:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:17:31.109 15:58:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:31.109 15:58:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:31.109 15:58:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:31.109 15:58:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:31.109 15:58:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:31.109 15:58:59 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:31.109 15:58:59 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:31.109 15:58:59 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:31.109 15:58:59 nvmf_tcp.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:31.109 15:58:59 nvmf_tcp.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:31.109 15:58:59 nvmf_tcp.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:31.109 15:58:59 nvmf_tcp.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:17:31.109 15:58:59 nvmf_tcp.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:31.109 15:58:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@47 -- # : 0 00:17:31.109 15:58:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:31.109 15:58:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:31.109 15:58:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:31.109 15:58:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:31.109 15:58:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:31.109 15:58:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:31.109 15:58:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:31.109 15:58:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:31.109 15:58:59 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:31.109 15:58:59 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:31.109 15:58:59 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:17:31.109 15:58:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:31.109 15:58:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:31.109 15:58:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:31.109 15:58:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:31.109 15:58:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:31.109 15:58:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:31.109 15:58:59 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval 
'_remove_spdk_ns 14> /dev/null' 00:17:31.109 15:58:59 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:31.109 15:58:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:17:31.109 15:58:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:17:31.109 15:58:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@285 -- # xtrace_disable 00:17:31.109 15:58:59 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:17:36.376 15:59:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:36.376 15:59:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@291 -- # pci_devs=() 00:17:36.376 15:59:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:36.376 15:59:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:36.376 15:59:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:36.376 15:59:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:36.376 15:59:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:36.376 15:59:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@295 -- # net_devs=() 00:17:36.376 15:59:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:36.376 15:59:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@296 -- # e810=() 00:17:36.376 15:59:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@296 -- # local -ga e810 00:17:36.376 15:59:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@297 -- # x722=() 00:17:36.376 15:59:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@297 -- # local -ga x722 00:17:36.376 15:59:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@298 -- # mlx=() 00:17:36.376 15:59:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@298 -- # local -ga mlx 00:17:36.376 15:59:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:36.376 15:59:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:36.376 15:59:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:36.376 15:59:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:36.376 15:59:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:36.376 15:59:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:36.376 15:59:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:36.376 15:59:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:36.376 15:59:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:36.376 15:59:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:36.376 15:59:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:36.376 15:59:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:36.376 15:59:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:36.376 15:59:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:36.376 15:59:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:36.376 15:59:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:36.376 15:59:04 nvmf_tcp.nvmf_bdevio -- 
nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:36.376 15:59:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:36.376 15:59:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:17:36.376 Found 0000:86:00.0 (0x8086 - 0x159b) 00:17:36.376 15:59:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:36.376 15:59:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:36.376 15:59:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:36.376 15:59:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:36.376 15:59:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:36.377 15:59:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:36.377 15:59:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:17:36.377 Found 0000:86:00.1 (0x8086 - 0x159b) 00:17:36.377 15:59:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:36.377 15:59:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:36.377 15:59:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:36.377 15:59:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:36.377 15:59:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:36.377 15:59:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:36.377 15:59:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:36.377 15:59:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:36.377 15:59:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:36.377 15:59:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:36.377 15:59:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:36.377 15:59:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:36.377 15:59:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:36.377 15:59:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:36.377 15:59:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:36.377 15:59:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:17:36.377 Found net devices under 0000:86:00.0: cvl_0_0 00:17:36.377 15:59:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:36.377 15:59:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:36.377 15:59:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:36.377 15:59:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:36.377 15:59:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:36.377 15:59:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:36.377 15:59:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:36.377 15:59:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:36.377 15:59:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:17:36.377 
Found net devices under 0000:86:00.1: cvl_0_1 00:17:36.377 15:59:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:36.377 15:59:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:17:36.377 15:59:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # is_hw=yes 00:17:36.377 15:59:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:17:36.377 15:59:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:17:36.377 15:59:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:17:36.377 15:59:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:36.377 15:59:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:36.377 15:59:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:36.377 15:59:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:36.377 15:59:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:36.377 15:59:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:36.377 15:59:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:36.377 15:59:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:36.377 15:59:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:36.377 15:59:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:36.377 15:59:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:36.377 15:59:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:36.377 15:59:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:36.377 15:59:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:36.377 15:59:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:36.377 15:59:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:36.377 15:59:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:36.377 15:59:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:36.377 15:59:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:36.377 15:59:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:36.377 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:36.377 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.188 ms 00:17:36.377 00:17:36.377 --- 10.0.0.2 ping statistics --- 00:17:36.377 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:36.377 rtt min/avg/max/mdev = 0.188/0.188/0.188/0.000 ms 00:17:36.377 15:59:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:36.377 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:36.377 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.183 ms 00:17:36.377 00:17:36.377 --- 10.0.0.1 ping statistics --- 00:17:36.377 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:36.377 rtt min/avg/max/mdev = 0.183/0.183/0.183/0.000 ms 00:17:36.377 15:59:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:36.377 15:59:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@422 -- # return 0 00:17:36.377 15:59:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:36.377 15:59:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:36.377 15:59:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:36.377 15:59:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:36.377 15:59:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:36.377 15:59:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:36.377 15:59:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:36.377 15:59:05 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:17:36.377 15:59:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:36.377 15:59:05 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:36.377 15:59:05 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:17:36.377 15:59:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@481 -- # nvmfpid=3758023 00:17:36.377 15:59:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@482 -- # waitforlisten 3758023 00:17:36.377 15:59:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:17:36.377 15:59:05 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@829 -- # '[' -z 3758023 ']' 00:17:36.377 15:59:05 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:36.377 15:59:05 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:36.377 15:59:05 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:36.377 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:36.377 15:59:05 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:36.377 15:59:05 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:17:36.377 [2024-07-15 15:59:05.097565] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 00:17:36.377 [2024-07-15 15:59:05.097609] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:36.377 EAL: No free 2048 kB hugepages reported on node 1 00:17:36.377 [2024-07-15 15:59:05.156356] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:36.377 [2024-07-15 15:59:05.236141] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:36.377 [2024-07-15 15:59:05.236176] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:17:36.377 [2024-07-15 15:59:05.236183] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:36.377 [2024-07-15 15:59:05.236189] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:36.377 [2024-07-15 15:59:05.236194] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:36.377 [2024-07-15 15:59:05.236256] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:17:36.377 [2024-07-15 15:59:05.236363] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:17:36.377 [2024-07-15 15:59:05.236397] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:17:36.377 [2024-07-15 15:59:05.236398] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:17:37.309 15:59:05 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:37.309 15:59:05 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@862 -- # return 0 00:17:37.309 15:59:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:37.309 15:59:05 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:37.309 15:59:05 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:17:37.309 15:59:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:37.309 15:59:05 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:37.309 15:59:05 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:37.309 15:59:05 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:17:37.309 [2024-07-15 15:59:05.942203] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:37.309 15:59:05 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:37.309 15:59:05 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:17:37.309 15:59:05 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:37.309 15:59:05 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:17:37.309 Malloc0 00:17:37.309 15:59:05 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:37.309 15:59:05 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:17:37.309 15:59:05 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:37.309 15:59:05 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:17:37.309 15:59:05 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:37.309 15:59:05 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:37.309 15:59:05 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:37.309 15:59:05 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:17:37.309 15:59:05 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:37.309 15:59:05 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:37.309 15:59:05 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:37.309 15:59:05 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 
00:17:37.309 [2024-07-15 15:59:05.985523] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:37.309 15:59:05 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:37.309 15:59:05 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:17:37.309 15:59:05 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:17:37.309 15:59:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@532 -- # config=() 00:17:37.309 15:59:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@532 -- # local subsystem config 00:17:37.309 15:59:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:17:37.309 15:59:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:17:37.309 { 00:17:37.309 "params": { 00:17:37.309 "name": "Nvme$subsystem", 00:17:37.309 "trtype": "$TEST_TRANSPORT", 00:17:37.309 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:37.309 "adrfam": "ipv4", 00:17:37.309 "trsvcid": "$NVMF_PORT", 00:17:37.309 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:37.309 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:37.309 "hdgst": ${hdgst:-false}, 00:17:37.309 "ddgst": ${ddgst:-false} 00:17:37.309 }, 00:17:37.309 "method": "bdev_nvme_attach_controller" 00:17:37.309 } 00:17:37.309 EOF 00:17:37.309 )") 00:17:37.309 15:59:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@554 -- # cat 00:17:37.309 15:59:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@556 -- # jq . 00:17:37.309 15:59:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@557 -- # IFS=, 00:17:37.309 15:59:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:17:37.309 "params": { 00:17:37.309 "name": "Nvme1", 00:17:37.309 "trtype": "tcp", 00:17:37.310 "traddr": "10.0.0.2", 00:17:37.310 "adrfam": "ipv4", 00:17:37.310 "trsvcid": "4420", 00:17:37.310 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:37.310 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:37.310 "hdgst": false, 00:17:37.310 "ddgst": false 00:17:37.310 }, 00:17:37.310 "method": "bdev_nvme_attach_controller" 00:17:37.310 }' 00:17:37.310 [2024-07-15 15:59:06.034738] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 
00:17:37.310 [2024-07-15 15:59:06.034784] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3758162 ] 00:17:37.310 EAL: No free 2048 kB hugepages reported on node 1 00:17:37.310 [2024-07-15 15:59:06.089755] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:17:37.310 [2024-07-15 15:59:06.165184] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:37.310 [2024-07-15 15:59:06.165204] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:17:37.310 [2024-07-15 15:59:06.165206] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:37.566 I/O targets: 00:17:37.566 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:17:37.566 00:17:37.566 00:17:37.566 CUnit - A unit testing framework for C - Version 2.1-3 00:17:37.566 http://cunit.sourceforge.net/ 00:17:37.566 00:17:37.566 00:17:37.566 Suite: bdevio tests on: Nvme1n1 00:17:37.566 Test: blockdev write read block ...passed 00:17:37.566 Test: blockdev write zeroes read block ...passed 00:17:37.566 Test: blockdev write zeroes read no split ...passed 00:17:37.822 Test: blockdev write zeroes read split ...passed 00:17:37.822 Test: blockdev write zeroes read split partial ...passed 00:17:37.822 Test: blockdev reset ...[2024-07-15 15:59:06.556895] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:17:37.822 [2024-07-15 15:59:06.556961] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa056d0 (9): Bad file descriptor 00:17:37.822 [2024-07-15 15:59:06.577465] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:17:37.822 passed 00:17:37.822 Test: blockdev write read 8 blocks ...passed 00:17:37.822 Test: blockdev write read size > 128k ...passed 00:17:37.822 Test: blockdev write read invalid size ...passed 00:17:37.822 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:17:37.822 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:17:37.822 Test: blockdev write read max offset ...passed 00:17:37.822 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:17:37.822 Test: blockdev writev readv 8 blocks ...passed 00:17:38.081 Test: blockdev writev readv 30 x 1block ...passed 00:17:38.081 Test: blockdev writev readv block ...passed 00:17:38.081 Test: blockdev writev readv size > 128k ...passed 00:17:38.081 Test: blockdev writev readv size > 128k in two iovs ...passed 00:17:38.081 Test: blockdev comparev and writev ...[2024-07-15 15:59:06.832320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:38.081 [2024-07-15 15:59:06.832344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:38.081 [2024-07-15 15:59:06.832358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:38.081 [2024-07-15 15:59:06.832365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:38.081 [2024-07-15 15:59:06.832634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:38.081 [2024-07-15 15:59:06.832645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:17:38.081 [2024-07-15 15:59:06.832656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:38.081 [2024-07-15 15:59:06.832663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:17:38.081 [2024-07-15 15:59:06.832923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:38.081 [2024-07-15 15:59:06.832932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:17:38.081 [2024-07-15 15:59:06.832943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:38.081 [2024-07-15 15:59:06.832950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:17:38.081 [2024-07-15 15:59:06.833205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:38.081 [2024-07-15 15:59:06.833215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:17:38.081 [2024-07-15 15:59:06.833230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:38.081 [2024-07-15 15:59:06.833237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:17:38.081 passed 00:17:38.081 Test: blockdev nvme passthru rw ...passed 00:17:38.081 Test: blockdev nvme passthru vendor specific ...[2024-07-15 15:59:06.917607] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:38.081 [2024-07-15 15:59:06.917621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:17:38.081 [2024-07-15 15:59:06.917761] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:38.081 [2024-07-15 15:59:06.917770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:17:38.081 [2024-07-15 15:59:06.917903] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:38.081 [2024-07-15 15:59:06.917912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:17:38.081 [2024-07-15 15:59:06.918043] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:38.081 [2024-07-15 15:59:06.918052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:17:38.081 passed 00:17:38.081 Test: blockdev nvme admin passthru ...passed 00:17:38.081 Test: blockdev copy ...passed 00:17:38.081 00:17:38.081 Run Summary: Type Total Ran Passed Failed Inactive 00:17:38.081 suites 1 1 n/a 0 0 00:17:38.081 tests 23 23 23 0 0 00:17:38.081 asserts 152 152 152 0 n/a 00:17:38.081 00:17:38.081 Elapsed time = 1.241 seconds 00:17:38.388 15:59:07 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:38.388 15:59:07 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:38.388 15:59:07 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:17:38.388 15:59:07 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:38.388 15:59:07 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:17:38.388 15:59:07 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:17:38.388 15:59:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:38.388 15:59:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@117 -- # sync 00:17:38.388 15:59:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:38.388 15:59:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@120 -- # set +e 00:17:38.388 15:59:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:38.388 15:59:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:38.388 rmmod nvme_tcp 00:17:38.388 rmmod nvme_fabrics 00:17:38.388 rmmod nvme_keyring 00:17:38.388 15:59:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:38.388 15:59:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@124 -- # set -e 00:17:38.388 15:59:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@125 -- # return 0 00:17:38.388 15:59:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@489 -- # '[' -n 3758023 ']' 00:17:38.388 15:59:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@490 -- # killprocess 3758023 00:17:38.388 15:59:07 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@948 -- # '[' -z 
3758023 ']' 00:17:38.388 15:59:07 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@952 -- # kill -0 3758023 00:17:38.388 15:59:07 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@953 -- # uname 00:17:38.388 15:59:07 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:38.388 15:59:07 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3758023 00:17:38.388 15:59:07 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@954 -- # process_name=reactor_3 00:17:38.388 15:59:07 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@958 -- # '[' reactor_3 = sudo ']' 00:17:38.388 15:59:07 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3758023' 00:17:38.388 killing process with pid 3758023 00:17:38.388 15:59:07 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@967 -- # kill 3758023 00:17:38.388 15:59:07 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@972 -- # wait 3758023 00:17:38.645 15:59:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:38.645 15:59:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:38.645 15:59:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:38.645 15:59:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:38.645 15:59:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:38.645 15:59:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:38.645 15:59:07 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:38.645 15:59:07 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:41.171 15:59:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:41.171 00:17:41.171 real 0m9.970s 00:17:41.171 user 0m12.546s 00:17:41.171 sys 0m4.568s 00:17:41.171 15:59:09 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:41.171 15:59:09 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:17:41.171 ************************************ 00:17:41.171 END TEST nvmf_bdevio 00:17:41.171 ************************************ 00:17:41.171 15:59:09 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:17:41.171 15:59:09 nvmf_tcp -- nvmf/nvmf.sh@57 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:17:41.171 15:59:09 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:17:41.171 15:59:09 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:41.171 15:59:09 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:41.171 ************************************ 00:17:41.171 START TEST nvmf_auth_target 00:17:41.171 ************************************ 00:17:41.171 15:59:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:17:41.171 * Looking for test storage... 
00:17:41.171 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:41.171 15:59:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:41.171 15:59:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:17:41.171 15:59:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:41.171 15:59:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:41.171 15:59:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:41.171 15:59:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:41.171 15:59:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:41.171 15:59:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:41.172 15:59:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:41.172 15:59:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:41.172 15:59:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:41.172 15:59:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:41.172 15:59:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:41.172 15:59:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:17:41.172 15:59:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:41.172 15:59:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:41.172 15:59:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:41.172 15:59:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:41.172 15:59:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:41.172 15:59:09 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:41.172 15:59:09 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:41.172 15:59:09 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:41.172 15:59:09 nvmf_tcp.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:41.172 15:59:09 nvmf_tcp.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:41.172 15:59:09 nvmf_tcp.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:41.172 15:59:09 nvmf_tcp.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:17:41.172 15:59:09 nvmf_tcp.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:41.172 15:59:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@47 -- # : 0 00:17:41.172 15:59:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:41.172 15:59:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:41.172 15:59:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:41.172 15:59:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:41.172 15:59:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:41.172 15:59:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:41.172 15:59:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:41.172 15:59:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:41.172 15:59:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:17:41.172 15:59:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:17:41.172 15:59:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:17:41.172 15:59:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:41.172 15:59:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:17:41.172 15:59:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:17:41.172 15:59:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:17:41.172 15:59:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@59 -- # 
nvmftestinit 00:17:41.172 15:59:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:41.172 15:59:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:41.172 15:59:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:41.172 15:59:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:41.172 15:59:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:41.172 15:59:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:41.172 15:59:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:41.172 15:59:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:41.172 15:59:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:17:41.172 15:59:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:17:41.172 15:59:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@285 -- # xtrace_disable 00:17:41.172 15:59:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:46.432 15:59:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:46.432 15:59:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@291 -- # pci_devs=() 00:17:46.432 15:59:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:46.432 15:59:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:46.432 15:59:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:46.432 15:59:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:46.432 15:59:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:46.432 15:59:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@295 -- # net_devs=() 00:17:46.432 15:59:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:46.432 15:59:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@296 -- # e810=() 00:17:46.432 15:59:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@296 -- # local -ga e810 00:17:46.432 15:59:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@297 -- # x722=() 00:17:46.432 15:59:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@297 -- # local -ga x722 00:17:46.432 15:59:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@298 -- # mlx=() 00:17:46.432 15:59:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@298 -- # local -ga mlx 00:17:46.432 15:59:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:46.432 15:59:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:46.432 15:59:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:46.432 15:59:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:46.432 15:59:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:46.432 15:59:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:46.432 15:59:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:46.432 15:59:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:46.432 15:59:14 
nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:46.432 15:59:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:46.432 15:59:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:46.432 15:59:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:46.432 15:59:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:46.432 15:59:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:46.432 15:59:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:46.432 15:59:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:46.432 15:59:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:46.433 15:59:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:46.433 15:59:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:17:46.433 Found 0000:86:00.0 (0x8086 - 0x159b) 00:17:46.433 15:59:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:46.433 15:59:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:46.433 15:59:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:46.433 15:59:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:46.433 15:59:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:46.433 15:59:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:46.433 15:59:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:17:46.433 Found 0000:86:00.1 (0x8086 - 0x159b) 00:17:46.433 15:59:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:46.433 15:59:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:46.433 15:59:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:46.433 15:59:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:46.433 15:59:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:46.433 15:59:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:46.433 15:59:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:46.433 15:59:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:46.433 15:59:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:46.433 15:59:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:46.433 15:59:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:46.433 15:59:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:46.433 15:59:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:46.433 15:59:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:46.433 15:59:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:46.433 15:59:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: 
cvl_0_0' 00:17:46.433 Found net devices under 0000:86:00.0: cvl_0_0 00:17:46.433 15:59:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:46.433 15:59:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:46.433 15:59:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:46.433 15:59:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:46.433 15:59:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:46.433 15:59:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:46.433 15:59:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:46.433 15:59:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:46.433 15:59:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:17:46.433 Found net devices under 0000:86:00.1: cvl_0_1 00:17:46.433 15:59:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:46.433 15:59:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:17:46.433 15:59:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # is_hw=yes 00:17:46.433 15:59:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:17:46.433 15:59:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:17:46.433 15:59:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:17:46.433 15:59:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:46.433 15:59:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:46.433 15:59:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:46.433 15:59:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:46.433 15:59:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:46.433 15:59:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:46.433 15:59:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:46.433 15:59:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:46.433 15:59:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:46.433 15:59:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:46.433 15:59:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:46.433 15:59:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:46.433 15:59:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:46.433 15:59:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:46.433 15:59:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:46.433 15:59:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:46.433 15:59:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:46.433 15:59:14 nvmf_tcp.nvmf_auth_target 
-- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:46.433 15:59:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:46.433 15:59:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:46.433 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:46.433 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.175 ms 00:17:46.433 00:17:46.433 --- 10.0.0.2 ping statistics --- 00:17:46.433 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:46.433 rtt min/avg/max/mdev = 0.175/0.175/0.175/0.000 ms 00:17:46.433 15:59:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:46.433 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:46.433 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.235 ms 00:17:46.433 00:17:46.433 --- 10.0.0.1 ping statistics --- 00:17:46.433 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:46.433 rtt min/avg/max/mdev = 0.235/0.235/0.235/0.000 ms 00:17:46.433 15:59:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:46.433 15:59:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@422 -- # return 0 00:17:46.433 15:59:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:46.433 15:59:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:46.433 15:59:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:46.433 15:59:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:46.433 15:59:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:46.433 15:59:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:46.433 15:59:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:46.433 15:59:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@60 -- # nvmfappstart -L nvmf_auth 00:17:46.433 15:59:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:46.433 15:59:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:46.433 15:59:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:46.433 15:59:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=3761801 00:17:46.433 15:59:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 3761801 00:17:46.433 15:59:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 3761801 ']' 00:17:46.433 15:59:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:46.433 15:59:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:46.433 15:59:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:17:46.433 15:59:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:17:46.433 15:59:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:46.433 15:59:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:46.997 15:59:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:46.997 15:59:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:17:46.997 15:59:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:46.997 15:59:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:46.997 15:59:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:46.997 15:59:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:46.997 15:59:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@62 -- # hostpid=3761935 00:17:46.997 15:59:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@64 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:17:46.997 15:59:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:17:46.997 15:59:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key null 48 00:17:46.997 15:59:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:17:46.997 15:59:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:46.997 15:59:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:17:46.997 15:59:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=null 00:17:46.997 15:59:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:17:46.997 15:59:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:17:46.997 15:59:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=31ad5dd02530d6691fd64434b2039ac777093977951afaae 00:17:46.997 15:59:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:17:46.997 15:59:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.ECz 00:17:46.997 15:59:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 31ad5dd02530d6691fd64434b2039ac777093977951afaae 0 00:17:46.997 15:59:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 31ad5dd02530d6691fd64434b2039ac777093977951afaae 0 00:17:46.997 15:59:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:17:46.997 15:59:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:17:46.997 15:59:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=31ad5dd02530d6691fd64434b2039ac777093977951afaae 00:17:46.997 15:59:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=0 00:17:46.997 15:59:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:17:46.997 15:59:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.ECz 00:17:46.997 15:59:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.ECz 00:17:46.997 15:59:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # 
keys[0]=/tmp/spdk.key-null.ECz 00:17:46.997 15:59:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key sha512 64 00:17:46.997 15:59:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:17:46.997 15:59:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:46.997 15:59:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:17:46.997 15:59:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:17:46.997 15:59:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:17:46.997 15:59:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:17:46.997 15:59:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=0393d96890f377c31c423dce70e739f055cf4b699a76fb8ec03b55f348432ebb 00:17:46.997 15:59:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:17:46.997 15:59:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.N9s 00:17:46.997 15:59:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 0393d96890f377c31c423dce70e739f055cf4b699a76fb8ec03b55f348432ebb 3 00:17:46.997 15:59:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 0393d96890f377c31c423dce70e739f055cf4b699a76fb8ec03b55f348432ebb 3 00:17:46.997 15:59:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:17:46.997 15:59:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:17:46.997 15:59:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=0393d96890f377c31c423dce70e739f055cf4b699a76fb8ec03b55f348432ebb 00:17:46.997 15:59:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:17:46.997 15:59:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:17:46.997 15:59:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.N9s 00:17:46.997 15:59:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.N9s 00:17:46.997 15:59:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # ckeys[0]=/tmp/spdk.key-sha512.N9s 00:17:46.997 15:59:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha256 32 00:17:46.997 15:59:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:17:46.997 15:59:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:46.997 15:59:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:17:46.997 15:59:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:17:46.997 15:59:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:17:46.997 15:59:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:17:46.997 15:59:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=e287d895a7a170930db7a3fbc36e16aa 00:17:46.998 15:59:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:17:46.998 15:59:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.RX5 00:17:46.998 15:59:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key e287d895a7a170930db7a3fbc36e16aa 1 00:17:46.998 15:59:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 e287d895a7a170930db7a3fbc36e16aa 1 00:17:46.998 15:59:15 
nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:17:46.998 15:59:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:17:46.998 15:59:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=e287d895a7a170930db7a3fbc36e16aa 00:17:46.998 15:59:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:17:46.998 15:59:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:17:46.998 15:59:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.RX5 00:17:46.998 15:59:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.RX5 00:17:46.998 15:59:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # keys[1]=/tmp/spdk.key-sha256.RX5 00:17:46.998 15:59:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha384 48 00:17:46.998 15:59:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:17:46.998 15:59:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:46.998 15:59:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:17:46.998 15:59:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:17:46.998 15:59:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:17:46.998 15:59:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:17:46.998 15:59:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=016ae44aad9e4ae4e9315415768f0f4fcec3b75a1428b1f0 00:17:46.998 15:59:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:17:46.998 15:59:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.Vmw 00:17:46.998 15:59:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 016ae44aad9e4ae4e9315415768f0f4fcec3b75a1428b1f0 2 00:17:46.998 15:59:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 016ae44aad9e4ae4e9315415768f0f4fcec3b75a1428b1f0 2 00:17:46.998 15:59:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:17:46.998 15:59:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:17:46.998 15:59:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=016ae44aad9e4ae4e9315415768f0f4fcec3b75a1428b1f0 00:17:46.998 15:59:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:17:46.998 15:59:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:17:46.998 15:59:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.Vmw 00:17:46.998 15:59:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.Vmw 00:17:46.998 15:59:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # ckeys[1]=/tmp/spdk.key-sha384.Vmw 00:17:46.998 15:59:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha384 48 00:17:46.998 15:59:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:17:46.998 15:59:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:46.998 15:59:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:17:46.998 15:59:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:17:46.998 15:59:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:17:46.998 15:59:15 
nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:17:46.998 15:59:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=979401f9e25bb9d682ae1de7d4aef7d4be26b4cace9d0002 00:17:46.998 15:59:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:17:47.255 15:59:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.6jv 00:17:47.255 15:59:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 979401f9e25bb9d682ae1de7d4aef7d4be26b4cace9d0002 2 00:17:47.255 15:59:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 979401f9e25bb9d682ae1de7d4aef7d4be26b4cace9d0002 2 00:17:47.255 15:59:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:17:47.255 15:59:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:17:47.255 15:59:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=979401f9e25bb9d682ae1de7d4aef7d4be26b4cace9d0002 00:17:47.255 15:59:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:17:47.255 15:59:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:17:47.255 15:59:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.6jv 00:17:47.255 15:59:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.6jv 00:17:47.255 15:59:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # keys[2]=/tmp/spdk.key-sha384.6jv 00:17:47.255 15:59:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha256 32 00:17:47.255 15:59:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:17:47.255 15:59:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:47.255 15:59:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:17:47.255 15:59:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:17:47.255 15:59:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:17:47.255 15:59:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:17:47.255 15:59:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=10a188246391339463e4131f0cf722a8 00:17:47.255 15:59:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:17:47.255 15:59:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.qru 00:17:47.255 15:59:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 10a188246391339463e4131f0cf722a8 1 00:17:47.255 15:59:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 10a188246391339463e4131f0cf722a8 1 00:17:47.255 15:59:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:17:47.256 15:59:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:17:47.256 15:59:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=10a188246391339463e4131f0cf722a8 00:17:47.256 15:59:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:17:47.256 15:59:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:17:47.256 15:59:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.qru 00:17:47.256 15:59:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.qru 00:17:47.256 15:59:16 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@69 -- # ckeys[2]=/tmp/spdk.key-sha256.qru 00:17:47.256 15:59:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # gen_dhchap_key sha512 64 00:17:47.256 15:59:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:17:47.256 15:59:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:47.256 15:59:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:17:47.256 15:59:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:17:47.256 15:59:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:17:47.256 15:59:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:17:47.256 15:59:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=e49f956e8959f56e8ae4b59178da1dd9b908898f48fd34fe9264450814d71ced 00:17:47.256 15:59:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:17:47.256 15:59:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.tnr 00:17:47.256 15:59:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key e49f956e8959f56e8ae4b59178da1dd9b908898f48fd34fe9264450814d71ced 3 00:17:47.256 15:59:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 e49f956e8959f56e8ae4b59178da1dd9b908898f48fd34fe9264450814d71ced 3 00:17:47.256 15:59:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:17:47.256 15:59:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:17:47.256 15:59:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=e49f956e8959f56e8ae4b59178da1dd9b908898f48fd34fe9264450814d71ced 00:17:47.256 15:59:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:17:47.256 15:59:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:17:47.256 15:59:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.tnr 00:17:47.256 15:59:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.tnr 00:17:47.256 15:59:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # keys[3]=/tmp/spdk.key-sha512.tnr 00:17:47.256 15:59:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # ckeys[3]= 00:17:47.256 15:59:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@72 -- # waitforlisten 3761801 00:17:47.256 15:59:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 3761801 ']' 00:17:47.256 15:59:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:47.256 15:59:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:47.256 15:59:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:47.256 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:17:47.256 15:59:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:47.256 15:59:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:47.513 15:59:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:47.513 15:59:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:17:47.513 15:59:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@73 -- # waitforlisten 3761935 /var/tmp/host.sock 00:17:47.513 15:59:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 3761935 ']' 00:17:47.513 15:59:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/host.sock 00:17:47.513 15:59:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:47.513 15:59:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:17:47.513 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:17:47.513 15:59:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:47.513 15:59:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:47.770 15:59:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:47.770 15:59:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:17:47.770 15:59:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 00:17:47.770 15:59:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:47.770 15:59:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:47.770 15:59:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:47.770 15:59:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:17:47.770 15:59:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.ECz 00:17:47.770 15:59:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:47.770 15:59:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:47.770 15:59:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:47.770 15:59:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.ECz 00:17:47.770 15:59:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.ECz 00:17:47.770 15:59:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha512.N9s ]] 00:17:47.770 15:59:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.N9s 00:17:47.770 15:59:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:47.770 15:59:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:47.770 15:59:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:47.770 15:59:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.N9s 00:17:47.770 15:59:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.N9s 00:17:48.027 15:59:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:17:48.027 15:59:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.RX5 00:17:48.027 15:59:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:48.027 15:59:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:48.027 15:59:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:48.027 15:59:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.RX5 00:17:48.027 15:59:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.RX5 00:17:48.284 15:59:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha384.Vmw ]] 00:17:48.284 15:59:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.Vmw 00:17:48.284 15:59:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:48.284 15:59:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:48.284 15:59:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:48.284 15:59:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.Vmw 00:17:48.284 15:59:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.Vmw 00:17:48.284 15:59:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:17:48.284 15:59:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.6jv 00:17:48.284 15:59:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:48.284 15:59:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:48.541 15:59:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:48.541 15:59:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.6jv 00:17:48.541 15:59:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.6jv 00:17:48.541 15:59:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha256.qru ]] 00:17:48.541 15:59:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.qru 00:17:48.541 15:59:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:48.541 15:59:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:48.541 15:59:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:48.541 15:59:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.qru 00:17:48.541 15:59:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 
/tmp/spdk.key-sha256.qru 00:17:48.797 15:59:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:17:48.797 15:59:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.tnr 00:17:48.797 15:59:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:48.797 15:59:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:48.797 15:59:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:48.797 15:59:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.tnr 00:17:48.797 15:59:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.tnr 00:17:49.054 15:59:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n '' ]] 00:17:49.055 15:59:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:17:49.055 15:59:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:49.055 15:59:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:49.055 15:59:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:49.055 15:59:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:49.055 15:59:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 0 00:17:49.055 15:59:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:49.055 15:59:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:49.055 15:59:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:17:49.055 15:59:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:49.055 15:59:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:49.055 15:59:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:49.055 15:59:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:49.055 15:59:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:49.055 15:59:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:49.055 15:59:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:49.055 15:59:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:49.312 00:17:49.312 15:59:18 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:49.312 15:59:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:49.312 15:59:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:49.568 15:59:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:49.568 15:59:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:49.568 15:59:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:49.568 15:59:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:49.568 15:59:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:49.568 15:59:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:49.568 { 00:17:49.568 "cntlid": 1, 00:17:49.568 "qid": 0, 00:17:49.568 "state": "enabled", 00:17:49.568 "thread": "nvmf_tgt_poll_group_000", 00:17:49.568 "listen_address": { 00:17:49.568 "trtype": "TCP", 00:17:49.568 "adrfam": "IPv4", 00:17:49.568 "traddr": "10.0.0.2", 00:17:49.568 "trsvcid": "4420" 00:17:49.568 }, 00:17:49.568 "peer_address": { 00:17:49.568 "trtype": "TCP", 00:17:49.568 "adrfam": "IPv4", 00:17:49.568 "traddr": "10.0.0.1", 00:17:49.568 "trsvcid": "55724" 00:17:49.568 }, 00:17:49.568 "auth": { 00:17:49.568 "state": "completed", 00:17:49.568 "digest": "sha256", 00:17:49.568 "dhgroup": "null" 00:17:49.568 } 00:17:49.568 } 00:17:49.568 ]' 00:17:49.568 15:59:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:49.568 15:59:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:49.568 15:59:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:49.568 15:59:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:17:49.568 15:59:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:49.568 15:59:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:49.568 15:59:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:49.568 15:59:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:49.824 15:59:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:MzFhZDVkZDAyNTMwZDY2OTFmZDY0NDM0YjIwMzlhYzc3NzA5Mzk3Nzk1MWFmYWFlNe7tPQ==: --dhchap-ctrl-secret DHHC-1:03:MDM5M2Q5Njg5MGYzNzdjMzFjNDIzZGNlNzBlNzM5ZjA1NWNmNGI2OTlhNzZmYjhlYzAzYjU1ZjM0ODQzMmViYtXVTjs=: 00:17:50.386 15:59:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:50.386 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:50.386 15:59:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:50.386 15:59:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:50.386 15:59:19 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:50.386 15:59:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:50.386 15:59:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:50.386 15:59:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:50.386 15:59:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:50.642 15:59:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 1 00:17:50.642 15:59:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:50.642 15:59:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:50.642 15:59:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:17:50.642 15:59:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:50.642 15:59:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:50.642 15:59:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:50.642 15:59:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:50.642 15:59:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:50.642 15:59:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:50.642 15:59:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:50.642 15:59:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:50.900 00:17:50.900 15:59:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:50.900 15:59:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:50.900 15:59:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:50.900 15:59:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:50.900 15:59:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:50.900 15:59:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:50.900 15:59:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:50.900 15:59:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:50.900 15:59:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:50.900 { 00:17:50.900 "cntlid": 3, 00:17:50.900 "qid": 0, 00:17:50.900 
"state": "enabled", 00:17:50.900 "thread": "nvmf_tgt_poll_group_000", 00:17:50.900 "listen_address": { 00:17:50.900 "trtype": "TCP", 00:17:50.900 "adrfam": "IPv4", 00:17:50.900 "traddr": "10.0.0.2", 00:17:50.900 "trsvcid": "4420" 00:17:50.900 }, 00:17:50.900 "peer_address": { 00:17:50.900 "trtype": "TCP", 00:17:50.900 "adrfam": "IPv4", 00:17:50.900 "traddr": "10.0.0.1", 00:17:50.900 "trsvcid": "55752" 00:17:50.900 }, 00:17:50.900 "auth": { 00:17:50.900 "state": "completed", 00:17:50.900 "digest": "sha256", 00:17:50.900 "dhgroup": "null" 00:17:50.900 } 00:17:50.900 } 00:17:50.900 ]' 00:17:50.900 15:59:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:51.155 15:59:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:51.155 15:59:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:51.156 15:59:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:17:51.156 15:59:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:51.156 15:59:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:51.156 15:59:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:51.156 15:59:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:51.411 15:59:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:ZTI4N2Q4OTVhN2ExNzA5MzBkYjdhM2ZiYzM2ZTE2YWGKVPxo: --dhchap-ctrl-secret DHHC-1:02:MDE2YWU0NGFhZDllNGFlNGU5MzE1NDE1NzY4ZjBmNGZjZWMzYjc1YTE0MjhiMWYwT6tFBw==: 00:17:51.974 15:59:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:51.974 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:51.974 15:59:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:51.974 15:59:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:51.974 15:59:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:51.974 15:59:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:51.974 15:59:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:51.974 15:59:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:51.974 15:59:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:51.974 15:59:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 2 00:17:51.974 15:59:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:51.974 15:59:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:51.974 15:59:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:17:51.974 15:59:20 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:51.974 15:59:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:51.974 15:59:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:51.974 15:59:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:51.974 15:59:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:51.974 15:59:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:51.974 15:59:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:51.974 15:59:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:52.243 00:17:52.243 15:59:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:52.243 15:59:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:52.243 15:59:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:52.501 15:59:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:52.501 15:59:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:52.501 15:59:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:52.501 15:59:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:52.501 15:59:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:52.501 15:59:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:52.501 { 00:17:52.501 "cntlid": 5, 00:17:52.501 "qid": 0, 00:17:52.501 "state": "enabled", 00:17:52.501 "thread": "nvmf_tgt_poll_group_000", 00:17:52.501 "listen_address": { 00:17:52.501 "trtype": "TCP", 00:17:52.501 "adrfam": "IPv4", 00:17:52.501 "traddr": "10.0.0.2", 00:17:52.501 "trsvcid": "4420" 00:17:52.501 }, 00:17:52.501 "peer_address": { 00:17:52.501 "trtype": "TCP", 00:17:52.501 "adrfam": "IPv4", 00:17:52.501 "traddr": "10.0.0.1", 00:17:52.501 "trsvcid": "55784" 00:17:52.501 }, 00:17:52.501 "auth": { 00:17:52.501 "state": "completed", 00:17:52.501 "digest": "sha256", 00:17:52.501 "dhgroup": "null" 00:17:52.501 } 00:17:52.501 } 00:17:52.501 ]' 00:17:52.501 15:59:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:52.501 15:59:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:52.501 15:59:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:52.501 15:59:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:17:52.501 15:59:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r 
'.[0].auth.state' 00:17:52.501 15:59:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:52.501 15:59:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:52.501 15:59:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:52.757 15:59:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:OTc5NDAxZjllMjViYjlkNjgyYWUxZGU3ZDRhZWY3ZDRiZTI2YjRjYWNlOWQwMDAySmlsEw==: --dhchap-ctrl-secret DHHC-1:01:MTBhMTg4MjQ2MzkxMzM5NDYzZTQxMzFmMGNmNzIyYTjcjymE: 00:17:53.320 15:59:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:53.320 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:53.320 15:59:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:53.320 15:59:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:53.320 15:59:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:53.321 15:59:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:53.321 15:59:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:53.321 15:59:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:53.321 15:59:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:53.578 15:59:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 3 00:17:53.578 15:59:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:53.578 15:59:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:53.578 15:59:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:17:53.578 15:59:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:53.578 15:59:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:53.578 15:59:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:17:53.578 15:59:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:53.578 15:59:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:53.578 15:59:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:53.578 15:59:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:53.578 15:59:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:53.836 00:17:53.836 15:59:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:53.836 15:59:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:53.836 15:59:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:54.093 15:59:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:54.093 15:59:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:54.093 15:59:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:54.093 15:59:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:54.093 15:59:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:54.093 15:59:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:54.093 { 00:17:54.093 "cntlid": 7, 00:17:54.093 "qid": 0, 00:17:54.093 "state": "enabled", 00:17:54.093 "thread": "nvmf_tgt_poll_group_000", 00:17:54.093 "listen_address": { 00:17:54.093 "trtype": "TCP", 00:17:54.093 "adrfam": "IPv4", 00:17:54.093 "traddr": "10.0.0.2", 00:17:54.093 "trsvcid": "4420" 00:17:54.093 }, 00:17:54.093 "peer_address": { 00:17:54.093 "trtype": "TCP", 00:17:54.093 "adrfam": "IPv4", 00:17:54.093 "traddr": "10.0.0.1", 00:17:54.093 "trsvcid": "55810" 00:17:54.093 }, 00:17:54.093 "auth": { 00:17:54.093 "state": "completed", 00:17:54.093 "digest": "sha256", 00:17:54.093 "dhgroup": "null" 00:17:54.093 } 00:17:54.093 } 00:17:54.093 ]' 00:17:54.093 15:59:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:54.093 15:59:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:54.093 15:59:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:54.093 15:59:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:17:54.093 15:59:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:54.093 15:59:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:54.093 15:59:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:54.093 15:59:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:54.350 15:59:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:ZTQ5Zjk1NmU4OTU5ZjU2ZThhZTRiNTkxNzhkYTFkZDliOTA4ODk4ZjQ4ZmQzNGZlOTI2NDQ1MDgxNGQ3MWNlZA1oZjU=: 00:17:54.914 15:59:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:54.914 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:54.914 15:59:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:54.914 15:59:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:54.914 15:59:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:54.914 15:59:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:54.914 15:59:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:54.914 15:59:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:54.914 15:59:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:54.914 15:59:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:55.171 15:59:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 0 00:17:55.171 15:59:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:55.171 15:59:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:55.171 15:59:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:17:55.171 15:59:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:55.171 15:59:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:55.171 15:59:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:55.171 15:59:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:55.171 15:59:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:55.171 15:59:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:55.171 15:59:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:55.171 15:59:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:55.171 00:17:55.427 15:59:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:55.427 15:59:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:55.427 15:59:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:55.427 15:59:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:55.427 15:59:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:55.427 15:59:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 
-- # xtrace_disable 00:17:55.427 15:59:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:55.427 15:59:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:55.427 15:59:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:55.427 { 00:17:55.427 "cntlid": 9, 00:17:55.427 "qid": 0, 00:17:55.427 "state": "enabled", 00:17:55.427 "thread": "nvmf_tgt_poll_group_000", 00:17:55.427 "listen_address": { 00:17:55.427 "trtype": "TCP", 00:17:55.427 "adrfam": "IPv4", 00:17:55.427 "traddr": "10.0.0.2", 00:17:55.427 "trsvcid": "4420" 00:17:55.427 }, 00:17:55.427 "peer_address": { 00:17:55.427 "trtype": "TCP", 00:17:55.427 "adrfam": "IPv4", 00:17:55.427 "traddr": "10.0.0.1", 00:17:55.427 "trsvcid": "55846" 00:17:55.427 }, 00:17:55.427 "auth": { 00:17:55.427 "state": "completed", 00:17:55.427 "digest": "sha256", 00:17:55.427 "dhgroup": "ffdhe2048" 00:17:55.427 } 00:17:55.427 } 00:17:55.427 ]' 00:17:55.427 15:59:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:55.427 15:59:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:55.427 15:59:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:55.683 15:59:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:55.683 15:59:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:55.683 15:59:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:55.683 15:59:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:55.683 15:59:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:55.683 15:59:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:MzFhZDVkZDAyNTMwZDY2OTFmZDY0NDM0YjIwMzlhYzc3NzA5Mzk3Nzk1MWFmYWFlNe7tPQ==: --dhchap-ctrl-secret DHHC-1:03:MDM5M2Q5Njg5MGYzNzdjMzFjNDIzZGNlNzBlNzM5ZjA1NWNmNGI2OTlhNzZmYjhlYzAzYjU1ZjM0ODQzMmViYtXVTjs=: 00:17:56.268 15:59:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:56.268 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:56.268 15:59:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:56.268 15:59:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:56.268 15:59:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:56.268 15:59:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:56.268 15:59:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:56.268 15:59:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:56.268 15:59:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 
--dhchap-dhgroups ffdhe2048 00:17:56.530 15:59:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 1 00:17:56.530 15:59:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:56.530 15:59:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:56.530 15:59:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:17:56.530 15:59:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:56.530 15:59:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:56.530 15:59:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:56.530 15:59:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:56.530 15:59:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:56.530 15:59:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:56.530 15:59:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:56.530 15:59:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:56.788 00:17:56.788 15:59:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:56.788 15:59:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:56.788 15:59:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:57.046 15:59:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:57.046 15:59:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:57.046 15:59:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:57.046 15:59:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:57.046 15:59:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:57.046 15:59:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:57.046 { 00:17:57.046 "cntlid": 11, 00:17:57.046 "qid": 0, 00:17:57.046 "state": "enabled", 00:17:57.046 "thread": "nvmf_tgt_poll_group_000", 00:17:57.046 "listen_address": { 00:17:57.046 "trtype": "TCP", 00:17:57.046 "adrfam": "IPv4", 00:17:57.046 "traddr": "10.0.0.2", 00:17:57.046 "trsvcid": "4420" 00:17:57.046 }, 00:17:57.046 "peer_address": { 00:17:57.046 "trtype": "TCP", 00:17:57.046 "adrfam": "IPv4", 00:17:57.046 "traddr": "10.0.0.1", 00:17:57.046 "trsvcid": "55878" 00:17:57.046 }, 00:17:57.046 "auth": { 00:17:57.046 "state": "completed", 00:17:57.046 "digest": "sha256", 00:17:57.046 "dhgroup": "ffdhe2048" 00:17:57.046 } 00:17:57.046 } 00:17:57.046 ]' 00:17:57.046 
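The qpairs dump above is where the result of the handshake becomes observable: each qpair entry carries an auth object with the completion state and the negotiated digest and DH group, and the comparisons that resume below check it field by field. The backslash-riddled forms in the trace, e.g. [[ sha256 == \s\h\a\2\5\6 ]], are only an xtrace rendering artifact: inside [[ ]] bash prints a quoted right-hand side of == with every character escaped, because an unquoted one would be a glob pattern. A minimal sketch of the assertions (jq paths as in the dump; variable names assumed):

    # The negotiated parameters live under .auth of each qpair entry.
    # Quoting the right-hand side of == forces a literal comparison;
    # xtrace then re-prints it escaped, which is where \s\h\a\2\5\6
    # comes from.
    qpairs=$(rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
    jq -r '.[0].auth | "\(.state) \(.digest) \(.dhgroup)"' <<< "$qpairs"
    [[ $(jq -r '.[0].auth.state' <<< "$qpairs") == "completed" ]]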
15:59:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:57.046 15:59:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:57.046 15:59:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:57.046 15:59:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:57.046 15:59:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:57.046 15:59:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:57.046 15:59:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:57.046 15:59:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:57.318 15:59:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:ZTI4N2Q4OTVhN2ExNzA5MzBkYjdhM2ZiYzM2ZTE2YWGKVPxo: --dhchap-ctrl-secret DHHC-1:02:MDE2YWU0NGFhZDllNGFlNGU5MzE1NDE1NzY4ZjBmNGZjZWMzYjc1YTE0MjhiMWYwT6tFBw==: 00:17:57.882 15:59:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:57.882 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:57.882 15:59:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:57.882 15:59:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:57.882 15:59:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:57.882 15:59:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:57.882 15:59:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:57.882 15:59:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:57.882 15:59:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:58.139 15:59:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 2 00:17:58.140 15:59:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:58.140 15:59:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:58.140 15:59:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:17:58.140 15:59:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:58.140 15:59:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:58.140 15:59:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:58.140 15:59:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:58.140 15:59:26 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:17:58.140 15:59:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:58.140 15:59:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:58.140 15:59:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:58.396 00:17:58.396 15:59:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:58.396 15:59:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:58.396 15:59:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:58.396 15:59:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:58.396 15:59:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:58.396 15:59:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:58.652 15:59:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:58.652 15:59:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:58.652 15:59:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:58.652 { 00:17:58.652 "cntlid": 13, 00:17:58.652 "qid": 0, 00:17:58.652 "state": "enabled", 00:17:58.652 "thread": "nvmf_tgt_poll_group_000", 00:17:58.652 "listen_address": { 00:17:58.652 "trtype": "TCP", 00:17:58.652 "adrfam": "IPv4", 00:17:58.652 "traddr": "10.0.0.2", 00:17:58.652 "trsvcid": "4420" 00:17:58.652 }, 00:17:58.652 "peer_address": { 00:17:58.652 "trtype": "TCP", 00:17:58.652 "adrfam": "IPv4", 00:17:58.652 "traddr": "10.0.0.1", 00:17:58.652 "trsvcid": "59634" 00:17:58.652 }, 00:17:58.652 "auth": { 00:17:58.652 "state": "completed", 00:17:58.652 "digest": "sha256", 00:17:58.652 "dhgroup": "ffdhe2048" 00:17:58.652 } 00:17:58.652 } 00:17:58.652 ]' 00:17:58.652 15:59:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:58.652 15:59:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:58.652 15:59:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:58.652 15:59:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:58.652 15:59:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:58.652 15:59:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:58.652 15:59:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:58.652 15:59:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:58.908 15:59:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n 
nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:OTc5NDAxZjllMjViYjlkNjgyYWUxZGU3ZDRhZWY3ZDRiZTI2YjRjYWNlOWQwMDAySmlsEw==: --dhchap-ctrl-secret DHHC-1:01:MTBhMTg4MjQ2MzkxMzM5NDYzZTQxMzFmMGNmNzIyYTjcjymE: 00:17:59.472 15:59:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:59.472 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:59.472 15:59:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:59.472 15:59:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:59.472 15:59:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:59.472 15:59:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:59.472 15:59:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:59.472 15:59:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:59.472 15:59:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:59.729 15:59:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 3 00:17:59.729 15:59:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:59.729 15:59:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:59.729 15:59:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:17:59.729 15:59:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:59.729 15:59:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:59.729 15:59:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:17:59.729 15:59:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:59.729 15:59:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:59.729 15:59:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:59.729 15:59:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:59.729 15:59:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:59.729 00:17:59.985 15:59:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:59.985 15:59:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:59.985 15:59:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:59.985 15:59:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:59.985 15:59:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:59.985 15:59:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:59.985 15:59:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:59.985 15:59:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:59.985 15:59:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:59.985 { 00:17:59.985 "cntlid": 15, 00:17:59.985 "qid": 0, 00:17:59.985 "state": "enabled", 00:17:59.985 "thread": "nvmf_tgt_poll_group_000", 00:17:59.985 "listen_address": { 00:17:59.985 "trtype": "TCP", 00:17:59.985 "adrfam": "IPv4", 00:17:59.985 "traddr": "10.0.0.2", 00:17:59.985 "trsvcid": "4420" 00:17:59.985 }, 00:17:59.985 "peer_address": { 00:17:59.985 "trtype": "TCP", 00:17:59.985 "adrfam": "IPv4", 00:17:59.985 "traddr": "10.0.0.1", 00:17:59.985 "trsvcid": "59664" 00:17:59.985 }, 00:17:59.985 "auth": { 00:17:59.985 "state": "completed", 00:17:59.985 "digest": "sha256", 00:17:59.985 "dhgroup": "ffdhe2048" 00:17:59.985 } 00:17:59.985 } 00:17:59.985 ]' 00:17:59.985 15:59:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:59.985 15:59:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:59.985 15:59:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:00.242 15:59:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:00.242 15:59:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:00.242 15:59:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:00.242 15:59:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:00.242 15:59:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:00.242 15:59:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:ZTQ5Zjk1NmU4OTU5ZjU2ZThhZTRiNTkxNzhkYTFkZDliOTA4ODk4ZjQ4ZmQzNGZlOTI2NDQ1MDgxNGQ3MWNlZA1oZjU=: 00:18:00.805 15:59:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:00.805 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:00.805 15:59:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:00.805 15:59:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:00.805 15:59:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:00.805 15:59:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:00.805 15:59:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:00.805 15:59:29 nvmf_tcp.nvmf_auth_target 
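At this point the outer loop advances from ffdhe2048 to ffdhe3072 while the digest stays sha256. The whole matrix is presumably driven by two nested loops over the configured DH groups and the generated key indices, along the lines of the sketch below (array contents inferred from the groups and indices visible in this stretch of the trace; the full list may be longer):

    # Inferred driver loops: every (dhgroup, keyid) pair gets a full
    # connect/verify/disconnect round. "null" means DH-HMAC-CHAP without
    # an ephemeral Diffie-Hellman exchange.
    dhgroups=(null ffdhe2048 ffdhe3072 ffdhe4096)
    for dhgroup in "${dhgroups[@]}"; do
        for keyid in "${!keys[@]}"; do
            hostrpc bdev_nvme_set_options \
                --dhchap-digests sha256 --dhchap-dhgroups "$dhgroup"
            connect_authenticate sha256 "$dhgroup" "$keyid"
        done
    done

Note that bdev_nvme_set_options is re-issued inside the inner loop, matching the trace, where it precedes every connect_authenticate call.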
-- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:00.805 15:59:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:00.805 15:59:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:01.062 15:59:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 0 00:18:01.062 15:59:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:01.062 15:59:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:01.062 15:59:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:18:01.062 15:59:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:01.062 15:59:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:01.062 15:59:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:01.062 15:59:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:01.062 15:59:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:01.062 15:59:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:01.062 15:59:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:01.062 15:59:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:01.320 00:18:01.320 15:59:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:01.320 15:59:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:01.320 15:59:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:01.577 15:59:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:01.577 15:59:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:01.577 15:59:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:01.577 15:59:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:01.577 15:59:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:01.577 15:59:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:01.577 { 00:18:01.577 "cntlid": 17, 00:18:01.577 "qid": 0, 00:18:01.577 "state": "enabled", 00:18:01.577 "thread": "nvmf_tgt_poll_group_000", 00:18:01.577 "listen_address": { 00:18:01.577 "trtype": "TCP", 00:18:01.577 "adrfam": "IPv4", 00:18:01.577 "traddr": 
"10.0.0.2", 00:18:01.577 "trsvcid": "4420" 00:18:01.577 }, 00:18:01.577 "peer_address": { 00:18:01.577 "trtype": "TCP", 00:18:01.577 "adrfam": "IPv4", 00:18:01.577 "traddr": "10.0.0.1", 00:18:01.577 "trsvcid": "59696" 00:18:01.577 }, 00:18:01.577 "auth": { 00:18:01.577 "state": "completed", 00:18:01.577 "digest": "sha256", 00:18:01.577 "dhgroup": "ffdhe3072" 00:18:01.577 } 00:18:01.577 } 00:18:01.577 ]' 00:18:01.577 15:59:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:01.577 15:59:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:01.577 15:59:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:01.577 15:59:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:01.577 15:59:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:01.577 15:59:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:01.577 15:59:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:01.577 15:59:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:01.834 15:59:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:MzFhZDVkZDAyNTMwZDY2OTFmZDY0NDM0YjIwMzlhYzc3NzA5Mzk3Nzk1MWFmYWFlNe7tPQ==: --dhchap-ctrl-secret DHHC-1:03:MDM5M2Q5Njg5MGYzNzdjMzFjNDIzZGNlNzBlNzM5ZjA1NWNmNGI2OTlhNzZmYjhlYzAzYjU1ZjM0ODQzMmViYtXVTjs=: 00:18:02.397 15:59:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:02.397 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:02.397 15:59:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:02.397 15:59:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:02.397 15:59:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:02.397 15:59:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:02.397 15:59:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:02.397 15:59:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:02.397 15:59:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:02.654 15:59:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 1 00:18:02.654 15:59:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:02.654 15:59:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:02.654 15:59:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:18:02.655 15:59:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:02.655 15:59:31 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:02.655 15:59:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:02.655 15:59:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:02.655 15:59:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:02.655 15:59:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:02.655 15:59:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:02.655 15:59:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:02.912 00:18:02.912 15:59:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:02.912 15:59:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:02.912 15:59:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:02.912 15:59:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:02.912 15:59:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:03.168 15:59:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:03.168 15:59:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:03.168 15:59:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:03.168 15:59:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:03.168 { 00:18:03.168 "cntlid": 19, 00:18:03.168 "qid": 0, 00:18:03.168 "state": "enabled", 00:18:03.168 "thread": "nvmf_tgt_poll_group_000", 00:18:03.168 "listen_address": { 00:18:03.168 "trtype": "TCP", 00:18:03.168 "adrfam": "IPv4", 00:18:03.168 "traddr": "10.0.0.2", 00:18:03.168 "trsvcid": "4420" 00:18:03.168 }, 00:18:03.168 "peer_address": { 00:18:03.168 "trtype": "TCP", 00:18:03.168 "adrfam": "IPv4", 00:18:03.168 "traddr": "10.0.0.1", 00:18:03.168 "trsvcid": "59722" 00:18:03.168 }, 00:18:03.168 "auth": { 00:18:03.168 "state": "completed", 00:18:03.168 "digest": "sha256", 00:18:03.168 "dhgroup": "ffdhe3072" 00:18:03.168 } 00:18:03.168 } 00:18:03.168 ]' 00:18:03.168 15:59:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:03.168 15:59:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:03.168 15:59:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:03.168 15:59:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:03.168 15:59:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:03.168 15:59:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # 
[[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:03.168 15:59:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:03.168 15:59:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:03.424 15:59:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:ZTI4N2Q4OTVhN2ExNzA5MzBkYjdhM2ZiYzM2ZTE2YWGKVPxo: --dhchap-ctrl-secret DHHC-1:02:MDE2YWU0NGFhZDllNGFlNGU5MzE1NDE1NzY4ZjBmNGZjZWMzYjc1YTE0MjhiMWYwT6tFBw==: 00:18:03.986 15:59:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:03.986 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:03.986 15:59:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:03.986 15:59:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:03.986 15:59:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:03.986 15:59:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:03.986 15:59:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:03.986 15:59:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:03.986 15:59:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:04.243 15:59:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 2 00:18:04.243 15:59:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:04.243 15:59:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:04.243 15:59:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:18:04.243 15:59:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:04.243 15:59:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:04.243 15:59:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:04.243 15:59:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:04.243 15:59:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:04.243 15:59:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:04.243 15:59:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:04.243 15:59:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:04.500 00:18:04.500 15:59:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:04.500 15:59:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:04.500 15:59:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:04.500 15:59:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:04.500 15:59:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:04.500 15:59:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:04.500 15:59:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:04.756 15:59:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:04.756 15:59:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:04.756 { 00:18:04.756 "cntlid": 21, 00:18:04.756 "qid": 0, 00:18:04.756 "state": "enabled", 00:18:04.756 "thread": "nvmf_tgt_poll_group_000", 00:18:04.756 "listen_address": { 00:18:04.756 "trtype": "TCP", 00:18:04.756 "adrfam": "IPv4", 00:18:04.756 "traddr": "10.0.0.2", 00:18:04.756 "trsvcid": "4420" 00:18:04.756 }, 00:18:04.756 "peer_address": { 00:18:04.756 "trtype": "TCP", 00:18:04.756 "adrfam": "IPv4", 00:18:04.756 "traddr": "10.0.0.1", 00:18:04.756 "trsvcid": "59742" 00:18:04.756 }, 00:18:04.756 "auth": { 00:18:04.756 "state": "completed", 00:18:04.756 "digest": "sha256", 00:18:04.756 "dhgroup": "ffdhe3072" 00:18:04.756 } 00:18:04.756 } 00:18:04.756 ]' 00:18:04.757 15:59:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:04.757 15:59:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:04.757 15:59:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:04.757 15:59:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:04.757 15:59:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:04.757 15:59:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:04.757 15:59:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:04.757 15:59:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:05.013 15:59:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:OTc5NDAxZjllMjViYjlkNjgyYWUxZGU3ZDRhZWY3ZDRiZTI2YjRjYWNlOWQwMDAySmlsEw==: --dhchap-ctrl-secret DHHC-1:01:MTBhMTg4MjQ2MzkxMzM5NDYzZTQxMzFmMGNmNzIyYTjcjymE: 00:18:05.590 15:59:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:05.590 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
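The round that follows uses key3, the one key generated without a controller counterpart, so authentication is unidirectional: in the trace its nvmf_subsystem_add_host and bdev_nvme_attach_controller calls carry no --dhchap-ctrlr-key, and its nvme connect passes no --dhchap-ctrl-secret. The ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) line seen throughout is the bash idiom that makes this automatic — ${var:+word} expands to word only when var is set and non-empty. A small illustration (placeholder values, not the test's real keys):

    # ${ckeys[i]:+...} expands to the extra flags only when a controller
    # key exists for that index, so an empty ckeys[3] silently drops
    # bidirectional authentication.
    ckeys[2]="DHHC-1:02:placeholder=="
    ckeys[3]=""
    for i in 2 3; do
        ckey=(${ckeys[$i]:+--dhchap-ctrlr-key "ckey$i"})
        echo "keyid=$i -> ${ckey[@]:-<no controller key>}"
    done
    # keyid=2 -> --dhchap-ctrlr-key ckey2
    # keyid=3 -> <no controller key>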
00:18:05.590 15:59:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:05.590 15:59:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:05.590 15:59:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:05.590 15:59:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:05.590 15:59:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:05.590 15:59:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:05.590 15:59:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:05.590 15:59:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 3 00:18:05.590 15:59:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:05.590 15:59:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:05.590 15:59:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:18:05.590 15:59:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:05.590 15:59:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:05.590 15:59:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:18:05.590 15:59:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:05.590 15:59:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:05.845 15:59:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:05.845 15:59:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:05.845 15:59:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:05.845 00:18:05.845 15:59:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:05.845 15:59:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:05.845 15:59:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:06.101 15:59:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:06.101 15:59:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:06.101 15:59:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:06.101 15:59:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 
-- # set +x 00:18:06.101 15:59:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:06.101 15:59:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:06.101 { 00:18:06.101 "cntlid": 23, 00:18:06.101 "qid": 0, 00:18:06.101 "state": "enabled", 00:18:06.101 "thread": "nvmf_tgt_poll_group_000", 00:18:06.101 "listen_address": { 00:18:06.101 "trtype": "TCP", 00:18:06.101 "adrfam": "IPv4", 00:18:06.101 "traddr": "10.0.0.2", 00:18:06.101 "trsvcid": "4420" 00:18:06.101 }, 00:18:06.101 "peer_address": { 00:18:06.101 "trtype": "TCP", 00:18:06.101 "adrfam": "IPv4", 00:18:06.101 "traddr": "10.0.0.1", 00:18:06.101 "trsvcid": "59756" 00:18:06.101 }, 00:18:06.101 "auth": { 00:18:06.101 "state": "completed", 00:18:06.101 "digest": "sha256", 00:18:06.101 "dhgroup": "ffdhe3072" 00:18:06.101 } 00:18:06.101 } 00:18:06.101 ]' 00:18:06.101 15:59:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:06.101 15:59:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:06.101 15:59:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:06.101 15:59:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:06.101 15:59:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:06.356 15:59:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:06.356 15:59:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:06.356 15:59:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:06.356 15:59:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:ZTQ5Zjk1NmU4OTU5ZjU2ZThhZTRiNTkxNzhkYTFkZDliOTA4ODk4ZjQ4ZmQzNGZlOTI2NDQ1MDgxNGQ3MWNlZA1oZjU=: 00:18:06.919 15:59:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:06.919 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:06.919 15:59:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:06.919 15:59:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:06.919 15:59:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:06.919 15:59:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:06.919 15:59:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:06.919 15:59:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:06.919 15:59:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:06.919 15:59:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:07.175 15:59:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # 
connect_authenticate sha256 ffdhe4096 0 00:18:07.175 15:59:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:07.175 15:59:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:07.175 15:59:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:18:07.175 15:59:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:07.175 15:59:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:07.175 15:59:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:07.175 15:59:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:07.175 15:59:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:07.175 15:59:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:07.175 15:59:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:07.175 15:59:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:07.431 00:18:07.431 15:59:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:07.431 15:59:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:07.431 15:59:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:07.687 15:59:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:07.687 15:59:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:07.687 15:59:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:07.687 15:59:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:07.687 15:59:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:07.687 15:59:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:07.687 { 00:18:07.687 "cntlid": 25, 00:18:07.687 "qid": 0, 00:18:07.687 "state": "enabled", 00:18:07.687 "thread": "nvmf_tgt_poll_group_000", 00:18:07.687 "listen_address": { 00:18:07.687 "trtype": "TCP", 00:18:07.687 "adrfam": "IPv4", 00:18:07.687 "traddr": "10.0.0.2", 00:18:07.687 "trsvcid": "4420" 00:18:07.687 }, 00:18:07.687 "peer_address": { 00:18:07.687 "trtype": "TCP", 00:18:07.687 "adrfam": "IPv4", 00:18:07.687 "traddr": "10.0.0.1", 00:18:07.687 "trsvcid": "59788" 00:18:07.687 }, 00:18:07.687 "auth": { 00:18:07.687 "state": "completed", 00:18:07.687 "digest": "sha256", 00:18:07.687 "dhgroup": "ffdhe4096" 00:18:07.687 } 00:18:07.687 } 00:18:07.687 ]' 00:18:07.687 15:59:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:07.687 15:59:36 
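Two RPC endpoints are in play throughout this trace: bare rpc_cmd calls (nvmf_subsystem_add_host, nvmf_subsystem_get_qpairs, ...) go to the target application's default socket, while everything prefixed hostrpc is routed to a second SPDK application acting as the initiator via -s /var/tmp/host.sock — which is why target-side subsystem commands and host-side bdev_nvme controller commands appear interleaved in one log. The wrapper is presumably no more than (path and socket taken verbatim from the trace):

    # Host-side RPC wrapper as the trace suggests: the same rpc.py,
    # pointed at the initiator app's Unix socket instead of the target's.
    rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    hostrpc() {
        "$rootdir/scripts/rpc.py" -s /var/tmp/host.sock "$@"
    }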
nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:07.687 15:59:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:07.687 15:59:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:07.687 15:59:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:07.687 15:59:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:07.687 15:59:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:07.687 15:59:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:07.944 15:59:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:MzFhZDVkZDAyNTMwZDY2OTFmZDY0NDM0YjIwMzlhYzc3NzA5Mzk3Nzk1MWFmYWFlNe7tPQ==: --dhchap-ctrl-secret DHHC-1:03:MDM5M2Q5Njg5MGYzNzdjMzFjNDIzZGNlNzBlNzM5ZjA1NWNmNGI2OTlhNzZmYjhlYzAzYjU1ZjM0ODQzMmViYtXVTjs=: 00:18:08.505 15:59:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:08.505 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:08.505 15:59:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:08.505 15:59:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:08.505 15:59:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:08.505 15:59:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:08.505 15:59:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:08.505 15:59:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:08.505 15:59:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:08.763 15:59:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 1 00:18:08.763 15:59:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:08.763 15:59:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:08.763 15:59:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:18:08.763 15:59:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:08.763 15:59:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:08.763 15:59:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:08.763 15:59:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:08.763 15:59:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:08.763 15:59:37 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:08.763 15:59:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:08.763 15:59:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:09.020 00:18:09.020 15:59:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:09.020 15:59:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:09.020 15:59:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:09.020 15:59:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:09.020 15:59:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:09.020 15:59:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:09.020 15:59:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:09.020 15:59:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:09.020 15:59:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:09.020 { 00:18:09.020 "cntlid": 27, 00:18:09.020 "qid": 0, 00:18:09.020 "state": "enabled", 00:18:09.020 "thread": "nvmf_tgt_poll_group_000", 00:18:09.020 "listen_address": { 00:18:09.020 "trtype": "TCP", 00:18:09.020 "adrfam": "IPv4", 00:18:09.020 "traddr": "10.0.0.2", 00:18:09.020 "trsvcid": "4420" 00:18:09.020 }, 00:18:09.020 "peer_address": { 00:18:09.020 "trtype": "TCP", 00:18:09.020 "adrfam": "IPv4", 00:18:09.020 "traddr": "10.0.0.1", 00:18:09.020 "trsvcid": "48304" 00:18:09.020 }, 00:18:09.020 "auth": { 00:18:09.020 "state": "completed", 00:18:09.020 "digest": "sha256", 00:18:09.020 "dhgroup": "ffdhe4096" 00:18:09.020 } 00:18:09.020 } 00:18:09.020 ]' 00:18:09.020 15:59:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:09.277 15:59:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:09.277 15:59:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:09.277 15:59:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:09.277 15:59:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:09.277 15:59:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:09.277 15:59:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:09.277 15:59:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:09.549 15:59:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:ZTI4N2Q4OTVhN2ExNzA5MzBkYjdhM2ZiYzM2ZTE2YWGKVPxo: --dhchap-ctrl-secret DHHC-1:02:MDE2YWU0NGFhZDllNGFlNGU5MzE1NDE1NzY4ZjBmNGZjZWMzYjc1YTE0MjhiMWYwT6tFBw==: 00:18:10.114 15:59:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:10.114 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:10.114 15:59:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:10.114 15:59:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:10.114 15:59:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:10.114 15:59:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:10.114 15:59:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:10.114 15:59:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:10.114 15:59:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:10.114 15:59:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 2 00:18:10.114 15:59:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:10.114 15:59:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:10.114 15:59:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:18:10.114 15:59:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:10.114 15:59:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:10.114 15:59:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:10.114 15:59:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:10.114 15:59:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:10.114 15:59:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:10.114 15:59:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:10.114 15:59:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:10.381 00:18:10.381 15:59:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:10.381 15:59:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:10.381 15:59:39 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:10.639 15:59:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:10.639 15:59:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:10.639 15:59:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:10.639 15:59:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:10.639 15:59:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:10.639 15:59:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:10.639 { 00:18:10.639 "cntlid": 29, 00:18:10.639 "qid": 0, 00:18:10.639 "state": "enabled", 00:18:10.639 "thread": "nvmf_tgt_poll_group_000", 00:18:10.639 "listen_address": { 00:18:10.639 "trtype": "TCP", 00:18:10.639 "adrfam": "IPv4", 00:18:10.639 "traddr": "10.0.0.2", 00:18:10.639 "trsvcid": "4420" 00:18:10.639 }, 00:18:10.639 "peer_address": { 00:18:10.639 "trtype": "TCP", 00:18:10.639 "adrfam": "IPv4", 00:18:10.639 "traddr": "10.0.0.1", 00:18:10.639 "trsvcid": "48322" 00:18:10.639 }, 00:18:10.639 "auth": { 00:18:10.639 "state": "completed", 00:18:10.639 "digest": "sha256", 00:18:10.639 "dhgroup": "ffdhe4096" 00:18:10.639 } 00:18:10.639 } 00:18:10.639 ]' 00:18:10.639 15:59:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:10.639 15:59:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:10.639 15:59:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:10.639 15:59:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:10.639 15:59:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:10.639 15:59:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:10.639 15:59:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:10.639 15:59:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:10.896 15:59:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:OTc5NDAxZjllMjViYjlkNjgyYWUxZGU3ZDRhZWY3ZDRiZTI2YjRjYWNlOWQwMDAySmlsEw==: --dhchap-ctrl-secret DHHC-1:01:MTBhMTg4MjQ2MzkxMzM5NDYzZTQxMzFmMGNmNzIyYTjcjymE: 00:18:11.460 15:59:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:11.460 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:11.460 15:59:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:11.460 15:59:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:11.460 15:59:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:11.460 15:59:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:11.460 15:59:40 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:11.460 15:59:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:11.460 15:59:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:11.717 15:59:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 3 00:18:11.718 15:59:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:11.718 15:59:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:11.718 15:59:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:18:11.718 15:59:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:11.718 15:59:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:11.718 15:59:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:18:11.718 15:59:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:11.718 15:59:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:11.718 15:59:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:11.718 15:59:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:11.718 15:59:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:11.975 00:18:11.975 15:59:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:11.975 15:59:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:11.975 15:59:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:12.232 15:59:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:12.232 15:59:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:12.232 15:59:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:12.232 15:59:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:12.232 15:59:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:12.232 15:59:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:12.232 { 00:18:12.232 "cntlid": 31, 00:18:12.232 "qid": 0, 00:18:12.232 "state": "enabled", 00:18:12.232 "thread": "nvmf_tgt_poll_group_000", 00:18:12.232 "listen_address": { 00:18:12.232 "trtype": "TCP", 00:18:12.232 "adrfam": "IPv4", 00:18:12.232 "traddr": "10.0.0.2", 00:18:12.232 "trsvcid": "4420" 00:18:12.232 }, 
00:18:12.233 "peer_address": { 00:18:12.233 "trtype": "TCP", 00:18:12.233 "adrfam": "IPv4", 00:18:12.233 "traddr": "10.0.0.1", 00:18:12.233 "trsvcid": "48344" 00:18:12.233 }, 00:18:12.233 "auth": { 00:18:12.233 "state": "completed", 00:18:12.233 "digest": "sha256", 00:18:12.233 "dhgroup": "ffdhe4096" 00:18:12.233 } 00:18:12.233 } 00:18:12.233 ]' 00:18:12.233 15:59:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:12.233 15:59:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:12.233 15:59:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:12.233 15:59:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:12.233 15:59:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:12.233 15:59:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:12.233 15:59:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:12.233 15:59:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:12.490 15:59:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:ZTQ5Zjk1NmU4OTU5ZjU2ZThhZTRiNTkxNzhkYTFkZDliOTA4ODk4ZjQ4ZmQzNGZlOTI2NDQ1MDgxNGQ3MWNlZA1oZjU=: 00:18:13.054 15:59:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:13.054 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:13.054 15:59:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:13.054 15:59:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:13.054 15:59:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:13.054 15:59:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:13.054 15:59:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:13.054 15:59:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:13.054 15:59:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:13.054 15:59:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:13.311 15:59:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 0 00:18:13.311 15:59:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:13.311 15:59:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:13.311 15:59:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:18:13.311 15:59:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:13.311 15:59:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key 
"ckey$3"}) 00:18:13.311 15:59:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:13.311 15:59:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:13.311 15:59:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:13.311 15:59:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:13.311 15:59:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:13.311 15:59:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:13.567 00:18:13.567 15:59:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:13.567 15:59:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:13.567 15:59:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:13.824 15:59:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:13.824 15:59:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:13.824 15:59:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:13.824 15:59:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:13.824 15:59:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:13.824 15:59:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:13.824 { 00:18:13.824 "cntlid": 33, 00:18:13.824 "qid": 0, 00:18:13.824 "state": "enabled", 00:18:13.824 "thread": "nvmf_tgt_poll_group_000", 00:18:13.824 "listen_address": { 00:18:13.824 "trtype": "TCP", 00:18:13.824 "adrfam": "IPv4", 00:18:13.824 "traddr": "10.0.0.2", 00:18:13.824 "trsvcid": "4420" 00:18:13.824 }, 00:18:13.824 "peer_address": { 00:18:13.824 "trtype": "TCP", 00:18:13.824 "adrfam": "IPv4", 00:18:13.824 "traddr": "10.0.0.1", 00:18:13.824 "trsvcid": "48358" 00:18:13.824 }, 00:18:13.824 "auth": { 00:18:13.824 "state": "completed", 00:18:13.824 "digest": "sha256", 00:18:13.824 "dhgroup": "ffdhe6144" 00:18:13.824 } 00:18:13.824 } 00:18:13.824 ]' 00:18:13.824 15:59:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:13.824 15:59:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:13.824 15:59:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:13.824 15:59:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:13.824 15:59:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:13.824 15:59:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:13.824 15:59:42 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:13.824 15:59:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:14.080 15:59:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:MzFhZDVkZDAyNTMwZDY2OTFmZDY0NDM0YjIwMzlhYzc3NzA5Mzk3Nzk1MWFmYWFlNe7tPQ==: --dhchap-ctrl-secret DHHC-1:03:MDM5M2Q5Njg5MGYzNzdjMzFjNDIzZGNlNzBlNzM5ZjA1NWNmNGI2OTlhNzZmYjhlYzAzYjU1ZjM0ODQzMmViYtXVTjs=: 00:18:14.641 15:59:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:14.641 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:14.641 15:59:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:14.641 15:59:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:14.641 15:59:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:14.641 15:59:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:14.641 15:59:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:14.641 15:59:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:14.641 15:59:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:14.898 15:59:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 1 00:18:14.898 15:59:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:14.898 15:59:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:14.898 15:59:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:18:14.898 15:59:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:14.898 15:59:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:14.898 15:59:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:14.898 15:59:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:14.898 15:59:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:14.898 15:59:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:14.898 15:59:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:14.898 15:59:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:15.154 00:18:15.154 15:59:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:15.154 15:59:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:15.154 15:59:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:15.411 15:59:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:15.411 15:59:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:15.411 15:59:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:15.411 15:59:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:15.411 15:59:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:15.411 15:59:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:15.411 { 00:18:15.411 "cntlid": 35, 00:18:15.411 "qid": 0, 00:18:15.411 "state": "enabled", 00:18:15.411 "thread": "nvmf_tgt_poll_group_000", 00:18:15.411 "listen_address": { 00:18:15.411 "trtype": "TCP", 00:18:15.411 "adrfam": "IPv4", 00:18:15.411 "traddr": "10.0.0.2", 00:18:15.411 "trsvcid": "4420" 00:18:15.411 }, 00:18:15.411 "peer_address": { 00:18:15.411 "trtype": "TCP", 00:18:15.411 "adrfam": "IPv4", 00:18:15.411 "traddr": "10.0.0.1", 00:18:15.411 "trsvcid": "48386" 00:18:15.411 }, 00:18:15.411 "auth": { 00:18:15.411 "state": "completed", 00:18:15.411 "digest": "sha256", 00:18:15.411 "dhgroup": "ffdhe6144" 00:18:15.411 } 00:18:15.411 } 00:18:15.411 ]' 00:18:15.411 15:59:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:15.411 15:59:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:15.411 15:59:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:15.411 15:59:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:15.411 15:59:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:15.411 15:59:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:15.411 15:59:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:15.411 15:59:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:15.667 15:59:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:ZTI4N2Q4OTVhN2ExNzA5MzBkYjdhM2ZiYzM2ZTE2YWGKVPxo: --dhchap-ctrl-secret DHHC-1:02:MDE2YWU0NGFhZDllNGFlNGU5MzE1NDE1NzY4ZjBmNGZjZWMzYjc1YTE0MjhiMWYwT6tFBw==: 00:18:16.229 15:59:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:16.229 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:16.229 15:59:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 
-- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:16.229 15:59:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:16.229 15:59:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:16.229 15:59:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:16.229 15:59:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:16.229 15:59:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:16.229 15:59:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:16.487 15:59:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 2 00:18:16.487 15:59:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:16.487 15:59:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:16.487 15:59:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:18:16.487 15:59:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:16.487 15:59:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:16.487 15:59:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:16.487 15:59:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:16.487 15:59:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:16.487 15:59:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:16.487 15:59:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:16.487 15:59:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:16.745 00:18:16.745 15:59:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:16.745 15:59:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:16.745 15:59:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:17.001 15:59:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:17.001 15:59:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:17.001 15:59:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:17.001 15:59:45 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:18:17.001 15:59:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:17.002 15:59:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:17.002 { 00:18:17.002 "cntlid": 37, 00:18:17.002 "qid": 0, 00:18:17.002 "state": "enabled", 00:18:17.002 "thread": "nvmf_tgt_poll_group_000", 00:18:17.002 "listen_address": { 00:18:17.002 "trtype": "TCP", 00:18:17.002 "adrfam": "IPv4", 00:18:17.002 "traddr": "10.0.0.2", 00:18:17.002 "trsvcid": "4420" 00:18:17.002 }, 00:18:17.002 "peer_address": { 00:18:17.002 "trtype": "TCP", 00:18:17.002 "adrfam": "IPv4", 00:18:17.002 "traddr": "10.0.0.1", 00:18:17.002 "trsvcid": "48426" 00:18:17.002 }, 00:18:17.002 "auth": { 00:18:17.002 "state": "completed", 00:18:17.002 "digest": "sha256", 00:18:17.002 "dhgroup": "ffdhe6144" 00:18:17.002 } 00:18:17.002 } 00:18:17.002 ]' 00:18:17.002 15:59:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:17.002 15:59:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:17.002 15:59:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:17.002 15:59:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:17.002 15:59:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:17.002 15:59:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:17.002 15:59:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:17.002 15:59:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:17.258 15:59:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:OTc5NDAxZjllMjViYjlkNjgyYWUxZGU3ZDRhZWY3ZDRiZTI2YjRjYWNlOWQwMDAySmlsEw==: --dhchap-ctrl-secret DHHC-1:01:MTBhMTg4MjQ2MzkxMzM5NDYzZTQxMzFmMGNmNzIyYTjcjymE: 00:18:17.823 15:59:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:17.823 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:17.823 15:59:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:17.823 15:59:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:17.823 15:59:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:17.823 15:59:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:17.823 15:59:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:17.823 15:59:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:17.823 15:59:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:18.080 15:59:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 
ffdhe6144 3 00:18:18.080 15:59:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:18.080 15:59:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:18.080 15:59:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:18:18.080 15:59:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:18.080 15:59:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:18.080 15:59:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:18:18.080 15:59:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:18.080 15:59:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:18.080 15:59:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:18.080 15:59:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:18.080 15:59:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:18.338 00:18:18.338 15:59:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:18.338 15:59:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:18.338 15:59:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:18.595 15:59:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:18.595 15:59:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:18.595 15:59:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:18.595 15:59:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:18.595 15:59:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:18.595 15:59:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:18.595 { 00:18:18.595 "cntlid": 39, 00:18:18.595 "qid": 0, 00:18:18.595 "state": "enabled", 00:18:18.595 "thread": "nvmf_tgt_poll_group_000", 00:18:18.595 "listen_address": { 00:18:18.595 "trtype": "TCP", 00:18:18.595 "adrfam": "IPv4", 00:18:18.595 "traddr": "10.0.0.2", 00:18:18.595 "trsvcid": "4420" 00:18:18.595 }, 00:18:18.595 "peer_address": { 00:18:18.595 "trtype": "TCP", 00:18:18.595 "adrfam": "IPv4", 00:18:18.595 "traddr": "10.0.0.1", 00:18:18.595 "trsvcid": "60090" 00:18:18.595 }, 00:18:18.595 "auth": { 00:18:18.595 "state": "completed", 00:18:18.595 "digest": "sha256", 00:18:18.595 "dhgroup": "ffdhe6144" 00:18:18.595 } 00:18:18.595 } 00:18:18.595 ]' 00:18:18.595 15:59:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:18.595 15:59:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:18.595 15:59:47 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:18.595 15:59:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:18.595 15:59:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:18.595 15:59:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:18.595 15:59:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:18.595 15:59:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:18.853 15:59:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:ZTQ5Zjk1NmU4OTU5ZjU2ZThhZTRiNTkxNzhkYTFkZDliOTA4ODk4ZjQ4ZmQzNGZlOTI2NDQ1MDgxNGQ3MWNlZA1oZjU=: 00:18:19.417 15:59:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:19.417 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:19.417 15:59:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:19.417 15:59:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:19.417 15:59:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:19.417 15:59:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:19.417 15:59:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:19.417 15:59:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:19.417 15:59:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:19.417 15:59:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:19.675 15:59:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 0 00:18:19.675 15:59:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:19.675 15:59:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:19.675 15:59:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:19.675 15:59:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:19.675 15:59:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:19.675 15:59:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:19.675 15:59:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:19.675 15:59:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:19.675 15:59:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:19.675 15:59:48 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:19.675 15:59:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:19.932 00:18:19.932 15:59:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:19.932 15:59:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:19.932 15:59:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:20.189 15:59:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:20.189 15:59:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:20.189 15:59:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:20.189 15:59:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:20.189 15:59:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:20.189 15:59:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:20.189 { 00:18:20.189 "cntlid": 41, 00:18:20.189 "qid": 0, 00:18:20.189 "state": "enabled", 00:18:20.189 "thread": "nvmf_tgt_poll_group_000", 00:18:20.189 "listen_address": { 00:18:20.189 "trtype": "TCP", 00:18:20.189 "adrfam": "IPv4", 00:18:20.189 "traddr": "10.0.0.2", 00:18:20.189 "trsvcid": "4420" 00:18:20.189 }, 00:18:20.189 "peer_address": { 00:18:20.189 "trtype": "TCP", 00:18:20.189 "adrfam": "IPv4", 00:18:20.189 "traddr": "10.0.0.1", 00:18:20.189 "trsvcid": "60108" 00:18:20.189 }, 00:18:20.189 "auth": { 00:18:20.189 "state": "completed", 00:18:20.189 "digest": "sha256", 00:18:20.189 "dhgroup": "ffdhe8192" 00:18:20.189 } 00:18:20.189 } 00:18:20.189 ]' 00:18:20.189 15:59:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:20.189 15:59:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:20.189 15:59:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:20.189 15:59:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:20.446 15:59:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:20.446 15:59:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:20.446 15:59:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:20.446 15:59:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:20.446 15:59:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret 
DHHC-1:00:MzFhZDVkZDAyNTMwZDY2OTFmZDY0NDM0YjIwMzlhYzc3NzA5Mzk3Nzk1MWFmYWFlNe7tPQ==: --dhchap-ctrl-secret DHHC-1:03:MDM5M2Q5Njg5MGYzNzdjMzFjNDIzZGNlNzBlNzM5ZjA1NWNmNGI2OTlhNzZmYjhlYzAzYjU1ZjM0ODQzMmViYtXVTjs=: 00:18:21.011 15:59:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:21.011 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:21.011 15:59:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:21.011 15:59:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:21.011 15:59:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:21.011 15:59:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:21.011 15:59:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:21.011 15:59:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:21.011 15:59:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:21.268 15:59:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 1 00:18:21.268 15:59:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:21.268 15:59:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:21.268 15:59:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:21.268 15:59:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:21.268 15:59:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:21.268 15:59:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:21.268 15:59:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:21.268 15:59:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:21.268 15:59:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:21.268 15:59:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:21.268 15:59:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:21.831 00:18:21.831 15:59:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:21.831 15:59:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:21.831 15:59:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:22.089 15:59:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:22.089 15:59:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:22.089 15:59:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:22.089 15:59:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:22.089 15:59:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:22.089 15:59:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:22.089 { 00:18:22.089 "cntlid": 43, 00:18:22.089 "qid": 0, 00:18:22.089 "state": "enabled", 00:18:22.089 "thread": "nvmf_tgt_poll_group_000", 00:18:22.089 "listen_address": { 00:18:22.089 "trtype": "TCP", 00:18:22.089 "adrfam": "IPv4", 00:18:22.089 "traddr": "10.0.0.2", 00:18:22.089 "trsvcid": "4420" 00:18:22.089 }, 00:18:22.089 "peer_address": { 00:18:22.089 "trtype": "TCP", 00:18:22.089 "adrfam": "IPv4", 00:18:22.089 "traddr": "10.0.0.1", 00:18:22.089 "trsvcid": "60124" 00:18:22.089 }, 00:18:22.089 "auth": { 00:18:22.089 "state": "completed", 00:18:22.089 "digest": "sha256", 00:18:22.089 "dhgroup": "ffdhe8192" 00:18:22.089 } 00:18:22.089 } 00:18:22.089 ]' 00:18:22.089 15:59:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:22.089 15:59:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:22.089 15:59:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:22.089 15:59:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:22.089 15:59:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:22.089 15:59:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:22.089 15:59:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:22.089 15:59:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:22.346 15:59:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:ZTI4N2Q4OTVhN2ExNzA5MzBkYjdhM2ZiYzM2ZTE2YWGKVPxo: --dhchap-ctrl-secret DHHC-1:02:MDE2YWU0NGFhZDllNGFlNGU5MzE1NDE1NzY4ZjBmNGZjZWMzYjc1YTE0MjhiMWYwT6tFBw==: 00:18:22.910 15:59:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:22.910 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:22.910 15:59:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:22.910 15:59:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:22.910 15:59:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:22.910 15:59:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:22.910 15:59:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in 
"${!keys[@]}" 00:18:22.910 15:59:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:22.910 15:59:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:22.910 15:59:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 2 00:18:22.910 15:59:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:22.910 15:59:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:22.910 15:59:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:22.910 15:59:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:22.910 15:59:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:22.910 15:59:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:22.910 15:59:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:22.910 15:59:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:22.910 15:59:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:22.911 15:59:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:22.911 15:59:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:23.475 00:18:23.475 15:59:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:23.475 15:59:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:23.475 15:59:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:23.732 15:59:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:23.732 15:59:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:23.732 15:59:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:23.732 15:59:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:23.732 15:59:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:23.732 15:59:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:23.732 { 00:18:23.732 "cntlid": 45, 00:18:23.732 "qid": 0, 00:18:23.732 "state": "enabled", 00:18:23.732 "thread": "nvmf_tgt_poll_group_000", 00:18:23.732 "listen_address": { 00:18:23.732 "trtype": "TCP", 00:18:23.732 "adrfam": "IPv4", 00:18:23.732 "traddr": "10.0.0.2", 00:18:23.732 "trsvcid": "4420" 
00:18:23.732 }, 00:18:23.732 "peer_address": { 00:18:23.732 "trtype": "TCP", 00:18:23.732 "adrfam": "IPv4", 00:18:23.732 "traddr": "10.0.0.1", 00:18:23.732 "trsvcid": "60148" 00:18:23.732 }, 00:18:23.732 "auth": { 00:18:23.732 "state": "completed", 00:18:23.732 "digest": "sha256", 00:18:23.732 "dhgroup": "ffdhe8192" 00:18:23.732 } 00:18:23.732 } 00:18:23.732 ]' 00:18:23.732 15:59:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:23.732 15:59:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:23.732 15:59:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:23.732 15:59:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:23.732 15:59:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:23.732 15:59:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:23.732 15:59:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:23.732 15:59:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:23.989 15:59:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:OTc5NDAxZjllMjViYjlkNjgyYWUxZGU3ZDRhZWY3ZDRiZTI2YjRjYWNlOWQwMDAySmlsEw==: --dhchap-ctrl-secret DHHC-1:01:MTBhMTg4MjQ2MzkxMzM5NDYzZTQxMzFmMGNmNzIyYTjcjymE: 00:18:24.619 15:59:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:24.619 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:24.619 15:59:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:24.619 15:59:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:24.619 15:59:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:24.619 15:59:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:24.619 15:59:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:24.619 15:59:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:24.619 15:59:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:24.619 15:59:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 3 00:18:24.619 15:59:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:24.619 15:59:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:24.619 15:59:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:24.619 15:59:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:24.619 15:59:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:24.619 15:59:53 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:18:24.619 15:59:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:24.619 15:59:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:24.619 15:59:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:24.619 15:59:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:24.619 15:59:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:25.182 00:18:25.182 15:59:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:25.182 15:59:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:25.182 15:59:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:25.439 15:59:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:25.439 15:59:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:25.439 15:59:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:25.439 15:59:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:25.439 15:59:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:25.439 15:59:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:25.439 { 00:18:25.439 "cntlid": 47, 00:18:25.439 "qid": 0, 00:18:25.439 "state": "enabled", 00:18:25.439 "thread": "nvmf_tgt_poll_group_000", 00:18:25.439 "listen_address": { 00:18:25.439 "trtype": "TCP", 00:18:25.439 "adrfam": "IPv4", 00:18:25.439 "traddr": "10.0.0.2", 00:18:25.439 "trsvcid": "4420" 00:18:25.439 }, 00:18:25.439 "peer_address": { 00:18:25.439 "trtype": "TCP", 00:18:25.439 "adrfam": "IPv4", 00:18:25.439 "traddr": "10.0.0.1", 00:18:25.439 "trsvcid": "60172" 00:18:25.439 }, 00:18:25.439 "auth": { 00:18:25.439 "state": "completed", 00:18:25.439 "digest": "sha256", 00:18:25.439 "dhgroup": "ffdhe8192" 00:18:25.439 } 00:18:25.439 } 00:18:25.439 ]' 00:18:25.439 15:59:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:25.439 15:59:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:25.439 15:59:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:25.439 15:59:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:25.439 15:59:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:25.439 15:59:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:25.439 15:59:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:25.439 
15:59:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:25.695 15:59:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:ZTQ5Zjk1NmU4OTU5ZjU2ZThhZTRiNTkxNzhkYTFkZDliOTA4ODk4ZjQ4ZmQzNGZlOTI2NDQ1MDgxNGQ3MWNlZA1oZjU=: 00:18:26.259 15:59:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:26.259 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:26.259 15:59:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:26.259 15:59:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:26.259 15:59:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:26.259 15:59:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:26.259 15:59:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:18:26.259 15:59:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:26.259 15:59:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:26.259 15:59:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:26.259 15:59:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:26.516 15:59:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 0 00:18:26.516 15:59:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:26.516 15:59:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:26.516 15:59:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:18:26.516 15:59:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:26.516 15:59:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:26.516 15:59:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:26.516 15:59:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:26.516 15:59:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:26.516 15:59:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:26.516 15:59:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:26.516 15:59:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:26.773 00:18:26.773 15:59:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:26.773 15:59:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:26.773 15:59:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:27.031 15:59:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:27.031 15:59:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:27.031 15:59:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:27.031 15:59:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:27.031 15:59:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:27.031 15:59:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:27.031 { 00:18:27.031 "cntlid": 49, 00:18:27.031 "qid": 0, 00:18:27.031 "state": "enabled", 00:18:27.031 "thread": "nvmf_tgt_poll_group_000", 00:18:27.031 "listen_address": { 00:18:27.031 "trtype": "TCP", 00:18:27.031 "adrfam": "IPv4", 00:18:27.031 "traddr": "10.0.0.2", 00:18:27.031 "trsvcid": "4420" 00:18:27.031 }, 00:18:27.031 "peer_address": { 00:18:27.031 "trtype": "TCP", 00:18:27.031 "adrfam": "IPv4", 00:18:27.031 "traddr": "10.0.0.1", 00:18:27.031 "trsvcid": "60196" 00:18:27.031 }, 00:18:27.031 "auth": { 00:18:27.031 "state": "completed", 00:18:27.032 "digest": "sha384", 00:18:27.032 "dhgroup": "null" 00:18:27.032 } 00:18:27.032 } 00:18:27.032 ]' 00:18:27.032 15:59:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:27.032 15:59:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:27.032 15:59:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:27.032 15:59:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:18:27.032 15:59:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:27.032 15:59:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:27.032 15:59:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:27.032 15:59:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:27.288 15:59:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:MzFhZDVkZDAyNTMwZDY2OTFmZDY0NDM0YjIwMzlhYzc3NzA5Mzk3Nzk1MWFmYWFlNe7tPQ==: --dhchap-ctrl-secret DHHC-1:03:MDM5M2Q5Njg5MGYzNzdjMzFjNDIzZGNlNzBlNzM5ZjA1NWNmNGI2OTlhNzZmYjhlYzAzYjU1ZjM0ODQzMmViYtXVTjs=: 00:18:27.853 15:59:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:27.853 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:27.853 15:59:56 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:27.853 15:59:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:27.853 15:59:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:27.853 15:59:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:27.853 15:59:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:27.853 15:59:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:27.853 15:59:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:28.110 15:59:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 1 00:18:28.110 15:59:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:28.110 15:59:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:28.110 15:59:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:18:28.110 15:59:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:28.110 15:59:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:28.110 15:59:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:28.110 15:59:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:28.110 15:59:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:28.110 15:59:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:28.110 15:59:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:28.110 15:59:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:28.367 00:18:28.367 15:59:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:28.367 15:59:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:28.367 15:59:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:28.367 15:59:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:28.367 15:59:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:28.367 15:59:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:28.367 15:59:57 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:18:28.367 15:59:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:28.367 15:59:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:28.367 { 00:18:28.367 "cntlid": 51, 00:18:28.367 "qid": 0, 00:18:28.367 "state": "enabled", 00:18:28.367 "thread": "nvmf_tgt_poll_group_000", 00:18:28.367 "listen_address": { 00:18:28.367 "trtype": "TCP", 00:18:28.367 "adrfam": "IPv4", 00:18:28.367 "traddr": "10.0.0.2", 00:18:28.367 "trsvcid": "4420" 00:18:28.367 }, 00:18:28.367 "peer_address": { 00:18:28.367 "trtype": "TCP", 00:18:28.367 "adrfam": "IPv4", 00:18:28.367 "traddr": "10.0.0.1", 00:18:28.367 "trsvcid": "59940" 00:18:28.367 }, 00:18:28.367 "auth": { 00:18:28.367 "state": "completed", 00:18:28.367 "digest": "sha384", 00:18:28.367 "dhgroup": "null" 00:18:28.367 } 00:18:28.367 } 00:18:28.367 ]' 00:18:28.367 15:59:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:28.624 15:59:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:28.624 15:59:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:28.624 15:59:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:18:28.624 15:59:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:28.624 15:59:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:28.624 15:59:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:28.624 15:59:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:28.624 15:59:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:ZTI4N2Q4OTVhN2ExNzA5MzBkYjdhM2ZiYzM2ZTE2YWGKVPxo: --dhchap-ctrl-secret DHHC-1:02:MDE2YWU0NGFhZDllNGFlNGU5MzE1NDE1NzY4ZjBmNGZjZWMzYjc1YTE0MjhiMWYwT6tFBw==: 00:18:29.188 15:59:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:29.188 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:29.188 15:59:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:29.188 15:59:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:29.188 15:59:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:29.445 15:59:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:29.445 15:59:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:29.445 15:59:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:29.445 15:59:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:29.445 15:59:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 2 00:18:29.445 15:59:58 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:29.445 15:59:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:29.445 15:59:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:18:29.445 15:59:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:29.445 15:59:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:29.445 15:59:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:29.445 15:59:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:29.445 15:59:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:29.445 15:59:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:29.445 15:59:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:29.445 15:59:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:29.702 00:18:29.702 15:59:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:29.702 15:59:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:29.702 15:59:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:29.959 15:59:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:29.959 15:59:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:29.959 15:59:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:29.959 15:59:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:29.959 15:59:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:29.959 15:59:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:29.959 { 00:18:29.959 "cntlid": 53, 00:18:29.959 "qid": 0, 00:18:29.959 "state": "enabled", 00:18:29.959 "thread": "nvmf_tgt_poll_group_000", 00:18:29.959 "listen_address": { 00:18:29.959 "trtype": "TCP", 00:18:29.959 "adrfam": "IPv4", 00:18:29.959 "traddr": "10.0.0.2", 00:18:29.959 "trsvcid": "4420" 00:18:29.959 }, 00:18:29.959 "peer_address": { 00:18:29.959 "trtype": "TCP", 00:18:29.959 "adrfam": "IPv4", 00:18:29.959 "traddr": "10.0.0.1", 00:18:29.959 "trsvcid": "59970" 00:18:29.959 }, 00:18:29.959 "auth": { 00:18:29.959 "state": "completed", 00:18:29.959 "digest": "sha384", 00:18:29.959 "dhgroup": "null" 00:18:29.959 } 00:18:29.959 } 00:18:29.959 ]' 00:18:29.959 15:59:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:29.959 15:59:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == 
\s\h\a\3\8\4 ]] 00:18:29.959 15:59:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:29.959 15:59:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:18:29.959 15:59:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:29.959 15:59:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:29.959 15:59:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:29.959 15:59:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:30.216 15:59:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:OTc5NDAxZjllMjViYjlkNjgyYWUxZGU3ZDRhZWY3ZDRiZTI2YjRjYWNlOWQwMDAySmlsEw==: --dhchap-ctrl-secret DHHC-1:01:MTBhMTg4MjQ2MzkxMzM5NDYzZTQxMzFmMGNmNzIyYTjcjymE: 00:18:30.780 15:59:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:30.780 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:30.780 15:59:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:30.780 15:59:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:30.780 15:59:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:30.780 15:59:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:30.780 15:59:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:30.780 15:59:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:30.780 15:59:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:31.037 15:59:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 3 00:18:31.037 15:59:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:31.037 15:59:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:31.037 15:59:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:18:31.037 15:59:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:31.037 15:59:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:31.037 15:59:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:18:31.037 15:59:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:31.037 15:59:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:31.037 15:59:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:31.037 15:59:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:31.037 15:59:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:31.294 00:18:31.294 16:00:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:31.294 16:00:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:31.294 16:00:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:31.294 16:00:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:31.294 16:00:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:31.294 16:00:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:31.294 16:00:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:31.294 16:00:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:31.294 16:00:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:31.294 { 00:18:31.294 "cntlid": 55, 00:18:31.294 "qid": 0, 00:18:31.294 "state": "enabled", 00:18:31.294 "thread": "nvmf_tgt_poll_group_000", 00:18:31.294 "listen_address": { 00:18:31.294 "trtype": "TCP", 00:18:31.294 "adrfam": "IPv4", 00:18:31.294 "traddr": "10.0.0.2", 00:18:31.294 "trsvcid": "4420" 00:18:31.294 }, 00:18:31.294 "peer_address": { 00:18:31.294 "trtype": "TCP", 00:18:31.294 "adrfam": "IPv4", 00:18:31.294 "traddr": "10.0.0.1", 00:18:31.294 "trsvcid": "59994" 00:18:31.294 }, 00:18:31.294 "auth": { 00:18:31.294 "state": "completed", 00:18:31.294 "digest": "sha384", 00:18:31.294 "dhgroup": "null" 00:18:31.294 } 00:18:31.294 } 00:18:31.294 ]' 00:18:31.294 16:00:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:31.549 16:00:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:31.549 16:00:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:31.549 16:00:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:18:31.549 16:00:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:31.549 16:00:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:31.549 16:00:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:31.549 16:00:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:31.805 16:00:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:ZTQ5Zjk1NmU4OTU5ZjU2ZThhZTRiNTkxNzhkYTFkZDliOTA4ODk4ZjQ4ZmQzNGZlOTI2NDQ1MDgxNGQ3MWNlZA1oZjU=: 00:18:32.367 16:00:01 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:32.367 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:32.367 16:00:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:32.367 16:00:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:32.367 16:00:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:32.367 16:00:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:32.367 16:00:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:32.367 16:00:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:32.367 16:00:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:32.367 16:00:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:32.367 16:00:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 0 00:18:32.367 16:00:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:32.367 16:00:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:32.367 16:00:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:18:32.367 16:00:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:32.367 16:00:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:32.367 16:00:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:32.367 16:00:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:32.367 16:00:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:32.367 16:00:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:32.367 16:00:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:32.367 16:00:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:32.622 00:18:32.622 16:00:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:32.622 16:00:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:32.622 16:00:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:32.877 16:00:01 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:32.877 16:00:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:32.877 16:00:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:32.877 16:00:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:32.877 16:00:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:32.877 16:00:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:32.877 { 00:18:32.877 "cntlid": 57, 00:18:32.877 "qid": 0, 00:18:32.877 "state": "enabled", 00:18:32.877 "thread": "nvmf_tgt_poll_group_000", 00:18:32.877 "listen_address": { 00:18:32.877 "trtype": "TCP", 00:18:32.877 "adrfam": "IPv4", 00:18:32.877 "traddr": "10.0.0.2", 00:18:32.877 "trsvcid": "4420" 00:18:32.877 }, 00:18:32.877 "peer_address": { 00:18:32.877 "trtype": "TCP", 00:18:32.877 "adrfam": "IPv4", 00:18:32.877 "traddr": "10.0.0.1", 00:18:32.877 "trsvcid": "60018" 00:18:32.877 }, 00:18:32.877 "auth": { 00:18:32.877 "state": "completed", 00:18:32.877 "digest": "sha384", 00:18:32.877 "dhgroup": "ffdhe2048" 00:18:32.877 } 00:18:32.877 } 00:18:32.877 ]' 00:18:32.877 16:00:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:32.877 16:00:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:32.877 16:00:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:32.877 16:00:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:32.877 16:00:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:33.132 16:00:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:33.132 16:00:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:33.132 16:00:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:33.132 16:00:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:MzFhZDVkZDAyNTMwZDY2OTFmZDY0NDM0YjIwMzlhYzc3NzA5Mzk3Nzk1MWFmYWFlNe7tPQ==: --dhchap-ctrl-secret DHHC-1:03:MDM5M2Q5Njg5MGYzNzdjMzFjNDIzZGNlNzBlNzM5ZjA1NWNmNGI2OTlhNzZmYjhlYzAzYjU1ZjM0ODQzMmViYtXVTjs=: 00:18:33.693 16:00:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:33.693 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:33.693 16:00:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:33.693 16:00:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:33.693 16:00:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:33.693 16:00:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:33.693 16:00:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:33.693 16:00:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options 
--dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:33.693 16:00:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:33.948 16:00:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 1 00:18:33.948 16:00:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:33.948 16:00:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:33.948 16:00:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:18:33.948 16:00:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:33.948 16:00:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:33.948 16:00:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:33.948 16:00:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:33.948 16:00:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:33.948 16:00:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:33.948 16:00:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:33.948 16:00:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:34.204 00:18:34.204 16:00:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:34.204 16:00:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:34.204 16:00:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:34.470 16:00:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:34.470 16:00:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:34.470 16:00:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:34.470 16:00:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:34.470 16:00:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:34.470 16:00:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:34.470 { 00:18:34.470 "cntlid": 59, 00:18:34.470 "qid": 0, 00:18:34.470 "state": "enabled", 00:18:34.470 "thread": "nvmf_tgt_poll_group_000", 00:18:34.470 "listen_address": { 00:18:34.470 "trtype": "TCP", 00:18:34.470 "adrfam": "IPv4", 00:18:34.470 "traddr": "10.0.0.2", 00:18:34.470 "trsvcid": "4420" 00:18:34.470 }, 00:18:34.470 "peer_address": { 00:18:34.470 "trtype": "TCP", 00:18:34.470 "adrfam": "IPv4", 00:18:34.470 
"traddr": "10.0.0.1", 00:18:34.470 "trsvcid": "60040" 00:18:34.470 }, 00:18:34.470 "auth": { 00:18:34.470 "state": "completed", 00:18:34.470 "digest": "sha384", 00:18:34.470 "dhgroup": "ffdhe2048" 00:18:34.470 } 00:18:34.470 } 00:18:34.470 ]' 00:18:34.470 16:00:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:34.470 16:00:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:34.470 16:00:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:34.470 16:00:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:34.470 16:00:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:34.470 16:00:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:34.470 16:00:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:34.470 16:00:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:34.732 16:00:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:ZTI4N2Q4OTVhN2ExNzA5MzBkYjdhM2ZiYzM2ZTE2YWGKVPxo: --dhchap-ctrl-secret DHHC-1:02:MDE2YWU0NGFhZDllNGFlNGU5MzE1NDE1NzY4ZjBmNGZjZWMzYjc1YTE0MjhiMWYwT6tFBw==: 00:18:35.295 16:00:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:35.295 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:35.295 16:00:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:35.295 16:00:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:35.295 16:00:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:35.295 16:00:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:35.295 16:00:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:35.295 16:00:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:35.295 16:00:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:35.551 16:00:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 2 00:18:35.551 16:00:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:35.551 16:00:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:35.551 16:00:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:18:35.551 16:00:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:35.551 16:00:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:35.551 16:00:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:35.551 16:00:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:35.551 16:00:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:35.551 16:00:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:35.551 16:00:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:35.551 16:00:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:35.807 00:18:35.807 16:00:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:35.807 16:00:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:35.807 16:00:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:35.807 16:00:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:35.807 16:00:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:35.807 16:00:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:35.807 16:00:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:35.807 16:00:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:35.807 16:00:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:35.807 { 00:18:35.807 "cntlid": 61, 00:18:35.807 "qid": 0, 00:18:35.807 "state": "enabled", 00:18:35.807 "thread": "nvmf_tgt_poll_group_000", 00:18:35.807 "listen_address": { 00:18:35.807 "trtype": "TCP", 00:18:35.807 "adrfam": "IPv4", 00:18:35.807 "traddr": "10.0.0.2", 00:18:35.807 "trsvcid": "4420" 00:18:35.807 }, 00:18:35.807 "peer_address": { 00:18:35.807 "trtype": "TCP", 00:18:35.807 "adrfam": "IPv4", 00:18:35.807 "traddr": "10.0.0.1", 00:18:35.807 "trsvcid": "60072" 00:18:35.807 }, 00:18:35.807 "auth": { 00:18:35.807 "state": "completed", 00:18:35.807 "digest": "sha384", 00:18:35.807 "dhgroup": "ffdhe2048" 00:18:35.807 } 00:18:35.807 } 00:18:35.807 ]' 00:18:36.069 16:00:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:36.069 16:00:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:36.069 16:00:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:36.069 16:00:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:36.069 16:00:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:36.069 16:00:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:36.069 16:00:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:36.069 16:00:04 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:36.326 16:00:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:OTc5NDAxZjllMjViYjlkNjgyYWUxZGU3ZDRhZWY3ZDRiZTI2YjRjYWNlOWQwMDAySmlsEw==: --dhchap-ctrl-secret DHHC-1:01:MTBhMTg4MjQ2MzkxMzM5NDYzZTQxMzFmMGNmNzIyYTjcjymE: 00:18:36.890 16:00:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:36.890 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:36.890 16:00:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:36.890 16:00:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:36.890 16:00:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:36.890 16:00:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:36.890 16:00:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:36.890 16:00:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:36.890 16:00:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:36.890 16:00:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 3 00:18:36.891 16:00:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:36.891 16:00:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:36.891 16:00:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:18:36.891 16:00:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:36.891 16:00:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:36.891 16:00:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:18:36.891 16:00:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:36.891 16:00:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:36.891 16:00:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:36.891 16:00:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:36.891 16:00:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:37.146 00:18:37.146 16:00:06 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:37.146 16:00:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:37.146 16:00:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:37.403 16:00:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:37.403 16:00:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:37.403 16:00:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:37.403 16:00:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:37.403 16:00:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:37.403 16:00:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:37.403 { 00:18:37.403 "cntlid": 63, 00:18:37.403 "qid": 0, 00:18:37.403 "state": "enabled", 00:18:37.403 "thread": "nvmf_tgt_poll_group_000", 00:18:37.403 "listen_address": { 00:18:37.403 "trtype": "TCP", 00:18:37.403 "adrfam": "IPv4", 00:18:37.403 "traddr": "10.0.0.2", 00:18:37.403 "trsvcid": "4420" 00:18:37.403 }, 00:18:37.403 "peer_address": { 00:18:37.403 "trtype": "TCP", 00:18:37.403 "adrfam": "IPv4", 00:18:37.403 "traddr": "10.0.0.1", 00:18:37.403 "trsvcid": "60098" 00:18:37.403 }, 00:18:37.403 "auth": { 00:18:37.403 "state": "completed", 00:18:37.403 "digest": "sha384", 00:18:37.403 "dhgroup": "ffdhe2048" 00:18:37.403 } 00:18:37.403 } 00:18:37.403 ]' 00:18:37.403 16:00:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:37.403 16:00:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:37.403 16:00:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:37.659 16:00:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:37.660 16:00:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:37.660 16:00:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:37.660 16:00:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:37.660 16:00:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:37.660 16:00:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:ZTQ5Zjk1NmU4OTU5ZjU2ZThhZTRiNTkxNzhkYTFkZDliOTA4ODk4ZjQ4ZmQzNGZlOTI2NDQ1MDgxNGQ3MWNlZA1oZjU=: 00:18:38.221 16:00:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:38.221 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:38.221 16:00:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:38.221 16:00:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:38.221 16:00:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
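Each iteration of target/auth.sh in this trace has the same shape, and the recurring [[ 0 == 0 ]] lines are the rpc_cmd wrapper asserting a zero exit status after each RPC. Condensed into standalone commands, one pass looks roughly like the sketch below. This is a minimal reconstruction, not the script itself: it assumes the target side answers on its default RPC socket, that key0..key3 (and the ckeyN controller counterparts, where one exists for that keyid) were registered earlier in the run, and it picks one concrete (digest, dhgroup, keyid) combination purely for illustration.

  digest=sha384; dhgroup=ffdhe3072; keyid=0
  hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562
  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

  # Pin the host to a single digest/dhgroup so the negotiated values are predictable.
  $rpc -s /var/tmp/host.sock bdev_nvme_set_options \
      --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

  # Allow the host NQN on the subsystem with the key under test.
  $rpc nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$hostnqn" \
      --dhchap-key "key$keyid" --dhchap-ctrlr-key "ckey$keyid"

  # Authenticate from the host side, then verify what the qpair actually negotiated.
  $rpc -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
      -a 10.0.0.2 -s 4420 -q "$hostnqn" -n nqn.2024-03.io.spdk:cnode0 \
      --dhchap-key "key$keyid" --dhchap-ctrlr-key "ckey$keyid"
  $rpc nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 \
      | jq -e --arg d "$digest" --arg g "$dhgroup" \
          '.[0].auth | .state == "completed" and .digest == $d and .dhgroup == $g'

  # Tear down so the next (digest, dhgroup, keyid) combination starts clean.
  $rpc -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
  $rpc nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 "$hostnqn"

The nvme connect / nvme disconnect pairs in the trace exercise the same DH-HMAC-CHAP handshake from the kernel initiator, passing the secrets inline as DHHC-1:xx:...: strings via --dhchap-secret and --dhchap-ctrl-secret rather than by key name.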
00:18:38.221 16:00:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:38.221 16:00:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:38.221 16:00:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:38.221 16:00:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:38.221 16:00:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:38.478 16:00:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 0 00:18:38.478 16:00:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:38.478 16:00:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:38.478 16:00:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:18:38.478 16:00:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:38.478 16:00:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:38.478 16:00:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:38.478 16:00:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:38.478 16:00:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:38.478 16:00:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:38.478 16:00:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:38.478 16:00:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:38.776 00:18:38.776 16:00:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:38.776 16:00:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:38.776 16:00:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:39.050 16:00:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:39.050 16:00:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:39.050 16:00:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:39.050 16:00:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:39.050 16:00:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:39.050 16:00:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:39.050 { 
00:18:39.050 "cntlid": 65, 00:18:39.050 "qid": 0, 00:18:39.050 "state": "enabled", 00:18:39.050 "thread": "nvmf_tgt_poll_group_000", 00:18:39.050 "listen_address": { 00:18:39.050 "trtype": "TCP", 00:18:39.050 "adrfam": "IPv4", 00:18:39.050 "traddr": "10.0.0.2", 00:18:39.050 "trsvcid": "4420" 00:18:39.050 }, 00:18:39.050 "peer_address": { 00:18:39.050 "trtype": "TCP", 00:18:39.050 "adrfam": "IPv4", 00:18:39.050 "traddr": "10.0.0.1", 00:18:39.050 "trsvcid": "39206" 00:18:39.050 }, 00:18:39.050 "auth": { 00:18:39.050 "state": "completed", 00:18:39.050 "digest": "sha384", 00:18:39.050 "dhgroup": "ffdhe3072" 00:18:39.050 } 00:18:39.050 } 00:18:39.050 ]' 00:18:39.050 16:00:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:39.050 16:00:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:39.050 16:00:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:39.050 16:00:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:39.050 16:00:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:39.050 16:00:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:39.050 16:00:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:39.050 16:00:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:39.306 16:00:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:MzFhZDVkZDAyNTMwZDY2OTFmZDY0NDM0YjIwMzlhYzc3NzA5Mzk3Nzk1MWFmYWFlNe7tPQ==: --dhchap-ctrl-secret DHHC-1:03:MDM5M2Q5Njg5MGYzNzdjMzFjNDIzZGNlNzBlNzM5ZjA1NWNmNGI2OTlhNzZmYjhlYzAzYjU1ZjM0ODQzMmViYtXVTjs=: 00:18:39.870 16:00:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:39.870 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:39.870 16:00:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:39.870 16:00:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:39.870 16:00:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:39.870 16:00:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:39.870 16:00:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:39.870 16:00:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:39.870 16:00:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:40.127 16:00:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 1 00:18:40.127 16:00:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:40.127 16:00:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- 
# digest=sha384 00:18:40.127 16:00:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:18:40.127 16:00:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:40.127 16:00:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:40.127 16:00:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:40.127 16:00:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:40.127 16:00:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:40.127 16:00:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:40.128 16:00:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:40.128 16:00:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:40.384 00:18:40.384 16:00:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:40.384 16:00:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:40.384 16:00:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:40.384 16:00:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:40.384 16:00:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:40.384 16:00:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:40.384 16:00:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:40.384 16:00:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:40.384 16:00:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:40.384 { 00:18:40.384 "cntlid": 67, 00:18:40.384 "qid": 0, 00:18:40.384 "state": "enabled", 00:18:40.384 "thread": "nvmf_tgt_poll_group_000", 00:18:40.384 "listen_address": { 00:18:40.384 "trtype": "TCP", 00:18:40.384 "adrfam": "IPv4", 00:18:40.385 "traddr": "10.0.0.2", 00:18:40.385 "trsvcid": "4420" 00:18:40.385 }, 00:18:40.385 "peer_address": { 00:18:40.385 "trtype": "TCP", 00:18:40.385 "adrfam": "IPv4", 00:18:40.385 "traddr": "10.0.0.1", 00:18:40.385 "trsvcid": "39240" 00:18:40.385 }, 00:18:40.385 "auth": { 00:18:40.385 "state": "completed", 00:18:40.385 "digest": "sha384", 00:18:40.385 "dhgroup": "ffdhe3072" 00:18:40.385 } 00:18:40.385 } 00:18:40.385 ]' 00:18:40.385 16:00:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:40.385 16:00:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:40.385 16:00:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:40.642 16:00:09 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:40.642 16:00:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:40.642 16:00:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:40.642 16:00:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:40.642 16:00:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:40.642 16:00:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:ZTI4N2Q4OTVhN2ExNzA5MzBkYjdhM2ZiYzM2ZTE2YWGKVPxo: --dhchap-ctrl-secret DHHC-1:02:MDE2YWU0NGFhZDllNGFlNGU5MzE1NDE1NzY4ZjBmNGZjZWMzYjc1YTE0MjhiMWYwT6tFBw==: 00:18:41.205 16:00:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:41.205 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:41.205 16:00:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:41.205 16:00:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:41.205 16:00:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:41.205 16:00:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:41.205 16:00:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:41.205 16:00:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:41.205 16:00:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:41.461 16:00:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 2 00:18:41.461 16:00:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:41.461 16:00:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:41.461 16:00:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:18:41.461 16:00:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:41.461 16:00:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:41.461 16:00:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:41.461 16:00:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:41.461 16:00:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:41.461 16:00:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:41.461 16:00:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:41.461 16:00:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:41.718 00:18:41.718 16:00:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:41.718 16:00:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:41.718 16:00:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:41.975 16:00:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:41.975 16:00:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:41.975 16:00:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:41.975 16:00:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:41.975 16:00:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:41.975 16:00:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:41.975 { 00:18:41.975 "cntlid": 69, 00:18:41.975 "qid": 0, 00:18:41.975 "state": "enabled", 00:18:41.975 "thread": "nvmf_tgt_poll_group_000", 00:18:41.975 "listen_address": { 00:18:41.975 "trtype": "TCP", 00:18:41.975 "adrfam": "IPv4", 00:18:41.975 "traddr": "10.0.0.2", 00:18:41.975 "trsvcid": "4420" 00:18:41.975 }, 00:18:41.975 "peer_address": { 00:18:41.975 "trtype": "TCP", 00:18:41.975 "adrfam": "IPv4", 00:18:41.975 "traddr": "10.0.0.1", 00:18:41.975 "trsvcid": "39264" 00:18:41.975 }, 00:18:41.975 "auth": { 00:18:41.975 "state": "completed", 00:18:41.975 "digest": "sha384", 00:18:41.975 "dhgroup": "ffdhe3072" 00:18:41.975 } 00:18:41.975 } 00:18:41.975 ]' 00:18:41.975 16:00:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:41.975 16:00:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:41.975 16:00:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:41.975 16:00:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:41.975 16:00:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:41.975 16:00:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:41.975 16:00:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:41.975 16:00:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:42.238 16:00:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:OTc5NDAxZjllMjViYjlkNjgyYWUxZGU3ZDRhZWY3ZDRiZTI2YjRjYWNlOWQwMDAySmlsEw==: --dhchap-ctrl-secret 
DHHC-1:01:MTBhMTg4MjQ2MzkxMzM5NDYzZTQxMzFmMGNmNzIyYTjcjymE: 00:18:42.803 16:00:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:42.803 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:42.803 16:00:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:42.803 16:00:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:42.803 16:00:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:42.803 16:00:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:42.803 16:00:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:42.803 16:00:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:42.803 16:00:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:43.060 16:00:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 3 00:18:43.060 16:00:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:43.060 16:00:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:43.060 16:00:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:18:43.060 16:00:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:43.060 16:00:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:43.060 16:00:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:18:43.060 16:00:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:43.060 16:00:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:43.060 16:00:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:43.060 16:00:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:43.060 16:00:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:43.316 00:18:43.316 16:00:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:43.316 16:00:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:43.316 16:00:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:43.316 16:00:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:43.316 16:00:12 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:43.316 16:00:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:43.316 16:00:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:43.316 16:00:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:43.316 16:00:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:43.316 { 00:18:43.316 "cntlid": 71, 00:18:43.316 "qid": 0, 00:18:43.316 "state": "enabled", 00:18:43.316 "thread": "nvmf_tgt_poll_group_000", 00:18:43.316 "listen_address": { 00:18:43.316 "trtype": "TCP", 00:18:43.316 "adrfam": "IPv4", 00:18:43.316 "traddr": "10.0.0.2", 00:18:43.316 "trsvcid": "4420" 00:18:43.316 }, 00:18:43.316 "peer_address": { 00:18:43.316 "trtype": "TCP", 00:18:43.316 "adrfam": "IPv4", 00:18:43.316 "traddr": "10.0.0.1", 00:18:43.316 "trsvcid": "39278" 00:18:43.316 }, 00:18:43.316 "auth": { 00:18:43.316 "state": "completed", 00:18:43.316 "digest": "sha384", 00:18:43.316 "dhgroup": "ffdhe3072" 00:18:43.316 } 00:18:43.316 } 00:18:43.316 ]' 00:18:43.316 16:00:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:43.572 16:00:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:43.572 16:00:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:43.572 16:00:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:43.572 16:00:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:43.572 16:00:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:43.572 16:00:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:43.572 16:00:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:43.828 16:00:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:ZTQ5Zjk1NmU4OTU5ZjU2ZThhZTRiNTkxNzhkYTFkZDliOTA4ODk4ZjQ4ZmQzNGZlOTI2NDQ1MDgxNGQ3MWNlZA1oZjU=: 00:18:44.391 16:00:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:44.391 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:44.391 16:00:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:44.391 16:00:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:44.391 16:00:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:44.391 16:00:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:44.391 16:00:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:44.391 16:00:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:44.391 16:00:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:44.391 16:00:13 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:44.391 16:00:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 0 00:18:44.391 16:00:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:44.391 16:00:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:44.391 16:00:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:18:44.391 16:00:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:44.391 16:00:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:44.391 16:00:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:44.391 16:00:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:44.391 16:00:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:44.391 16:00:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:44.391 16:00:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:44.391 16:00:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:44.648 00:18:44.648 16:00:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:44.648 16:00:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:44.648 16:00:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:44.906 16:00:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:44.906 16:00:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:44.906 16:00:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:44.906 16:00:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:44.906 16:00:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:44.906 16:00:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:44.906 { 00:18:44.906 "cntlid": 73, 00:18:44.906 "qid": 0, 00:18:44.906 "state": "enabled", 00:18:44.906 "thread": "nvmf_tgt_poll_group_000", 00:18:44.906 "listen_address": { 00:18:44.906 "trtype": "TCP", 00:18:44.906 "adrfam": "IPv4", 00:18:44.906 "traddr": "10.0.0.2", 00:18:44.906 "trsvcid": "4420" 00:18:44.906 }, 00:18:44.906 "peer_address": { 00:18:44.906 "trtype": "TCP", 00:18:44.906 "adrfam": "IPv4", 00:18:44.906 "traddr": "10.0.0.1", 00:18:44.906 "trsvcid": "39300" 00:18:44.906 }, 00:18:44.906 "auth": { 00:18:44.906 
"state": "completed", 00:18:44.906 "digest": "sha384", 00:18:44.906 "dhgroup": "ffdhe4096" 00:18:44.906 } 00:18:44.906 } 00:18:44.906 ]' 00:18:44.906 16:00:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:44.906 16:00:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:44.906 16:00:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:44.906 16:00:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:44.906 16:00:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:45.162 16:00:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:45.162 16:00:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:45.162 16:00:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:45.162 16:00:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:MzFhZDVkZDAyNTMwZDY2OTFmZDY0NDM0YjIwMzlhYzc3NzA5Mzk3Nzk1MWFmYWFlNe7tPQ==: --dhchap-ctrl-secret DHHC-1:03:MDM5M2Q5Njg5MGYzNzdjMzFjNDIzZGNlNzBlNzM5ZjA1NWNmNGI2OTlhNzZmYjhlYzAzYjU1ZjM0ODQzMmViYtXVTjs=: 00:18:45.724 16:00:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:45.724 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:45.724 16:00:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:45.724 16:00:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:45.724 16:00:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:45.724 16:00:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:45.724 16:00:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:45.724 16:00:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:45.724 16:00:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:45.981 16:00:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 1 00:18:45.981 16:00:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:45.981 16:00:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:45.981 16:00:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:18:45.981 16:00:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:45.981 16:00:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:45.981 16:00:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 
--dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:45.981 16:00:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:45.981 16:00:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:45.981 16:00:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:45.981 16:00:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:45.981 16:00:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:46.238 00:18:46.238 16:00:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:46.238 16:00:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:46.238 16:00:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:46.495 16:00:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:46.495 16:00:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:46.495 16:00:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:46.495 16:00:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:46.495 16:00:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:46.495 16:00:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:46.495 { 00:18:46.495 "cntlid": 75, 00:18:46.495 "qid": 0, 00:18:46.495 "state": "enabled", 00:18:46.495 "thread": "nvmf_tgt_poll_group_000", 00:18:46.495 "listen_address": { 00:18:46.495 "trtype": "TCP", 00:18:46.495 "adrfam": "IPv4", 00:18:46.495 "traddr": "10.0.0.2", 00:18:46.495 "trsvcid": "4420" 00:18:46.495 }, 00:18:46.495 "peer_address": { 00:18:46.495 "trtype": "TCP", 00:18:46.495 "adrfam": "IPv4", 00:18:46.495 "traddr": "10.0.0.1", 00:18:46.495 "trsvcid": "39320" 00:18:46.495 }, 00:18:46.495 "auth": { 00:18:46.495 "state": "completed", 00:18:46.495 "digest": "sha384", 00:18:46.495 "dhgroup": "ffdhe4096" 00:18:46.495 } 00:18:46.495 } 00:18:46.495 ]' 00:18:46.495 16:00:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:46.495 16:00:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:46.495 16:00:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:46.495 16:00:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:46.495 16:00:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:46.495 16:00:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:46.495 16:00:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:46.495 16:00:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:46.752 16:00:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:ZTI4N2Q4OTVhN2ExNzA5MzBkYjdhM2ZiYzM2ZTE2YWGKVPxo: --dhchap-ctrl-secret DHHC-1:02:MDE2YWU0NGFhZDllNGFlNGU5MzE1NDE1NzY4ZjBmNGZjZWMzYjc1YTE0MjhiMWYwT6tFBw==: 00:18:47.314 16:00:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:47.314 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:47.314 16:00:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:47.314 16:00:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:47.314 16:00:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:47.314 16:00:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:47.314 16:00:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:47.314 16:00:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:47.314 16:00:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:47.570 16:00:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 2 00:18:47.570 16:00:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:47.570 16:00:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:47.570 16:00:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:18:47.570 16:00:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:47.570 16:00:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:47.570 16:00:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:47.570 16:00:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:47.570 16:00:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:47.570 16:00:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:47.570 16:00:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:47.570 16:00:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 
--dhchap-ctrlr-key ckey2 00:18:47.825 00:18:47.826 16:00:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:47.826 16:00:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:47.826 16:00:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:47.826 16:00:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:47.826 16:00:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:47.826 16:00:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:47.826 16:00:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:48.082 16:00:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:48.082 16:00:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:48.082 { 00:18:48.082 "cntlid": 77, 00:18:48.082 "qid": 0, 00:18:48.082 "state": "enabled", 00:18:48.082 "thread": "nvmf_tgt_poll_group_000", 00:18:48.082 "listen_address": { 00:18:48.082 "trtype": "TCP", 00:18:48.082 "adrfam": "IPv4", 00:18:48.082 "traddr": "10.0.0.2", 00:18:48.082 "trsvcid": "4420" 00:18:48.082 }, 00:18:48.082 "peer_address": { 00:18:48.082 "trtype": "TCP", 00:18:48.082 "adrfam": "IPv4", 00:18:48.082 "traddr": "10.0.0.1", 00:18:48.082 "trsvcid": "39338" 00:18:48.082 }, 00:18:48.082 "auth": { 00:18:48.082 "state": "completed", 00:18:48.082 "digest": "sha384", 00:18:48.082 "dhgroup": "ffdhe4096" 00:18:48.082 } 00:18:48.082 } 00:18:48.082 ]' 00:18:48.082 16:00:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:48.082 16:00:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:48.082 16:00:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:48.082 16:00:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:48.082 16:00:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:48.082 16:00:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:48.082 16:00:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:48.082 16:00:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:48.338 16:00:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:OTc5NDAxZjllMjViYjlkNjgyYWUxZGU3ZDRhZWY3ZDRiZTI2YjRjYWNlOWQwMDAySmlsEw==: --dhchap-ctrl-secret DHHC-1:01:MTBhMTg4MjQ2MzkxMzM5NDYzZTQxMzFmMGNmNzIyYTjcjymE: 00:18:48.900 16:00:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:48.900 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:48.900 16:00:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:48.900 16:00:17 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:18:48.900 16:00:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:48.900 16:00:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:48.900 16:00:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:48.900 16:00:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:48.900 16:00:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:48.900 16:00:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 3 00:18:48.900 16:00:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:48.900 16:00:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:48.900 16:00:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:18:48.900 16:00:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:48.900 16:00:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:48.900 16:00:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:18:48.900 16:00:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:48.900 16:00:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:48.900 16:00:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:48.900 16:00:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:48.900 16:00:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:49.156 00:18:49.156 16:00:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:49.156 16:00:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:49.156 16:00:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:49.412 16:00:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:49.412 16:00:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:49.412 16:00:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:49.412 16:00:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:49.412 16:00:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:49.412 16:00:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:49.412 { 00:18:49.412 "cntlid": 79, 00:18:49.412 "qid": 
0, 00:18:49.412 "state": "enabled", 00:18:49.412 "thread": "nvmf_tgt_poll_group_000", 00:18:49.412 "listen_address": { 00:18:49.412 "trtype": "TCP", 00:18:49.412 "adrfam": "IPv4", 00:18:49.412 "traddr": "10.0.0.2", 00:18:49.412 "trsvcid": "4420" 00:18:49.412 }, 00:18:49.412 "peer_address": { 00:18:49.412 "trtype": "TCP", 00:18:49.412 "adrfam": "IPv4", 00:18:49.412 "traddr": "10.0.0.1", 00:18:49.412 "trsvcid": "33714" 00:18:49.412 }, 00:18:49.412 "auth": { 00:18:49.412 "state": "completed", 00:18:49.412 "digest": "sha384", 00:18:49.412 "dhgroup": "ffdhe4096" 00:18:49.412 } 00:18:49.412 } 00:18:49.412 ]' 00:18:49.412 16:00:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:49.412 16:00:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:49.412 16:00:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:49.412 16:00:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:49.412 16:00:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:49.668 16:00:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:49.668 16:00:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:49.668 16:00:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:49.668 16:00:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:ZTQ5Zjk1NmU4OTU5ZjU2ZThhZTRiNTkxNzhkYTFkZDliOTA4ODk4ZjQ4ZmQzNGZlOTI2NDQ1MDgxNGQ3MWNlZA1oZjU=: 00:18:50.232 16:00:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:50.232 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:50.232 16:00:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:50.232 16:00:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:50.232 16:00:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:50.232 16:00:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:50.232 16:00:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:50.232 16:00:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:50.233 16:00:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:50.233 16:00:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:50.489 16:00:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 0 00:18:50.489 16:00:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:50.489 16:00:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:50.489 16:00:19 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:18:50.489 16:00:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:50.489 16:00:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:50.489 16:00:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:50.489 16:00:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:50.489 16:00:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:50.489 16:00:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:50.489 16:00:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:50.489 16:00:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:50.746 00:18:50.746 16:00:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:50.746 16:00:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:50.746 16:00:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:51.002 16:00:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:51.002 16:00:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:51.002 16:00:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:51.002 16:00:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:51.002 16:00:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:51.002 16:00:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:51.002 { 00:18:51.002 "cntlid": 81, 00:18:51.002 "qid": 0, 00:18:51.002 "state": "enabled", 00:18:51.002 "thread": "nvmf_tgt_poll_group_000", 00:18:51.002 "listen_address": { 00:18:51.002 "trtype": "TCP", 00:18:51.002 "adrfam": "IPv4", 00:18:51.002 "traddr": "10.0.0.2", 00:18:51.002 "trsvcid": "4420" 00:18:51.002 }, 00:18:51.002 "peer_address": { 00:18:51.002 "trtype": "TCP", 00:18:51.002 "adrfam": "IPv4", 00:18:51.002 "traddr": "10.0.0.1", 00:18:51.002 "trsvcid": "33736" 00:18:51.002 }, 00:18:51.002 "auth": { 00:18:51.002 "state": "completed", 00:18:51.002 "digest": "sha384", 00:18:51.002 "dhgroup": "ffdhe6144" 00:18:51.002 } 00:18:51.002 } 00:18:51.002 ]' 00:18:51.002 16:00:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:51.002 16:00:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:51.002 16:00:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:51.002 16:00:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ 
ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:51.002 16:00:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:51.258 16:00:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:51.258 16:00:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:51.258 16:00:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:51.258 16:00:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:MzFhZDVkZDAyNTMwZDY2OTFmZDY0NDM0YjIwMzlhYzc3NzA5Mzk3Nzk1MWFmYWFlNe7tPQ==: --dhchap-ctrl-secret DHHC-1:03:MDM5M2Q5Njg5MGYzNzdjMzFjNDIzZGNlNzBlNzM5ZjA1NWNmNGI2OTlhNzZmYjhlYzAzYjU1ZjM0ODQzMmViYtXVTjs=: 00:18:51.820 16:00:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:51.820 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:51.820 16:00:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:51.820 16:00:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:51.820 16:00:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:51.820 16:00:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:51.820 16:00:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:51.820 16:00:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:51.820 16:00:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:52.076 16:00:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 1 00:18:52.076 16:00:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:52.076 16:00:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:52.076 16:00:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:18:52.076 16:00:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:52.076 16:00:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:52.076 16:00:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:52.076 16:00:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:52.076 16:00:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:52.076 16:00:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:52.076 16:00:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:52.076 16:00:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:52.333 00:18:52.333 16:00:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:52.333 16:00:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:52.333 16:00:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:52.590 16:00:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:52.590 16:00:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:52.590 16:00:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:52.590 16:00:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:52.590 16:00:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:52.590 16:00:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:52.590 { 00:18:52.590 "cntlid": 83, 00:18:52.590 "qid": 0, 00:18:52.590 "state": "enabled", 00:18:52.590 "thread": "nvmf_tgt_poll_group_000", 00:18:52.590 "listen_address": { 00:18:52.590 "trtype": "TCP", 00:18:52.590 "adrfam": "IPv4", 00:18:52.590 "traddr": "10.0.0.2", 00:18:52.590 "trsvcid": "4420" 00:18:52.590 }, 00:18:52.590 "peer_address": { 00:18:52.590 "trtype": "TCP", 00:18:52.590 "adrfam": "IPv4", 00:18:52.590 "traddr": "10.0.0.1", 00:18:52.590 "trsvcid": "33756" 00:18:52.590 }, 00:18:52.590 "auth": { 00:18:52.590 "state": "completed", 00:18:52.590 "digest": "sha384", 00:18:52.590 "dhgroup": "ffdhe6144" 00:18:52.590 } 00:18:52.590 } 00:18:52.590 ]' 00:18:52.590 16:00:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:52.590 16:00:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:52.590 16:00:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:52.590 16:00:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:52.590 16:00:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:52.892 16:00:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:52.892 16:00:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:52.892 16:00:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:52.892 16:00:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:ZTI4N2Q4OTVhN2ExNzA5MzBkYjdhM2ZiYzM2ZTE2YWGKVPxo: --dhchap-ctrl-secret 
DHHC-1:02:MDE2YWU0NGFhZDllNGFlNGU5MzE1NDE1NzY4ZjBmNGZjZWMzYjc1YTE0MjhiMWYwT6tFBw==: 00:18:53.455 16:00:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:53.455 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:53.455 16:00:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:53.455 16:00:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:53.455 16:00:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:53.455 16:00:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:53.455 16:00:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:53.455 16:00:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:53.455 16:00:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:53.712 16:00:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 2 00:18:53.712 16:00:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:53.712 16:00:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:53.712 16:00:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:18:53.712 16:00:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:53.712 16:00:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:53.712 16:00:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:53.712 16:00:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:53.712 16:00:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:53.712 16:00:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:53.712 16:00:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:53.712 16:00:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:53.979 00:18:53.979 16:00:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:53.980 16:00:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:53.980 16:00:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:54.244 16:00:23 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:54.244 16:00:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:54.244 16:00:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:54.244 16:00:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:54.244 16:00:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:54.244 16:00:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:54.244 { 00:18:54.244 "cntlid": 85, 00:18:54.244 "qid": 0, 00:18:54.244 "state": "enabled", 00:18:54.244 "thread": "nvmf_tgt_poll_group_000", 00:18:54.244 "listen_address": { 00:18:54.244 "trtype": "TCP", 00:18:54.244 "adrfam": "IPv4", 00:18:54.244 "traddr": "10.0.0.2", 00:18:54.244 "trsvcid": "4420" 00:18:54.244 }, 00:18:54.244 "peer_address": { 00:18:54.244 "trtype": "TCP", 00:18:54.244 "adrfam": "IPv4", 00:18:54.244 "traddr": "10.0.0.1", 00:18:54.244 "trsvcid": "33776" 00:18:54.244 }, 00:18:54.244 "auth": { 00:18:54.244 "state": "completed", 00:18:54.244 "digest": "sha384", 00:18:54.244 "dhgroup": "ffdhe6144" 00:18:54.244 } 00:18:54.244 } 00:18:54.244 ]' 00:18:54.244 16:00:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:54.244 16:00:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:54.244 16:00:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:54.244 16:00:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:54.244 16:00:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:54.244 16:00:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:54.244 16:00:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:54.244 16:00:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:54.500 16:00:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:OTc5NDAxZjllMjViYjlkNjgyYWUxZGU3ZDRhZWY3ZDRiZTI2YjRjYWNlOWQwMDAySmlsEw==: --dhchap-ctrl-secret DHHC-1:01:MTBhMTg4MjQ2MzkxMzM5NDYzZTQxMzFmMGNmNzIyYTjcjymE: 00:18:55.062 16:00:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:55.062 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:55.062 16:00:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:55.062 16:00:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:55.062 16:00:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:55.062 16:00:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:55.062 16:00:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:55.062 16:00:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 
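[Editor's note] The records above and below repeat one connect_authenticate cycle per key: bdev_nvme_set_options pins the host-side bdev_nvme layer to a single digest/dhgroup pair, nvmf_subsystem_add_host registers the host NQN on the target with a DH-HMAC-CHAP key (plus a controller key for bidirectional auth), bdev_nvme_attach_controller performs the authenticated connect, and the qpair is inspected before the controller is detached again. A minimal sketch of one cycle, assuming the key names (key2/ckey2) were registered with the keyring earlier in the run and that both RPC sockets are live:

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562
  subnqn=nqn.2024-03.io.spdk:cnode0

  # Host side (-s /var/tmp/host.sock): restrict negotiation to one digest/dhgroup.
  $rpc -s /var/tmp/host.sock bdev_nvme_set_options \
      --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144

  # Target side (default socket): allow the host and bind its DH-HMAC-CHAP keys.
  $rpc nvmf_subsystem_add_host "$subnqn" "$hostnqn" \
      --dhchap-key key2 --dhchap-ctrlr-key ckey2

  # Authenticated connect; succeeds only if both sides hold matching keys.
  $rpc -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
      -a 10.0.0.2 -s 4420 -q "$hostnqn" -n "$subnqn" \
      --dhchap-key key2 --dhchap-ctrlr-key ckey2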
00:18:55.062 16:00:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:55.319 16:00:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 3 00:18:55.319 16:00:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:55.319 16:00:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:55.319 16:00:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:18:55.319 16:00:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:55.319 16:00:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:55.319 16:00:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:18:55.319 16:00:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:55.319 16:00:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:55.319 16:00:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:55.319 16:00:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:55.319 16:00:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:55.576 00:18:55.576 16:00:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:55.576 16:00:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:55.576 16:00:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:55.833 16:00:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:55.833 16:00:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:55.833 16:00:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:55.833 16:00:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:55.833 16:00:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:55.833 16:00:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:55.833 { 00:18:55.833 "cntlid": 87, 00:18:55.833 "qid": 0, 00:18:55.833 "state": "enabled", 00:18:55.833 "thread": "nvmf_tgt_poll_group_000", 00:18:55.833 "listen_address": { 00:18:55.833 "trtype": "TCP", 00:18:55.833 "adrfam": "IPv4", 00:18:55.833 "traddr": "10.0.0.2", 00:18:55.833 "trsvcid": "4420" 00:18:55.833 }, 00:18:55.833 "peer_address": { 00:18:55.833 "trtype": "TCP", 00:18:55.833 "adrfam": "IPv4", 00:18:55.833 "traddr": "10.0.0.1", 00:18:55.833 "trsvcid": "33802" 00:18:55.833 }, 00:18:55.833 "auth": { 00:18:55.833 "state": "completed", 
00:18:55.833 "digest": "sha384", 00:18:55.833 "dhgroup": "ffdhe6144" 00:18:55.833 } 00:18:55.833 } 00:18:55.833 ]' 00:18:55.833 16:00:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:55.833 16:00:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:55.833 16:00:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:55.833 16:00:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:55.833 16:00:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:55.833 16:00:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:55.833 16:00:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:55.833 16:00:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:56.093 16:00:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:ZTQ5Zjk1NmU4OTU5ZjU2ZThhZTRiNTkxNzhkYTFkZDliOTA4ODk4ZjQ4ZmQzNGZlOTI2NDQ1MDgxNGQ3MWNlZA1oZjU=: 00:18:56.655 16:00:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:56.655 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:56.655 16:00:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:56.655 16:00:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:56.655 16:00:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:56.655 16:00:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:56.655 16:00:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:56.655 16:00:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:56.655 16:00:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:56.655 16:00:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:56.912 16:00:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 0 00:18:56.912 16:00:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:56.912 16:00:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:56.912 16:00:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:56.912 16:00:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:56.912 16:00:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:56.912 16:00:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 
--dhchap-ctrlr-key ckey0 00:18:56.912 16:00:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:56.912 16:00:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:56.912 16:00:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:56.912 16:00:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:56.912 16:00:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:57.476 00:18:57.476 16:00:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:57.476 16:00:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:57.476 16:00:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:57.476 16:00:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:57.476 16:00:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:57.476 16:00:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:57.476 16:00:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:57.476 16:00:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:57.476 16:00:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:57.476 { 00:18:57.476 "cntlid": 89, 00:18:57.476 "qid": 0, 00:18:57.476 "state": "enabled", 00:18:57.476 "thread": "nvmf_tgt_poll_group_000", 00:18:57.476 "listen_address": { 00:18:57.476 "trtype": "TCP", 00:18:57.476 "adrfam": "IPv4", 00:18:57.476 "traddr": "10.0.0.2", 00:18:57.476 "trsvcid": "4420" 00:18:57.476 }, 00:18:57.476 "peer_address": { 00:18:57.476 "trtype": "TCP", 00:18:57.476 "adrfam": "IPv4", 00:18:57.476 "traddr": "10.0.0.1", 00:18:57.476 "trsvcid": "33826" 00:18:57.476 }, 00:18:57.476 "auth": { 00:18:57.476 "state": "completed", 00:18:57.476 "digest": "sha384", 00:18:57.476 "dhgroup": "ffdhe8192" 00:18:57.476 } 00:18:57.476 } 00:18:57.476 ]' 00:18:57.476 16:00:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:57.476 16:00:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:57.476 16:00:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:57.476 16:00:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:57.476 16:00:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:57.733 16:00:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:57.733 16:00:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:57.733 16:00:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
-s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:57.733 16:00:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:MzFhZDVkZDAyNTMwZDY2OTFmZDY0NDM0YjIwMzlhYzc3NzA5Mzk3Nzk1MWFmYWFlNe7tPQ==: --dhchap-ctrl-secret DHHC-1:03:MDM5M2Q5Njg5MGYzNzdjMzFjNDIzZGNlNzBlNzM5ZjA1NWNmNGI2OTlhNzZmYjhlYzAzYjU1ZjM0ODQzMmViYtXVTjs=: 00:18:58.296 16:00:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:58.296 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:58.296 16:00:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:58.296 16:00:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:58.296 16:00:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:58.296 16:00:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:58.297 16:00:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:58.297 16:00:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:58.297 16:00:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:58.552 16:00:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 1 00:18:58.552 16:00:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:58.552 16:00:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:58.552 16:00:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:58.552 16:00:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:58.553 16:00:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:58.553 16:00:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:58.553 16:00:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:58.553 16:00:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:58.553 16:00:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:58.553 16:00:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:58.553 16:00:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 
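[Editor's note] After each attach, the script asserts that the negotiated parameters actually show up on the live qpair: nvmf_subsystem_get_qpairs reports an "auth" object per qpair, and jq pulls out .digest, .dhgroup and .state, which at this point in the run must read sha384, ffdhe8192 and "completed" (the terminal state of a successful DH-HMAC-CHAP exchange). A sketch of that check, assuming the target app listens on rpc.py's default socket and reusing the paths from the sketch above:

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  subnqn=nqn.2024-03.io.spdk:cnode0

  # The controller created by the host must exist and be named nvme0.
  name=$($rpc -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name')
  [[ $name == nvme0 ]]

  # The target-side qpair must report the expected auth parameters.
  qpairs=$($rpc nvmf_subsystem_get_qpairs "$subnqn")
  [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha384 ]]
  [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe8192 ]]
  [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]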
00:18:59.114 00:18:59.114 16:00:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:59.114 16:00:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:59.114 16:00:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:59.370 16:00:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:59.370 16:00:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:59.370 16:00:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:59.370 16:00:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:59.370 16:00:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:59.370 16:00:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:59.370 { 00:18:59.370 "cntlid": 91, 00:18:59.370 "qid": 0, 00:18:59.370 "state": "enabled", 00:18:59.370 "thread": "nvmf_tgt_poll_group_000", 00:18:59.370 "listen_address": { 00:18:59.370 "trtype": "TCP", 00:18:59.370 "adrfam": "IPv4", 00:18:59.370 "traddr": "10.0.0.2", 00:18:59.370 "trsvcid": "4420" 00:18:59.370 }, 00:18:59.370 "peer_address": { 00:18:59.370 "trtype": "TCP", 00:18:59.370 "adrfam": "IPv4", 00:18:59.370 "traddr": "10.0.0.1", 00:18:59.370 "trsvcid": "34844" 00:18:59.370 }, 00:18:59.370 "auth": { 00:18:59.370 "state": "completed", 00:18:59.370 "digest": "sha384", 00:18:59.370 "dhgroup": "ffdhe8192" 00:18:59.370 } 00:18:59.370 } 00:18:59.370 ]' 00:18:59.370 16:00:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:59.370 16:00:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:59.370 16:00:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:59.370 16:00:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:59.370 16:00:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:59.370 16:00:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:59.370 16:00:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:59.370 16:00:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:59.626 16:00:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:ZTI4N2Q4OTVhN2ExNzA5MzBkYjdhM2ZiYzM2ZTE2YWGKVPxo: --dhchap-ctrl-secret DHHC-1:02:MDE2YWU0NGFhZDllNGFlNGU5MzE1NDE1NzY4ZjBmNGZjZWMzYjc1YTE0MjhiMWYwT6tFBw==: 00:19:00.190 16:00:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:00.190 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:00.190 16:00:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:19:00.190 16:00:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:19:00.190 16:00:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:00.190 16:00:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:00.190 16:00:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:00.190 16:00:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:00.190 16:00:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:00.190 16:00:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 2 00:19:00.190 16:00:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:00.190 16:00:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:00.190 16:00:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:00.190 16:00:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:00.190 16:00:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:00.190 16:00:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:00.190 16:00:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:00.190 16:00:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:00.190 16:00:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:00.190 16:00:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:00.190 16:00:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:00.754 00:19:00.754 16:00:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:00.754 16:00:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:00.754 16:00:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:01.011 16:00:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:01.011 16:00:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:01.011 16:00:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:01.011 16:00:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:01.011 16:00:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:01.011 16:00:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:01.011 { 
00:19:01.011 "cntlid": 93, 00:19:01.011 "qid": 0, 00:19:01.011 "state": "enabled", 00:19:01.011 "thread": "nvmf_tgt_poll_group_000", 00:19:01.011 "listen_address": { 00:19:01.011 "trtype": "TCP", 00:19:01.011 "adrfam": "IPv4", 00:19:01.011 "traddr": "10.0.0.2", 00:19:01.011 "trsvcid": "4420" 00:19:01.011 }, 00:19:01.011 "peer_address": { 00:19:01.011 "trtype": "TCP", 00:19:01.011 "adrfam": "IPv4", 00:19:01.011 "traddr": "10.0.0.1", 00:19:01.011 "trsvcid": "34868" 00:19:01.011 }, 00:19:01.011 "auth": { 00:19:01.011 "state": "completed", 00:19:01.011 "digest": "sha384", 00:19:01.011 "dhgroup": "ffdhe8192" 00:19:01.011 } 00:19:01.011 } 00:19:01.011 ]' 00:19:01.011 16:00:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:01.011 16:00:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:01.011 16:00:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:01.011 16:00:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:01.011 16:00:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:01.011 16:00:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:01.011 16:00:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:01.011 16:00:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:01.267 16:00:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:OTc5NDAxZjllMjViYjlkNjgyYWUxZGU3ZDRhZWY3ZDRiZTI2YjRjYWNlOWQwMDAySmlsEw==: --dhchap-ctrl-secret DHHC-1:01:MTBhMTg4MjQ2MzkxMzM5NDYzZTQxMzFmMGNmNzIyYTjcjymE: 00:19:01.829 16:00:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:01.829 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:01.829 16:00:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:19:01.829 16:00:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:01.829 16:00:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:01.829 16:00:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:01.829 16:00:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:01.830 16:00:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:01.830 16:00:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:02.086 16:00:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 3 00:19:02.086 16:00:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:02.086 16:00:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:02.086 16:00:30 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:02.086 16:00:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:02.086 16:00:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:02.086 16:00:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:19:02.086 16:00:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:02.086 16:00:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:02.086 16:00:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:02.086 16:00:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:02.086 16:00:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:02.649 00:19:02.649 16:00:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:02.649 16:00:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:02.649 16:00:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:02.649 16:00:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:02.649 16:00:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:02.649 16:00:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:02.649 16:00:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:02.649 16:00:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:02.649 16:00:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:02.649 { 00:19:02.649 "cntlid": 95, 00:19:02.649 "qid": 0, 00:19:02.649 "state": "enabled", 00:19:02.649 "thread": "nvmf_tgt_poll_group_000", 00:19:02.649 "listen_address": { 00:19:02.649 "trtype": "TCP", 00:19:02.649 "adrfam": "IPv4", 00:19:02.649 "traddr": "10.0.0.2", 00:19:02.649 "trsvcid": "4420" 00:19:02.649 }, 00:19:02.649 "peer_address": { 00:19:02.649 "trtype": "TCP", 00:19:02.649 "adrfam": "IPv4", 00:19:02.649 "traddr": "10.0.0.1", 00:19:02.649 "trsvcid": "34910" 00:19:02.649 }, 00:19:02.649 "auth": { 00:19:02.649 "state": "completed", 00:19:02.649 "digest": "sha384", 00:19:02.649 "dhgroup": "ffdhe8192" 00:19:02.649 } 00:19:02.649 } 00:19:02.649 ]' 00:19:02.649 16:00:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:02.906 16:00:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:02.906 16:00:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:02.906 16:00:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:02.906 16:00:31 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:02.906 16:00:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:02.906 16:00:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:02.906 16:00:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:03.162 16:00:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:ZTQ5Zjk1NmU4OTU5ZjU2ZThhZTRiNTkxNzhkYTFkZDliOTA4ODk4ZjQ4ZmQzNGZlOTI2NDQ1MDgxNGQ3MWNlZA1oZjU=: 00:19:03.725 16:00:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:03.725 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:03.725 16:00:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:19:03.725 16:00:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:03.725 16:00:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:03.725 16:00:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:03.725 16:00:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:19:03.725 16:00:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:03.725 16:00:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:03.725 16:00:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:19:03.725 16:00:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:19:03.725 16:00:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 0 00:19:03.725 16:00:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:03.725 16:00:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:03.725 16:00:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:19:03.725 16:00:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:03.725 16:00:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:03.725 16:00:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:03.725 16:00:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:03.725 16:00:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:03.725 16:00:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:03.725 16:00:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:03.725 16:00:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:03.982 00:19:03.982 16:00:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:03.982 16:00:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:03.982 16:00:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:04.239 16:00:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:04.239 16:00:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:04.239 16:00:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:04.239 16:00:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:04.239 16:00:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:04.239 16:00:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:04.239 { 00:19:04.239 "cntlid": 97, 00:19:04.239 "qid": 0, 00:19:04.239 "state": "enabled", 00:19:04.239 "thread": "nvmf_tgt_poll_group_000", 00:19:04.239 "listen_address": { 00:19:04.239 "trtype": "TCP", 00:19:04.239 "adrfam": "IPv4", 00:19:04.239 "traddr": "10.0.0.2", 00:19:04.239 "trsvcid": "4420" 00:19:04.239 }, 00:19:04.239 "peer_address": { 00:19:04.239 "trtype": "TCP", 00:19:04.239 "adrfam": "IPv4", 00:19:04.239 "traddr": "10.0.0.1", 00:19:04.239 "trsvcid": "34948" 00:19:04.239 }, 00:19:04.239 "auth": { 00:19:04.239 "state": "completed", 00:19:04.239 "digest": "sha512", 00:19:04.239 "dhgroup": "null" 00:19:04.239 } 00:19:04.239 } 00:19:04.239 ]' 00:19:04.239 16:00:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:04.239 16:00:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:04.239 16:00:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:04.239 16:00:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:19:04.239 16:00:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:04.239 16:00:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:04.239 16:00:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:04.239 16:00:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:04.494 16:00:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:MzFhZDVkZDAyNTMwZDY2OTFmZDY0NDM0YjIwMzlhYzc3NzA5Mzk3Nzk1MWFmYWFlNe7tPQ==: --dhchap-ctrl-secret 
DHHC-1:03:MDM5M2Q5Njg5MGYzNzdjMzFjNDIzZGNlNzBlNzM5ZjA1NWNmNGI2OTlhNzZmYjhlYzAzYjU1ZjM0ODQzMmViYtXVTjs=: 00:19:05.057 16:00:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:05.057 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:05.057 16:00:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:19:05.057 16:00:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:05.057 16:00:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:05.057 16:00:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:05.057 16:00:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:05.057 16:00:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:19:05.057 16:00:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:19:05.314 16:00:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 1 00:19:05.314 16:00:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:05.314 16:00:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:05.314 16:00:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:19:05.314 16:00:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:05.314 16:00:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:05.314 16:00:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:05.314 16:00:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:05.314 16:00:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:05.314 16:00:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:05.314 16:00:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:05.314 16:00:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:05.572 00:19:05.573 16:00:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:05.573 16:00:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:05.573 16:00:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:05.573 16:00:34 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:05.842 16:00:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:05.842 16:00:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:05.842 16:00:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:05.842 16:00:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:05.842 16:00:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:05.842 { 00:19:05.842 "cntlid": 99, 00:19:05.842 "qid": 0, 00:19:05.842 "state": "enabled", 00:19:05.842 "thread": "nvmf_tgt_poll_group_000", 00:19:05.842 "listen_address": { 00:19:05.842 "trtype": "TCP", 00:19:05.842 "adrfam": "IPv4", 00:19:05.842 "traddr": "10.0.0.2", 00:19:05.842 "trsvcid": "4420" 00:19:05.842 }, 00:19:05.842 "peer_address": { 00:19:05.842 "trtype": "TCP", 00:19:05.842 "adrfam": "IPv4", 00:19:05.842 "traddr": "10.0.0.1", 00:19:05.842 "trsvcid": "34976" 00:19:05.842 }, 00:19:05.842 "auth": { 00:19:05.842 "state": "completed", 00:19:05.842 "digest": "sha512", 00:19:05.842 "dhgroup": "null" 00:19:05.842 } 00:19:05.843 } 00:19:05.843 ]' 00:19:05.843 16:00:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:05.843 16:00:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:05.843 16:00:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:05.843 16:00:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:19:05.843 16:00:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:05.843 16:00:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:05.843 16:00:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:05.843 16:00:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:06.102 16:00:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:ZTI4N2Q4OTVhN2ExNzA5MzBkYjdhM2ZiYzM2ZTE2YWGKVPxo: --dhchap-ctrl-secret DHHC-1:02:MDE2YWU0NGFhZDllNGFlNGU5MzE1NDE1NzY4ZjBmNGZjZWMzYjc1YTE0MjhiMWYwT6tFBw==: 00:19:06.664 16:00:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:06.664 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:06.664 16:00:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:19:06.664 16:00:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:06.664 16:00:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:06.664 16:00:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:06.664 16:00:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:06.664 16:00:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:19:06.664 16:00:35 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:19:06.664 16:00:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 2 00:19:06.664 16:00:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:06.664 16:00:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:06.664 16:00:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:19:06.664 16:00:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:06.664 16:00:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:06.664 16:00:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:06.664 16:00:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:06.664 16:00:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:06.664 16:00:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:06.664 16:00:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:06.664 16:00:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:06.920 00:19:06.920 16:00:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:06.920 16:00:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:06.920 16:00:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:07.229 16:00:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:07.229 16:00:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:07.229 16:00:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:07.229 16:00:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:07.229 16:00:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:07.229 16:00:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:07.229 { 00:19:07.229 "cntlid": 101, 00:19:07.229 "qid": 0, 00:19:07.229 "state": "enabled", 00:19:07.229 "thread": "nvmf_tgt_poll_group_000", 00:19:07.229 "listen_address": { 00:19:07.229 "trtype": "TCP", 00:19:07.229 "adrfam": "IPv4", 00:19:07.229 "traddr": "10.0.0.2", 00:19:07.229 "trsvcid": "4420" 00:19:07.229 }, 00:19:07.229 "peer_address": { 00:19:07.229 "trtype": "TCP", 00:19:07.229 "adrfam": "IPv4", 00:19:07.229 "traddr": "10.0.0.1", 00:19:07.229 "trsvcid": "34994" 00:19:07.229 }, 00:19:07.229 "auth": 
{ 00:19:07.229 "state": "completed", 00:19:07.229 "digest": "sha512", 00:19:07.229 "dhgroup": "null" 00:19:07.229 } 00:19:07.229 } 00:19:07.229 ]' 00:19:07.229 16:00:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:07.229 16:00:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:07.229 16:00:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:07.229 16:00:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:19:07.229 16:00:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:07.229 16:00:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:07.229 16:00:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:07.229 16:00:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:07.488 16:00:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:OTc5NDAxZjllMjViYjlkNjgyYWUxZGU3ZDRhZWY3ZDRiZTI2YjRjYWNlOWQwMDAySmlsEw==: --dhchap-ctrl-secret DHHC-1:01:MTBhMTg4MjQ2MzkxMzM5NDYzZTQxMzFmMGNmNzIyYTjcjymE: 00:19:08.052 16:00:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:08.052 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:08.052 16:00:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:19:08.052 16:00:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:08.052 16:00:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:08.052 16:00:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:08.052 16:00:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:08.052 16:00:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:19:08.052 16:00:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:19:08.309 16:00:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 3 00:19:08.309 16:00:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:08.309 16:00:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:08.309 16:00:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:19:08.309 16:00:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:08.309 16:00:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:08.309 16:00:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:19:08.309 16:00:37 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:19:08.309 16:00:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:08.309 16:00:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:08.309 16:00:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:08.309 16:00:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:08.565 00:19:08.565 16:00:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:08.565 16:00:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:08.565 16:00:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:08.565 16:00:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:08.565 16:00:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:08.565 16:00:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:08.565 16:00:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:08.565 16:00:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:08.565 16:00:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:08.565 { 00:19:08.565 "cntlid": 103, 00:19:08.565 "qid": 0, 00:19:08.565 "state": "enabled", 00:19:08.565 "thread": "nvmf_tgt_poll_group_000", 00:19:08.565 "listen_address": { 00:19:08.565 "trtype": "TCP", 00:19:08.565 "adrfam": "IPv4", 00:19:08.565 "traddr": "10.0.0.2", 00:19:08.565 "trsvcid": "4420" 00:19:08.565 }, 00:19:08.565 "peer_address": { 00:19:08.565 "trtype": "TCP", 00:19:08.565 "adrfam": "IPv4", 00:19:08.565 "traddr": "10.0.0.1", 00:19:08.565 "trsvcid": "49690" 00:19:08.565 }, 00:19:08.565 "auth": { 00:19:08.565 "state": "completed", 00:19:08.565 "digest": "sha512", 00:19:08.565 "dhgroup": "null" 00:19:08.565 } 00:19:08.565 } 00:19:08.565 ]' 00:19:08.565 16:00:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:08.834 16:00:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:08.834 16:00:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:08.834 16:00:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:19:08.834 16:00:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:08.834 16:00:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:08.834 16:00:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:08.834 16:00:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:09.104 16:00:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect 
-t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:ZTQ5Zjk1NmU4OTU5ZjU2ZThhZTRiNTkxNzhkYTFkZDliOTA4ODk4ZjQ4ZmQzNGZlOTI2NDQ1MDgxNGQ3MWNlZA1oZjU=: 00:19:09.667 16:00:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:09.667 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:09.667 16:00:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:19:09.667 16:00:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:09.667 16:00:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:09.667 16:00:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:09.667 16:00:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:09.667 16:00:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:09.667 16:00:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:09.667 16:00:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:09.667 16:00:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 0 00:19:09.667 16:00:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:09.667 16:00:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:09.667 16:00:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:19:09.667 16:00:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:09.667 16:00:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:09.667 16:00:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:09.667 16:00:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:09.667 16:00:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:09.667 16:00:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:09.667 16:00:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:09.667 16:00:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:09.923 00:19:09.923 16:00:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:09.923 16:00:38 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:09.923 16:00:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:10.179 16:00:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:10.179 16:00:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:10.179 16:00:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:10.179 16:00:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:10.179 16:00:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:10.179 16:00:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:10.179 { 00:19:10.179 "cntlid": 105, 00:19:10.179 "qid": 0, 00:19:10.179 "state": "enabled", 00:19:10.179 "thread": "nvmf_tgt_poll_group_000", 00:19:10.179 "listen_address": { 00:19:10.179 "trtype": "TCP", 00:19:10.180 "adrfam": "IPv4", 00:19:10.180 "traddr": "10.0.0.2", 00:19:10.180 "trsvcid": "4420" 00:19:10.180 }, 00:19:10.180 "peer_address": { 00:19:10.180 "trtype": "TCP", 00:19:10.180 "adrfam": "IPv4", 00:19:10.180 "traddr": "10.0.0.1", 00:19:10.180 "trsvcid": "49726" 00:19:10.180 }, 00:19:10.180 "auth": { 00:19:10.180 "state": "completed", 00:19:10.180 "digest": "sha512", 00:19:10.180 "dhgroup": "ffdhe2048" 00:19:10.180 } 00:19:10.180 } 00:19:10.180 ]' 00:19:10.180 16:00:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:10.180 16:00:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:10.180 16:00:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:10.180 16:00:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:10.180 16:00:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:10.180 16:00:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:10.180 16:00:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:10.180 16:00:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:10.436 16:00:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:MzFhZDVkZDAyNTMwZDY2OTFmZDY0NDM0YjIwMzlhYzc3NzA5Mzk3Nzk1MWFmYWFlNe7tPQ==: --dhchap-ctrl-secret DHHC-1:03:MDM5M2Q5Njg5MGYzNzdjMzFjNDIzZGNlNzBlNzM5ZjA1NWNmNGI2OTlhNzZmYjhlYzAzYjU1ZjM0ODQzMmViYtXVTjs=: 00:19:10.999 16:00:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:10.999 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:10.999 16:00:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:19:10.999 16:00:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:10.999 16:00:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
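The --dhchap-secret / --dhchap-ctrl-secret strings passed to nvme connect above follow the NVMe TP 8006 key representation DHHC-1:<xx>:<base64>:, where, as I understand the format, the two-digit field records the transform applied to the raw secret (00 = unhashed, 01 = SHA-256, 02 = SHA-384, 03 = SHA-512) and the base64 payload carries the key material with a CRC-32 check value appended. A hedged sketch of how such a secret can be produced with nvme-cli's gen-dhchap-key (present in recent nvme-cli versions; flag spellings are worth verifying against the installed build, and this is not the command that produced the secrets recorded in this log):

# Hypothetical example. -m selects the transform recorded in the DHHC-1:<xx>: field
# (0 = none, 1 = SHA-256, 2 = SHA-384, 3 = SHA-512); -l is the key length in bytes;
# -n is the host NQN the transformed key is bound to.
nvme gen-dhchap-key -m 3 -l 48 \
    -n nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562
# Expected output shape: DHHC-1:03:<base64(key || crc32)>: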
00:19:10.999 16:00:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:10.999 16:00:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:10.999 16:00:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:10.999 16:00:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:11.257 16:00:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 1 00:19:11.257 16:00:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:11.257 16:00:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:11.257 16:00:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:19:11.257 16:00:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:11.257 16:00:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:11.257 16:00:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:11.257 16:00:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:11.257 16:00:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:11.257 16:00:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:11.257 16:00:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:11.257 16:00:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:11.514 00:19:11.514 16:00:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:11.514 16:00:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:11.514 16:00:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:11.514 16:00:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:11.770 16:00:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:11.770 16:00:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:11.770 16:00:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:11.770 16:00:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:11.770 16:00:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:11.770 { 00:19:11.770 "cntlid": 107, 00:19:11.770 "qid": 0, 00:19:11.770 "state": "enabled", 00:19:11.770 "thread": 
"nvmf_tgt_poll_group_000", 00:19:11.770 "listen_address": { 00:19:11.770 "trtype": "TCP", 00:19:11.770 "adrfam": "IPv4", 00:19:11.770 "traddr": "10.0.0.2", 00:19:11.770 "trsvcid": "4420" 00:19:11.770 }, 00:19:11.770 "peer_address": { 00:19:11.770 "trtype": "TCP", 00:19:11.770 "adrfam": "IPv4", 00:19:11.770 "traddr": "10.0.0.1", 00:19:11.770 "trsvcid": "49742" 00:19:11.770 }, 00:19:11.770 "auth": { 00:19:11.770 "state": "completed", 00:19:11.770 "digest": "sha512", 00:19:11.770 "dhgroup": "ffdhe2048" 00:19:11.770 } 00:19:11.770 } 00:19:11.770 ]' 00:19:11.770 16:00:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:11.770 16:00:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:11.770 16:00:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:11.770 16:00:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:11.770 16:00:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:11.770 16:00:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:11.770 16:00:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:11.770 16:00:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:12.026 16:00:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:ZTI4N2Q4OTVhN2ExNzA5MzBkYjdhM2ZiYzM2ZTE2YWGKVPxo: --dhchap-ctrl-secret DHHC-1:02:MDE2YWU0NGFhZDllNGFlNGU5MzE1NDE1NzY4ZjBmNGZjZWMzYjc1YTE0MjhiMWYwT6tFBw==: 00:19:12.588 16:00:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:12.588 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:12.588 16:00:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:19:12.588 16:00:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:12.588 16:00:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:12.588 16:00:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:12.588 16:00:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:12.588 16:00:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:12.588 16:00:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:12.588 16:00:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 2 00:19:12.588 16:00:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:12.588 16:00:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:12.588 16:00:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:19:12.588 16:00:41 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:12.588 16:00:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:12.588 16:00:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:12.588 16:00:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:12.588 16:00:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:12.845 16:00:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:12.845 16:00:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:12.845 16:00:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:12.845 00:19:12.845 16:00:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:12.845 16:00:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:12.845 16:00:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:13.100 16:00:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:13.100 16:00:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:13.100 16:00:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:13.100 16:00:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:13.100 16:00:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:13.100 16:00:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:13.100 { 00:19:13.100 "cntlid": 109, 00:19:13.100 "qid": 0, 00:19:13.100 "state": "enabled", 00:19:13.100 "thread": "nvmf_tgt_poll_group_000", 00:19:13.100 "listen_address": { 00:19:13.100 "trtype": "TCP", 00:19:13.100 "adrfam": "IPv4", 00:19:13.100 "traddr": "10.0.0.2", 00:19:13.100 "trsvcid": "4420" 00:19:13.100 }, 00:19:13.100 "peer_address": { 00:19:13.100 "trtype": "TCP", 00:19:13.100 "adrfam": "IPv4", 00:19:13.100 "traddr": "10.0.0.1", 00:19:13.100 "trsvcid": "49784" 00:19:13.100 }, 00:19:13.100 "auth": { 00:19:13.100 "state": "completed", 00:19:13.100 "digest": "sha512", 00:19:13.100 "dhgroup": "ffdhe2048" 00:19:13.100 } 00:19:13.100 } 00:19:13.100 ]' 00:19:13.100 16:00:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:13.100 16:00:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:13.100 16:00:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:13.100 16:00:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:13.100 16:00:42 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:13.356 16:00:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:13.356 16:00:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:13.356 16:00:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:13.356 16:00:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:OTc5NDAxZjllMjViYjlkNjgyYWUxZGU3ZDRhZWY3ZDRiZTI2YjRjYWNlOWQwMDAySmlsEw==: --dhchap-ctrl-secret DHHC-1:01:MTBhMTg4MjQ2MzkxMzM5NDYzZTQxMzFmMGNmNzIyYTjcjymE: 00:19:13.917 16:00:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:13.917 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:13.917 16:00:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:19:13.917 16:00:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:13.917 16:00:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:13.917 16:00:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:13.917 16:00:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:13.917 16:00:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:13.917 16:00:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:14.172 16:00:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 3 00:19:14.172 16:00:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:14.172 16:00:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:14.172 16:00:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:19:14.172 16:00:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:14.172 16:00:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:14.172 16:00:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:19:14.172 16:00:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:14.172 16:00:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:14.172 16:00:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:14.172 16:00:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:14.173 16:00:43 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:14.439 00:19:14.439 16:00:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:14.439 16:00:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:14.439 16:00:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:14.696 16:00:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:14.696 16:00:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:14.696 16:00:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:14.696 16:00:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:14.696 16:00:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:14.696 16:00:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:14.696 { 00:19:14.696 "cntlid": 111, 00:19:14.696 "qid": 0, 00:19:14.696 "state": "enabled", 00:19:14.696 "thread": "nvmf_tgt_poll_group_000", 00:19:14.696 "listen_address": { 00:19:14.696 "trtype": "TCP", 00:19:14.696 "adrfam": "IPv4", 00:19:14.696 "traddr": "10.0.0.2", 00:19:14.696 "trsvcid": "4420" 00:19:14.696 }, 00:19:14.696 "peer_address": { 00:19:14.696 "trtype": "TCP", 00:19:14.696 "adrfam": "IPv4", 00:19:14.696 "traddr": "10.0.0.1", 00:19:14.696 "trsvcid": "49794" 00:19:14.696 }, 00:19:14.696 "auth": { 00:19:14.696 "state": "completed", 00:19:14.696 "digest": "sha512", 00:19:14.696 "dhgroup": "ffdhe2048" 00:19:14.696 } 00:19:14.696 } 00:19:14.696 ]' 00:19:14.696 16:00:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:14.696 16:00:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:14.696 16:00:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:14.696 16:00:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:14.696 16:00:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:14.696 16:00:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:14.696 16:00:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:14.696 16:00:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:14.952 16:00:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:ZTQ5Zjk1NmU4OTU5ZjU2ZThhZTRiNTkxNzhkYTFkZDliOTA4ODk4ZjQ4ZmQzNGZlOTI2NDQ1MDgxNGQ3MWNlZA1oZjU=: 00:19:15.514 16:00:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:15.514 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:15.514 16:00:44 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:19:15.514 16:00:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:15.514 16:00:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:15.514 16:00:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:15.514 16:00:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:15.514 16:00:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:15.514 16:00:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:15.514 16:00:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:15.771 16:00:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 0 00:19:15.771 16:00:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:15.771 16:00:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:15.771 16:00:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:19:15.771 16:00:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:15.771 16:00:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:15.771 16:00:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:15.771 16:00:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:15.771 16:00:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:15.771 16:00:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:15.771 16:00:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:15.771 16:00:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:16.044 00:19:16.044 16:00:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:16.044 16:00:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:16.044 16:00:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:16.044 16:00:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:16.044 16:00:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:16.044 16:00:44 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:19:16.044 16:00:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:16.044 16:00:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:16.044 16:00:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:16.044 { 00:19:16.044 "cntlid": 113, 00:19:16.044 "qid": 0, 00:19:16.044 "state": "enabled", 00:19:16.044 "thread": "nvmf_tgt_poll_group_000", 00:19:16.044 "listen_address": { 00:19:16.044 "trtype": "TCP", 00:19:16.044 "adrfam": "IPv4", 00:19:16.044 "traddr": "10.0.0.2", 00:19:16.044 "trsvcid": "4420" 00:19:16.044 }, 00:19:16.044 "peer_address": { 00:19:16.044 "trtype": "TCP", 00:19:16.044 "adrfam": "IPv4", 00:19:16.044 "traddr": "10.0.0.1", 00:19:16.044 "trsvcid": "49824" 00:19:16.044 }, 00:19:16.044 "auth": { 00:19:16.044 "state": "completed", 00:19:16.044 "digest": "sha512", 00:19:16.044 "dhgroup": "ffdhe3072" 00:19:16.044 } 00:19:16.044 } 00:19:16.044 ]' 00:19:16.044 16:00:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:16.315 16:00:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:16.315 16:00:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:16.315 16:00:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:16.315 16:00:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:16.315 16:00:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:16.315 16:00:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:16.315 16:00:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:16.570 16:00:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:MzFhZDVkZDAyNTMwZDY2OTFmZDY0NDM0YjIwMzlhYzc3NzA5Mzk3Nzk1MWFmYWFlNe7tPQ==: --dhchap-ctrl-secret DHHC-1:03:MDM5M2Q5Njg5MGYzNzdjMzFjNDIzZGNlNzBlNzM5ZjA1NWNmNGI2OTlhNzZmYjhlYzAzYjU1ZjM0ODQzMmViYtXVTjs=: 00:19:17.132 16:00:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:17.132 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:17.132 16:00:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:19:17.132 16:00:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:17.132 16:00:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:17.132 16:00:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:17.132 16:00:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:17.132 16:00:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:17.132 16:00:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:17.132 16:00:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 1 00:19:17.132 16:00:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:17.132 16:00:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:17.132 16:00:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:19:17.132 16:00:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:17.132 16:00:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:17.132 16:00:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:17.132 16:00:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:17.132 16:00:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:17.132 16:00:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:17.132 16:00:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:17.132 16:00:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:17.389 00:19:17.389 16:00:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:17.389 16:00:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:17.389 16:00:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:17.644 16:00:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:17.644 16:00:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:17.644 16:00:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:17.644 16:00:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:17.644 16:00:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:17.644 16:00:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:17.644 { 00:19:17.644 "cntlid": 115, 00:19:17.644 "qid": 0, 00:19:17.644 "state": "enabled", 00:19:17.644 "thread": "nvmf_tgt_poll_group_000", 00:19:17.644 "listen_address": { 00:19:17.644 "trtype": "TCP", 00:19:17.644 "adrfam": "IPv4", 00:19:17.644 "traddr": "10.0.0.2", 00:19:17.644 "trsvcid": "4420" 00:19:17.644 }, 00:19:17.644 "peer_address": { 00:19:17.644 "trtype": "TCP", 00:19:17.644 "adrfam": "IPv4", 00:19:17.644 "traddr": "10.0.0.1", 00:19:17.644 "trsvcid": "49844" 00:19:17.644 }, 00:19:17.644 "auth": { 00:19:17.644 "state": "completed", 00:19:17.644 "digest": "sha512", 00:19:17.644 "dhgroup": "ffdhe3072" 00:19:17.644 } 00:19:17.644 } 
00:19:17.644 ]' 00:19:17.644 16:00:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:17.644 16:00:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:17.644 16:00:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:17.644 16:00:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:17.644 16:00:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:17.900 16:00:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:17.900 16:00:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:17.900 16:00:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:17.900 16:00:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:ZTI4N2Q4OTVhN2ExNzA5MzBkYjdhM2ZiYzM2ZTE2YWGKVPxo: --dhchap-ctrl-secret DHHC-1:02:MDE2YWU0NGFhZDllNGFlNGU5MzE1NDE1NzY4ZjBmNGZjZWMzYjc1YTE0MjhiMWYwT6tFBw==: 00:19:18.461 16:00:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:18.461 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:18.461 16:00:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:19:18.461 16:00:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:18.461 16:00:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:18.461 16:00:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:18.461 16:00:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:18.461 16:00:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:18.461 16:00:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:18.718 16:00:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 2 00:19:18.718 16:00:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:18.718 16:00:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:18.718 16:00:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:19:18.718 16:00:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:18.718 16:00:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:18.718 16:00:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:18.718 16:00:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:18.718 16:00:47 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:18.718 16:00:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:18.718 16:00:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:18.718 16:00:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:18.975 00:19:18.975 16:00:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:18.975 16:00:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:18.975 16:00:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:19.232 16:00:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:19.232 16:00:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:19.232 16:00:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:19.232 16:00:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:19.232 16:00:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:19.232 16:00:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:19.232 { 00:19:19.232 "cntlid": 117, 00:19:19.232 "qid": 0, 00:19:19.232 "state": "enabled", 00:19:19.232 "thread": "nvmf_tgt_poll_group_000", 00:19:19.232 "listen_address": { 00:19:19.232 "trtype": "TCP", 00:19:19.232 "adrfam": "IPv4", 00:19:19.232 "traddr": "10.0.0.2", 00:19:19.232 "trsvcid": "4420" 00:19:19.232 }, 00:19:19.232 "peer_address": { 00:19:19.232 "trtype": "TCP", 00:19:19.232 "adrfam": "IPv4", 00:19:19.232 "traddr": "10.0.0.1", 00:19:19.232 "trsvcid": "52412" 00:19:19.232 }, 00:19:19.232 "auth": { 00:19:19.232 "state": "completed", 00:19:19.232 "digest": "sha512", 00:19:19.232 "dhgroup": "ffdhe3072" 00:19:19.232 } 00:19:19.232 } 00:19:19.232 ]' 00:19:19.232 16:00:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:19.232 16:00:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:19.232 16:00:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:19.232 16:00:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:19.232 16:00:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:19.232 16:00:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:19.232 16:00:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:19.232 16:00:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:19.489 16:00:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t 
tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:OTc5NDAxZjllMjViYjlkNjgyYWUxZGU3ZDRhZWY3ZDRiZTI2YjRjYWNlOWQwMDAySmlsEw==: --dhchap-ctrl-secret DHHC-1:01:MTBhMTg4MjQ2MzkxMzM5NDYzZTQxMzFmMGNmNzIyYTjcjymE: 00:19:20.051 16:00:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:20.051 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:20.051 16:00:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:19:20.051 16:00:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:20.051 16:00:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:20.051 16:00:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:20.051 16:00:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:20.051 16:00:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:20.051 16:00:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:20.306 16:00:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 3 00:19:20.306 16:00:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:20.306 16:00:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:20.306 16:00:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:19:20.306 16:00:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:20.306 16:00:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:20.306 16:00:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:19:20.306 16:00:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:20.306 16:00:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:20.306 16:00:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:20.306 16:00:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:20.306 16:00:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:20.306 00:19:20.562 16:00:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:20.562 16:00:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:20.562 16:00:49 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:20.562 16:00:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:20.562 16:00:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:20.562 16:00:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:20.562 16:00:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:20.562 16:00:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:20.562 16:00:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:20.562 { 00:19:20.562 "cntlid": 119, 00:19:20.562 "qid": 0, 00:19:20.562 "state": "enabled", 00:19:20.562 "thread": "nvmf_tgt_poll_group_000", 00:19:20.562 "listen_address": { 00:19:20.562 "trtype": "TCP", 00:19:20.562 "adrfam": "IPv4", 00:19:20.562 "traddr": "10.0.0.2", 00:19:20.562 "trsvcid": "4420" 00:19:20.562 }, 00:19:20.562 "peer_address": { 00:19:20.562 "trtype": "TCP", 00:19:20.562 "adrfam": "IPv4", 00:19:20.562 "traddr": "10.0.0.1", 00:19:20.562 "trsvcid": "52436" 00:19:20.562 }, 00:19:20.562 "auth": { 00:19:20.562 "state": "completed", 00:19:20.562 "digest": "sha512", 00:19:20.562 "dhgroup": "ffdhe3072" 00:19:20.562 } 00:19:20.562 } 00:19:20.562 ]' 00:19:20.562 16:00:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:20.818 16:00:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:20.818 16:00:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:20.818 16:00:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:20.818 16:00:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:20.818 16:00:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:20.819 16:00:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:20.819 16:00:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:21.075 16:00:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:ZTQ5Zjk1NmU4OTU5ZjU2ZThhZTRiNTkxNzhkYTFkZDliOTA4ODk4ZjQ4ZmQzNGZlOTI2NDQ1MDgxNGQ3MWNlZA1oZjU=: 00:19:21.668 16:00:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:21.668 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:21.668 16:00:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:19:21.668 16:00:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:21.668 16:00:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:21.668 16:00:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:21.668 16:00:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:21.668 16:00:50 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:21.668 16:00:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:21.668 16:00:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:21.668 16:00:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 0 00:19:21.668 16:00:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:21.668 16:00:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:21.668 16:00:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:19:21.668 16:00:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:21.668 16:00:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:21.668 16:00:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:21.668 16:00:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:21.668 16:00:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:21.668 16:00:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:21.668 16:00:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:21.668 16:00:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:21.926 00:19:21.926 16:00:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:21.926 16:00:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:21.926 16:00:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:22.183 16:00:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:22.183 16:00:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:22.183 16:00:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:22.183 16:00:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:22.183 16:00:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:22.183 16:00:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:22.183 { 00:19:22.183 "cntlid": 121, 00:19:22.183 "qid": 0, 00:19:22.183 "state": "enabled", 00:19:22.183 "thread": "nvmf_tgt_poll_group_000", 00:19:22.183 "listen_address": { 00:19:22.183 "trtype": "TCP", 00:19:22.183 "adrfam": "IPv4", 
00:19:22.183 "traddr": "10.0.0.2", 00:19:22.183 "trsvcid": "4420" 00:19:22.183 }, 00:19:22.183 "peer_address": { 00:19:22.183 "trtype": "TCP", 00:19:22.183 "adrfam": "IPv4", 00:19:22.183 "traddr": "10.0.0.1", 00:19:22.183 "trsvcid": "52462" 00:19:22.183 }, 00:19:22.183 "auth": { 00:19:22.183 "state": "completed", 00:19:22.183 "digest": "sha512", 00:19:22.183 "dhgroup": "ffdhe4096" 00:19:22.183 } 00:19:22.183 } 00:19:22.183 ]' 00:19:22.183 16:00:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:22.183 16:00:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:22.183 16:00:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:22.183 16:00:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:22.183 16:00:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:22.183 16:00:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:22.183 16:00:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:22.183 16:00:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:22.441 16:00:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:MzFhZDVkZDAyNTMwZDY2OTFmZDY0NDM0YjIwMzlhYzc3NzA5Mzk3Nzk1MWFmYWFlNe7tPQ==: --dhchap-ctrl-secret DHHC-1:03:MDM5M2Q5Njg5MGYzNzdjMzFjNDIzZGNlNzBlNzM5ZjA1NWNmNGI2OTlhNzZmYjhlYzAzYjU1ZjM0ODQzMmViYtXVTjs=: 00:19:23.006 16:00:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:23.006 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:23.006 16:00:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:19:23.006 16:00:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:23.006 16:00:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:23.006 16:00:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:23.006 16:00:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:23.006 16:00:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:23.006 16:00:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:23.263 16:00:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 1 00:19:23.263 16:00:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:23.263 16:00:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:23.263 16:00:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:19:23.263 16:00:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:23.263 16:00:52 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:23.263 16:00:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:23.263 16:00:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:23.263 16:00:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:23.263 16:00:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:23.263 16:00:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:23.263 16:00:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:23.521 00:19:23.521 16:00:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:23.521 16:00:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:23.521 16:00:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:23.778 16:00:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:23.778 16:00:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:23.778 16:00:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:23.778 16:00:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:23.779 16:00:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:23.779 16:00:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:23.779 { 00:19:23.779 "cntlid": 123, 00:19:23.779 "qid": 0, 00:19:23.779 "state": "enabled", 00:19:23.779 "thread": "nvmf_tgt_poll_group_000", 00:19:23.779 "listen_address": { 00:19:23.779 "trtype": "TCP", 00:19:23.779 "adrfam": "IPv4", 00:19:23.779 "traddr": "10.0.0.2", 00:19:23.779 "trsvcid": "4420" 00:19:23.779 }, 00:19:23.779 "peer_address": { 00:19:23.779 "trtype": "TCP", 00:19:23.779 "adrfam": "IPv4", 00:19:23.779 "traddr": "10.0.0.1", 00:19:23.779 "trsvcid": "52482" 00:19:23.779 }, 00:19:23.779 "auth": { 00:19:23.779 "state": "completed", 00:19:23.779 "digest": "sha512", 00:19:23.779 "dhgroup": "ffdhe4096" 00:19:23.779 } 00:19:23.779 } 00:19:23.779 ]' 00:19:23.779 16:00:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:23.779 16:00:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:23.779 16:00:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:23.779 16:00:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:23.779 16:00:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:23.779 16:00:52 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:23.779 16:00:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:23.779 16:00:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:24.040 16:00:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:ZTI4N2Q4OTVhN2ExNzA5MzBkYjdhM2ZiYzM2ZTE2YWGKVPxo: --dhchap-ctrl-secret DHHC-1:02:MDE2YWU0NGFhZDllNGFlNGU5MzE1NDE1NzY4ZjBmNGZjZWMzYjc1YTE0MjhiMWYwT6tFBw==: 00:19:24.604 16:00:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:24.604 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:24.604 16:00:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:19:24.604 16:00:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:24.604 16:00:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:24.604 16:00:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:24.604 16:00:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:24.604 16:00:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:24.604 16:00:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:24.860 16:00:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 2 00:19:24.860 16:00:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:24.860 16:00:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:24.860 16:00:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:19:24.860 16:00:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:24.860 16:00:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:24.860 16:00:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:24.860 16:00:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:24.860 16:00:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:24.860 16:00:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:24.860 16:00:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:24.860 16:00:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:25.117 00:19:25.117 16:00:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:25.117 16:00:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:25.117 16:00:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:25.117 16:00:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:25.117 16:00:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:25.117 16:00:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:25.117 16:00:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:25.117 16:00:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:25.117 16:00:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:25.117 { 00:19:25.117 "cntlid": 125, 00:19:25.117 "qid": 0, 00:19:25.117 "state": "enabled", 00:19:25.117 "thread": "nvmf_tgt_poll_group_000", 00:19:25.117 "listen_address": { 00:19:25.117 "trtype": "TCP", 00:19:25.117 "adrfam": "IPv4", 00:19:25.117 "traddr": "10.0.0.2", 00:19:25.117 "trsvcid": "4420" 00:19:25.117 }, 00:19:25.117 "peer_address": { 00:19:25.117 "trtype": "TCP", 00:19:25.117 "adrfam": "IPv4", 00:19:25.117 "traddr": "10.0.0.1", 00:19:25.117 "trsvcid": "52512" 00:19:25.117 }, 00:19:25.117 "auth": { 00:19:25.117 "state": "completed", 00:19:25.117 "digest": "sha512", 00:19:25.117 "dhgroup": "ffdhe4096" 00:19:25.117 } 00:19:25.117 } 00:19:25.117 ]' 00:19:25.117 16:00:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:25.373 16:00:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:25.373 16:00:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:25.373 16:00:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:25.373 16:00:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:25.373 16:00:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:25.373 16:00:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:25.373 16:00:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:25.631 16:00:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:OTc5NDAxZjllMjViYjlkNjgyYWUxZGU3ZDRhZWY3ZDRiZTI2YjRjYWNlOWQwMDAySmlsEw==: --dhchap-ctrl-secret DHHC-1:01:MTBhMTg4MjQ2MzkxMzM5NDYzZTQxMzFmMGNmNzIyYTjcjymE: 00:19:26.195 16:00:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:26.195 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
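
Each repetition above is one pass of the test's connect_authenticate helper: register the host on the target with a DH-HMAC-CHAP key pair, attach a controller from the SPDK host side using the same pair, verify on the target that the qpair authenticated with the expected digest and dhgroup, then tear everything down. A minimal sketch of one such pass, using only the RPCs visible in the log; rpc, subnqn, and hostnqn are illustrative placeholders, not names from the test itself:

  rpc="scripts/rpc.py"; hostsock=/var/tmp/host.sock
  subnqn=nqn.2024-03.io.spdk:cnode0; hostnqn="<host NQN>"
  # target side: allow this host with key1 (ckey1 makes the auth bidirectional)
  $rpc nvmf_subsystem_add_host "$subnqn" "$hostnqn" --dhchap-key key1 --dhchap-ctrlr-key ckey1
  # host side: attach a controller, authenticating with the matching pair
  $rpc -s $hostsock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
      -a 10.0.0.2 -s 4420 -q "$hostnqn" -n "$subnqn" \
      --dhchap-key key1 --dhchap-ctrlr-key ckey1
  # verify on the target that authentication completed
  [[ $($rpc nvmf_subsystem_get_qpairs "$subnqn" | jq -r '.[0].auth.state') == completed ]]
  # teardown for the next iteration
  $rpc -s $hostsock bdev_nvme_detach_controller nvme0
  $rpc nvmf_subsystem_remove_host "$subnqn" "$hostnqn"
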
00:19:26.195 16:00:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:19:26.195 16:00:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:26.195 16:00:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:26.195 16:00:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:26.195 16:00:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:26.195 16:00:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:26.195 16:00:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:26.195 16:00:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 3 00:19:26.195 16:00:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:26.195 16:00:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:26.195 16:00:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:19:26.195 16:00:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:26.195 16:00:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:26.195 16:00:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:19:26.195 16:00:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:26.195 16:00:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:26.195 16:00:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:26.195 16:00:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:26.195 16:00:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:26.453 00:19:26.453 16:00:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:26.453 16:00:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:26.453 16:00:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:26.711 16:00:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:26.711 16:00:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:26.711 16:00:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:26.711 16:00:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 
-- # set +x 00:19:26.711 16:00:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:26.711 16:00:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:26.711 { 00:19:26.711 "cntlid": 127, 00:19:26.711 "qid": 0, 00:19:26.711 "state": "enabled", 00:19:26.711 "thread": "nvmf_tgt_poll_group_000", 00:19:26.711 "listen_address": { 00:19:26.711 "trtype": "TCP", 00:19:26.711 "adrfam": "IPv4", 00:19:26.711 "traddr": "10.0.0.2", 00:19:26.711 "trsvcid": "4420" 00:19:26.711 }, 00:19:26.711 "peer_address": { 00:19:26.711 "trtype": "TCP", 00:19:26.711 "adrfam": "IPv4", 00:19:26.711 "traddr": "10.0.0.1", 00:19:26.711 "trsvcid": "52550" 00:19:26.711 }, 00:19:26.711 "auth": { 00:19:26.711 "state": "completed", 00:19:26.711 "digest": "sha512", 00:19:26.711 "dhgroup": "ffdhe4096" 00:19:26.711 } 00:19:26.711 } 00:19:26.711 ]' 00:19:26.711 16:00:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:26.711 16:00:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:26.711 16:00:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:26.711 16:00:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:26.967 16:00:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:26.967 16:00:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:26.967 16:00:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:26.967 16:00:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:26.967 16:00:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:ZTQ5Zjk1NmU4OTU5ZjU2ZThhZTRiNTkxNzhkYTFkZDliOTA4ODk4ZjQ4ZmQzNGZlOTI2NDQ1MDgxNGQ3MWNlZA1oZjU=: 00:19:27.531 16:00:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:27.531 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:27.531 16:00:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:19:27.531 16:00:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:27.531 16:00:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:27.531 16:00:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:27.531 16:00:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:27.531 16:00:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:27.531 16:00:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:27.531 16:00:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:27.788 16:00:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # 
connect_authenticate sha512 ffdhe6144 0 00:19:27.788 16:00:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:27.788 16:00:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:27.788 16:00:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:19:27.788 16:00:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:27.788 16:00:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:27.788 16:00:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:27.788 16:00:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:27.788 16:00:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:27.788 16:00:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:27.788 16:00:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:27.788 16:00:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:28.044 00:19:28.044 16:00:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:28.044 16:00:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:28.044 16:00:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:28.301 16:00:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:28.301 16:00:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:28.301 16:00:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:28.301 16:00:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:28.301 16:00:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:28.301 16:00:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:28.301 { 00:19:28.301 "cntlid": 129, 00:19:28.301 "qid": 0, 00:19:28.301 "state": "enabled", 00:19:28.301 "thread": "nvmf_tgt_poll_group_000", 00:19:28.301 "listen_address": { 00:19:28.301 "trtype": "TCP", 00:19:28.301 "adrfam": "IPv4", 00:19:28.301 "traddr": "10.0.0.2", 00:19:28.301 "trsvcid": "4420" 00:19:28.301 }, 00:19:28.301 "peer_address": { 00:19:28.301 "trtype": "TCP", 00:19:28.301 "adrfam": "IPv4", 00:19:28.301 "traddr": "10.0.0.1", 00:19:28.301 "trsvcid": "46692" 00:19:28.301 }, 00:19:28.301 "auth": { 00:19:28.301 "state": "completed", 00:19:28.301 "digest": "sha512", 00:19:28.301 "dhgroup": "ffdhe6144" 00:19:28.301 } 00:19:28.301 } 00:19:28.301 ]' 00:19:28.301 16:00:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:28.301 16:00:57 
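
The qpairs dump above is the data the jq assertions run against: each qpair entry carries the listen/peer addresses plus an "auth" object reporting the negotiated state, digest, and dhgroup. A hedged one-liner that extracts the same tuple the three separate jq checks read (output shown for this ffdhe6144 pass):

  scripts/rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 \
      | jq -r '.[0].auth | "\(.state) \(.digest) \(.dhgroup)"'
  # prints: completed sha512 ffdhe6144
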
nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:28.301 16:00:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:28.301 16:00:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:28.301 16:00:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:28.559 16:00:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:28.559 16:00:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:28.559 16:00:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:28.559 16:00:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:MzFhZDVkZDAyNTMwZDY2OTFmZDY0NDM0YjIwMzlhYzc3NzA5Mzk3Nzk1MWFmYWFlNe7tPQ==: --dhchap-ctrl-secret DHHC-1:03:MDM5M2Q5Njg5MGYzNzdjMzFjNDIzZGNlNzBlNzM5ZjA1NWNmNGI2OTlhNzZmYjhlYzAzYjU1ZjM0ODQzMmViYtXVTjs=: 00:19:29.124 16:00:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:29.124 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:29.124 16:00:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:19:29.124 16:00:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:29.124 16:00:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:29.124 16:00:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:29.124 16:00:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:29.124 16:00:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:29.124 16:00:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:29.380 16:00:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 1 00:19:29.380 16:00:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:29.380 16:00:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:29.380 16:00:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:19:29.380 16:00:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:29.380 16:00:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:29.380 16:00:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:29.380 16:00:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:29.380 16:00:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:29.380 16:00:58 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:29.380 16:00:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:29.380 16:00:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:29.636 00:19:29.636 16:00:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:29.636 16:00:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:29.636 16:00:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:29.892 16:00:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:29.892 16:00:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:29.892 16:00:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:29.892 16:00:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:29.892 16:00:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:29.892 16:00:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:29.892 { 00:19:29.892 "cntlid": 131, 00:19:29.892 "qid": 0, 00:19:29.892 "state": "enabled", 00:19:29.892 "thread": "nvmf_tgt_poll_group_000", 00:19:29.892 "listen_address": { 00:19:29.892 "trtype": "TCP", 00:19:29.892 "adrfam": "IPv4", 00:19:29.892 "traddr": "10.0.0.2", 00:19:29.893 "trsvcid": "4420" 00:19:29.893 }, 00:19:29.893 "peer_address": { 00:19:29.893 "trtype": "TCP", 00:19:29.893 "adrfam": "IPv4", 00:19:29.893 "traddr": "10.0.0.1", 00:19:29.893 "trsvcid": "46726" 00:19:29.893 }, 00:19:29.893 "auth": { 00:19:29.893 "state": "completed", 00:19:29.893 "digest": "sha512", 00:19:29.893 "dhgroup": "ffdhe6144" 00:19:29.893 } 00:19:29.893 } 00:19:29.893 ]' 00:19:29.893 16:00:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:29.893 16:00:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:29.893 16:00:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:29.893 16:00:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:29.893 16:00:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:30.150 16:00:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:30.150 16:00:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:30.150 16:00:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:30.150 16:00:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:ZTI4N2Q4OTVhN2ExNzA5MzBkYjdhM2ZiYzM2ZTE2YWGKVPxo: --dhchap-ctrl-secret DHHC-1:02:MDE2YWU0NGFhZDllNGFlNGU5MzE1NDE1NzY4ZjBmNGZjZWMzYjc1YTE0MjhiMWYwT6tFBw==: 00:19:30.765 16:00:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:30.765 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:30.765 16:00:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:19:30.765 16:00:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:30.765 16:00:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:30.765 16:00:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:30.765 16:00:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:30.765 16:00:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:30.765 16:00:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:31.022 16:00:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 2 00:19:31.022 16:00:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:31.022 16:00:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:31.022 16:00:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:19:31.022 16:00:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:31.022 16:00:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:31.022 16:00:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:31.022 16:00:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:31.022 16:00:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:31.022 16:00:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:31.022 16:00:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:31.022 16:00:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:31.279 00:19:31.279 16:01:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:31.279 16:01:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:31.279 16:01:00 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:31.538 16:01:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:31.538 16:01:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:31.538 16:01:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:31.538 16:01:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:31.538 16:01:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:31.538 16:01:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:31.538 { 00:19:31.538 "cntlid": 133, 00:19:31.538 "qid": 0, 00:19:31.538 "state": "enabled", 00:19:31.538 "thread": "nvmf_tgt_poll_group_000", 00:19:31.538 "listen_address": { 00:19:31.538 "trtype": "TCP", 00:19:31.538 "adrfam": "IPv4", 00:19:31.538 "traddr": "10.0.0.2", 00:19:31.538 "trsvcid": "4420" 00:19:31.538 }, 00:19:31.538 "peer_address": { 00:19:31.538 "trtype": "TCP", 00:19:31.538 "adrfam": "IPv4", 00:19:31.538 "traddr": "10.0.0.1", 00:19:31.538 "trsvcid": "46758" 00:19:31.538 }, 00:19:31.538 "auth": { 00:19:31.538 "state": "completed", 00:19:31.538 "digest": "sha512", 00:19:31.538 "dhgroup": "ffdhe6144" 00:19:31.538 } 00:19:31.538 } 00:19:31.538 ]' 00:19:31.538 16:01:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:31.538 16:01:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:31.538 16:01:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:31.538 16:01:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:31.538 16:01:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:31.538 16:01:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:31.538 16:01:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:31.538 16:01:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:31.796 16:01:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:OTc5NDAxZjllMjViYjlkNjgyYWUxZGU3ZDRhZWY3ZDRiZTI2YjRjYWNlOWQwMDAySmlsEw==: --dhchap-ctrl-secret DHHC-1:01:MTBhMTg4MjQ2MzkxMzM5NDYzZTQxMzFmMGNmNzIyYTjcjymE: 00:19:32.361 16:01:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:32.361 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:32.361 16:01:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:19:32.361 16:01:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:32.361 16:01:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:32.361 16:01:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:32.361 16:01:01 
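
After the SPDK-host attach is verified, each pass repeats the handshake with the kernel initiator via nvme-cli, passing the secrets inline. The two-digit field after the DHHC-1 prefix encodes the secret's hash transform as defined for the NVMe DH-HMAC-CHAP secret format (00 = no transform, 01/02/03 = SHA-256/384/512), which is why the host and controller secrets above carry different prefixes. A sketch of the kernel-side half of a pass, with the secrets and hostid left as placeholders:

  # kernel-initiator side of the same pass
  nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
      -q "$hostnqn" --hostid "$hostid" \
      --dhchap-secret 'DHHC-1:02:<host secret>' \
      --dhchap-ctrl-secret 'DHHC-1:01:<controller secret>'
  nvme disconnect -n nqn.2024-03.io.spdk:cnode0
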
nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:32.361 16:01:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:32.361 16:01:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:32.619 16:01:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 3 00:19:32.619 16:01:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:32.619 16:01:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:32.619 16:01:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:19:32.619 16:01:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:32.619 16:01:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:32.619 16:01:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:19:32.619 16:01:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:32.619 16:01:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:32.619 16:01:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:32.620 16:01:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:32.620 16:01:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:32.878 00:19:32.878 16:01:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:32.878 16:01:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:32.878 16:01:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:33.135 16:01:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:33.135 16:01:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:33.135 16:01:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:33.135 16:01:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:33.135 16:01:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:33.135 16:01:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:33.135 { 00:19:33.135 "cntlid": 135, 00:19:33.135 "qid": 0, 00:19:33.135 "state": "enabled", 00:19:33.135 "thread": "nvmf_tgt_poll_group_000", 00:19:33.135 "listen_address": { 00:19:33.135 "trtype": "TCP", 00:19:33.135 "adrfam": "IPv4", 00:19:33.135 "traddr": "10.0.0.2", 00:19:33.135 "trsvcid": "4420" 00:19:33.135 }, 
00:19:33.135 "peer_address": { 00:19:33.135 "trtype": "TCP", 00:19:33.135 "adrfam": "IPv4", 00:19:33.135 "traddr": "10.0.0.1", 00:19:33.135 "trsvcid": "46788" 00:19:33.136 }, 00:19:33.136 "auth": { 00:19:33.136 "state": "completed", 00:19:33.136 "digest": "sha512", 00:19:33.136 "dhgroup": "ffdhe6144" 00:19:33.136 } 00:19:33.136 } 00:19:33.136 ]' 00:19:33.136 16:01:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:33.136 16:01:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:33.136 16:01:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:33.136 16:01:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:33.136 16:01:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:33.136 16:01:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:33.136 16:01:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:33.136 16:01:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:33.393 16:01:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:ZTQ5Zjk1NmU4OTU5ZjU2ZThhZTRiNTkxNzhkYTFkZDliOTA4ODk4ZjQ4ZmQzNGZlOTI2NDQ1MDgxNGQ3MWNlZA1oZjU=: 00:19:33.958 16:01:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:33.958 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:33.958 16:01:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:19:33.958 16:01:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:33.958 16:01:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:33.958 16:01:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:33.958 16:01:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:33.958 16:01:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:33.958 16:01:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:33.958 16:01:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:34.215 16:01:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 0 00:19:34.216 16:01:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:34.216 16:01:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:34.216 16:01:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:34.216 16:01:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:34.216 16:01:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key 
"ckey$3"}) 00:19:34.216 16:01:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:34.216 16:01:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:34.216 16:01:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:34.216 16:01:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:34.216 16:01:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:34.216 16:01:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:34.781 00:19:34.781 16:01:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:34.781 16:01:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:34.781 16:01:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:34.781 16:01:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:34.781 16:01:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:34.781 16:01:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:34.781 16:01:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:34.781 16:01:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:34.781 16:01:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:34.781 { 00:19:34.781 "cntlid": 137, 00:19:34.781 "qid": 0, 00:19:34.781 "state": "enabled", 00:19:34.781 "thread": "nvmf_tgt_poll_group_000", 00:19:34.781 "listen_address": { 00:19:34.781 "trtype": "TCP", 00:19:34.781 "adrfam": "IPv4", 00:19:34.781 "traddr": "10.0.0.2", 00:19:34.781 "trsvcid": "4420" 00:19:34.781 }, 00:19:34.781 "peer_address": { 00:19:34.781 "trtype": "TCP", 00:19:34.781 "adrfam": "IPv4", 00:19:34.781 "traddr": "10.0.0.1", 00:19:34.781 "trsvcid": "46810" 00:19:34.781 }, 00:19:34.781 "auth": { 00:19:34.781 "state": "completed", 00:19:34.782 "digest": "sha512", 00:19:34.782 "dhgroup": "ffdhe8192" 00:19:34.782 } 00:19:34.782 } 00:19:34.782 ]' 00:19:34.782 16:01:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:35.040 16:01:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:35.040 16:01:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:35.040 16:01:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:35.040 16:01:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:35.040 16:01:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:35.040 16:01:03 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:35.040 16:01:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:35.297 16:01:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:MzFhZDVkZDAyNTMwZDY2OTFmZDY0NDM0YjIwMzlhYzc3NzA5Mzk3Nzk1MWFmYWFlNe7tPQ==: --dhchap-ctrl-secret DHHC-1:03:MDM5M2Q5Njg5MGYzNzdjMzFjNDIzZGNlNzBlNzM5ZjA1NWNmNGI2OTlhNzZmYjhlYzAzYjU1ZjM0ODQzMmViYtXVTjs=: 00:19:35.924 16:01:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:35.924 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:35.924 16:01:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:19:35.924 16:01:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:35.924 16:01:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:35.924 16:01:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:35.924 16:01:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:35.924 16:01:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:35.924 16:01:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:35.924 16:01:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 1 00:19:35.924 16:01:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:35.924 16:01:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:35.924 16:01:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:35.924 16:01:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:35.924 16:01:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:35.924 16:01:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:35.924 16:01:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:35.924 16:01:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:35.925 16:01:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:35.925 16:01:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:35.925 16:01:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:36.525 00:19:36.526 16:01:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:36.526 16:01:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:36.526 16:01:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:36.526 16:01:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:36.526 16:01:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:36.526 16:01:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:36.526 16:01:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:36.526 16:01:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:36.526 16:01:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:36.526 { 00:19:36.526 "cntlid": 139, 00:19:36.526 "qid": 0, 00:19:36.526 "state": "enabled", 00:19:36.526 "thread": "nvmf_tgt_poll_group_000", 00:19:36.526 "listen_address": { 00:19:36.526 "trtype": "TCP", 00:19:36.526 "adrfam": "IPv4", 00:19:36.526 "traddr": "10.0.0.2", 00:19:36.526 "trsvcid": "4420" 00:19:36.526 }, 00:19:36.526 "peer_address": { 00:19:36.526 "trtype": "TCP", 00:19:36.526 "adrfam": "IPv4", 00:19:36.526 "traddr": "10.0.0.1", 00:19:36.526 "trsvcid": "46832" 00:19:36.526 }, 00:19:36.526 "auth": { 00:19:36.526 "state": "completed", 00:19:36.526 "digest": "sha512", 00:19:36.526 "dhgroup": "ffdhe8192" 00:19:36.526 } 00:19:36.526 } 00:19:36.526 ]' 00:19:36.526 16:01:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:36.784 16:01:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:36.784 16:01:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:36.784 16:01:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:36.784 16:01:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:36.784 16:01:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:36.784 16:01:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:36.784 16:01:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:37.041 16:01:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:ZTI4N2Q4OTVhN2ExNzA5MzBkYjdhM2ZiYzM2ZTE2YWGKVPxo: --dhchap-ctrl-secret DHHC-1:02:MDE2YWU0NGFhZDllNGFlNGU5MzE1NDE1NzY4ZjBmNGZjZWMzYjc1YTE0MjhiMWYwT6tFBw==: 00:19:37.607 16:01:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:37.607 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:37.607 16:01:06 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:19:37.607 16:01:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:37.607 16:01:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:37.607 16:01:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:37.607 16:01:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:37.607 16:01:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:37.607 16:01:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:37.607 16:01:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 2 00:19:37.607 16:01:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:37.607 16:01:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:37.607 16:01:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:37.607 16:01:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:37.607 16:01:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:37.607 16:01:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:37.607 16:01:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:37.607 16:01:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:37.607 16:01:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:37.607 16:01:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:37.607 16:01:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:38.173 00:19:38.173 16:01:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:38.173 16:01:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:38.173 16:01:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:38.431 16:01:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:38.431 16:01:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:38.431 16:01:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:38.431 16:01:07 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:19:38.431 16:01:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:38.431 16:01:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:38.431 { 00:19:38.431 "cntlid": 141, 00:19:38.431 "qid": 0, 00:19:38.431 "state": "enabled", 00:19:38.431 "thread": "nvmf_tgt_poll_group_000", 00:19:38.431 "listen_address": { 00:19:38.431 "trtype": "TCP", 00:19:38.431 "adrfam": "IPv4", 00:19:38.431 "traddr": "10.0.0.2", 00:19:38.431 "trsvcid": "4420" 00:19:38.431 }, 00:19:38.431 "peer_address": { 00:19:38.431 "trtype": "TCP", 00:19:38.431 "adrfam": "IPv4", 00:19:38.431 "traddr": "10.0.0.1", 00:19:38.431 "trsvcid": "46860" 00:19:38.431 }, 00:19:38.431 "auth": { 00:19:38.431 "state": "completed", 00:19:38.431 "digest": "sha512", 00:19:38.431 "dhgroup": "ffdhe8192" 00:19:38.431 } 00:19:38.431 } 00:19:38.431 ]' 00:19:38.431 16:01:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:38.431 16:01:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:38.431 16:01:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:38.431 16:01:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:38.431 16:01:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:38.431 16:01:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:38.431 16:01:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:38.431 16:01:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:38.689 16:01:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:OTc5NDAxZjllMjViYjlkNjgyYWUxZGU3ZDRhZWY3ZDRiZTI2YjRjYWNlOWQwMDAySmlsEw==: --dhchap-ctrl-secret DHHC-1:01:MTBhMTg4MjQ2MzkxMzM5NDYzZTQxMzFmMGNmNzIyYTjcjymE: 00:19:39.254 16:01:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:39.254 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:39.254 16:01:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:19:39.254 16:01:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:39.254 16:01:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:39.254 16:01:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:39.255 16:01:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:39.255 16:01:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:39.255 16:01:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:39.513 16:01:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate 
sha512 ffdhe8192 3 00:19:39.513 16:01:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:39.513 16:01:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:39.513 16:01:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:39.513 16:01:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:39.513 16:01:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:39.513 16:01:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:19:39.513 16:01:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:39.513 16:01:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:39.513 16:01:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:39.513 16:01:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:39.513 16:01:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:40.079 00:19:40.079 16:01:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:40.079 16:01:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:40.079 16:01:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:40.079 16:01:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:40.079 16:01:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:40.079 16:01:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:40.079 16:01:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:40.079 16:01:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:40.079 16:01:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:40.079 { 00:19:40.079 "cntlid": 143, 00:19:40.079 "qid": 0, 00:19:40.079 "state": "enabled", 00:19:40.079 "thread": "nvmf_tgt_poll_group_000", 00:19:40.079 "listen_address": { 00:19:40.079 "trtype": "TCP", 00:19:40.079 "adrfam": "IPv4", 00:19:40.079 "traddr": "10.0.0.2", 00:19:40.079 "trsvcid": "4420" 00:19:40.079 }, 00:19:40.079 "peer_address": { 00:19:40.079 "trtype": "TCP", 00:19:40.079 "adrfam": "IPv4", 00:19:40.079 "traddr": "10.0.0.1", 00:19:40.079 "trsvcid": "38752" 00:19:40.079 }, 00:19:40.079 "auth": { 00:19:40.079 "state": "completed", 00:19:40.079 "digest": "sha512", 00:19:40.079 "dhgroup": "ffdhe8192" 00:19:40.079 } 00:19:40.079 } 00:19:40.079 ]' 00:19:40.079 16:01:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:40.338 16:01:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:40.338 
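
This ffdhe8192/key3 pass, like the earlier key3 passes, registers the host with --dhchap-key key3 and no controller key, so only the host is challenged: the nvme connect that follows carries a single --dhchap-secret (a DHHC-1:03:, i.e. SHA-512-transformed, secret) and no --dhchap-ctrl-secret. The tail of the test then re-enables every digest and dhgroup on the host at once (the IFS=, printf lines) and exercises the failure path: attaching with key2 when the host entry only allows key1 must fail, which the NOT wrapper asserts. The two target-side registration variants, side by side as a sketch:

  # bidirectional: the controller must also prove itself to the host
  $rpc nvmf_subsystem_add_host "$subnqn" "$hostnqn" --dhchap-key key0 --dhchap-ctrlr-key ckey0
  # unidirectional (the key3 passes): host-side authentication only
  $rpc nvmf_subsystem_add_host "$subnqn" "$hostnqn" --dhchap-key key3
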
16:01:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:40.338 16:01:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:40.338 16:01:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:40.338 16:01:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:40.338 16:01:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:40.338 16:01:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:40.596 16:01:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:ZTQ5Zjk1NmU4OTU5ZjU2ZThhZTRiNTkxNzhkYTFkZDliOTA4ODk4ZjQ4ZmQzNGZlOTI2NDQ1MDgxNGQ3MWNlZA1oZjU=: 00:19:41.162 16:01:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:41.162 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:41.162 16:01:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:19:41.162 16:01:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:41.162 16:01:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:41.162 16:01:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:41.162 16:01:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:19:41.162 16:01:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@103 -- # printf %s sha256,sha384,sha512 00:19:41.162 16:01:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:19:41.162 16:01:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@103 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:19:41.162 16:01:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:19:41.162 16:01:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:19:41.162 16:01:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@114 -- # connect_authenticate sha512 ffdhe8192 0 00:19:41.162 16:01:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:41.162 16:01:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:41.162 16:01:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:41.162 16:01:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:41.162 16:01:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:41.162 16:01:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 
--dhchap-ctrlr-key ckey0 00:19:41.162 16:01:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:41.162 16:01:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:41.162 16:01:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:41.162 16:01:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:41.162 16:01:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:41.726 00:19:41.726 16:01:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:41.726 16:01:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:41.726 16:01:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:41.983 16:01:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:41.983 16:01:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:41.983 16:01:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:41.983 16:01:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:41.983 16:01:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:41.983 16:01:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:41.983 { 00:19:41.983 "cntlid": 145, 00:19:41.983 "qid": 0, 00:19:41.983 "state": "enabled", 00:19:41.983 "thread": "nvmf_tgt_poll_group_000", 00:19:41.983 "listen_address": { 00:19:41.983 "trtype": "TCP", 00:19:41.983 "adrfam": "IPv4", 00:19:41.983 "traddr": "10.0.0.2", 00:19:41.983 "trsvcid": "4420" 00:19:41.983 }, 00:19:41.983 "peer_address": { 00:19:41.983 "trtype": "TCP", 00:19:41.983 "adrfam": "IPv4", 00:19:41.983 "traddr": "10.0.0.1", 00:19:41.983 "trsvcid": "38768" 00:19:41.983 }, 00:19:41.983 "auth": { 00:19:41.983 "state": "completed", 00:19:41.983 "digest": "sha512", 00:19:41.983 "dhgroup": "ffdhe8192" 00:19:41.983 } 00:19:41.983 } 00:19:41.983 ]' 00:19:41.983 16:01:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:41.983 16:01:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:41.983 16:01:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:41.983 16:01:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:41.983 16:01:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:41.983 16:01:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:41.983 16:01:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:41.983 16:01:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:42.241 16:01:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:MzFhZDVkZDAyNTMwZDY2OTFmZDY0NDM0YjIwMzlhYzc3NzA5Mzk3Nzk1MWFmYWFlNe7tPQ==: --dhchap-ctrl-secret DHHC-1:03:MDM5M2Q5Njg5MGYzNzdjMzFjNDIzZGNlNzBlNzM5ZjA1NWNmNGI2OTlhNzZmYjhlYzAzYjU1ZjM0ODQzMmViYtXVTjs=: 00:19:42.807 16:01:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:42.807 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:42.807 16:01:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:19:42.807 16:01:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:42.807 16:01:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:42.807 16:01:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:42.807 16:01:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@117 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 00:19:42.807 16:01:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:42.807 16:01:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:42.807 16:01:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:42.807 16:01:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@118 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:19:42.807 16:01:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:19:42.807 16:01:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:19:42.807 16:01:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:19:42.807 16:01:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:42.807 16:01:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:19:42.807 16:01:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:42.807 16:01:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:19:42.807 16:01:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key 
key2 00:19:43.373 request: 00:19:43.373 { 00:19:43.373 "name": "nvme0", 00:19:43.373 "trtype": "tcp", 00:19:43.373 "traddr": "10.0.0.2", 00:19:43.373 "adrfam": "ipv4", 00:19:43.373 "trsvcid": "4420", 00:19:43.373 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:19:43.373 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:19:43.373 "prchk_reftag": false, 00:19:43.373 "prchk_guard": false, 00:19:43.373 "hdgst": false, 00:19:43.373 "ddgst": false, 00:19:43.373 "dhchap_key": "key2", 00:19:43.373 "method": "bdev_nvme_attach_controller", 00:19:43.373 "req_id": 1 00:19:43.373 } 00:19:43.373 Got JSON-RPC error response 00:19:43.373 response: 00:19:43.373 { 00:19:43.373 "code": -5, 00:19:43.373 "message": "Input/output error" 00:19:43.373 } 00:19:43.373 16:01:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:19:43.373 16:01:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:19:43.373 16:01:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:19:43.373 16:01:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:19:43.373 16:01:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@121 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:19:43.373 16:01:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:43.373 16:01:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:43.373 16:01:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:43.373 16:01:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@124 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:43.373 16:01:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:43.373 16:01:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:43.373 16:01:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:43.373 16:01:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@125 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:19:43.373 16:01:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:19:43.373 16:01:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:19:43.373 16:01:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:19:43.373 16:01:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:43.373 16:01:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:19:43.373 16:01:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:43.373 16:01:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:19:43.373 16:01:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:19:43.631 request: 00:19:43.631 { 00:19:43.631 "name": "nvme0", 00:19:43.631 "trtype": "tcp", 00:19:43.631 "traddr": "10.0.0.2", 00:19:43.631 "adrfam": "ipv4", 00:19:43.631 "trsvcid": "4420", 00:19:43.631 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:19:43.631 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:19:43.631 "prchk_reftag": false, 00:19:43.631 "prchk_guard": false, 00:19:43.631 "hdgst": false, 00:19:43.631 "ddgst": false, 00:19:43.631 "dhchap_key": "key1", 00:19:43.631 "dhchap_ctrlr_key": "ckey2", 00:19:43.631 "method": "bdev_nvme_attach_controller", 00:19:43.631 "req_id": 1 00:19:43.631 } 00:19:43.631 Got JSON-RPC error response 00:19:43.631 response: 00:19:43.631 { 00:19:43.631 "code": -5, 00:19:43.631 "message": "Input/output error" 00:19:43.631 } 00:19:43.631 16:01:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:19:43.631 16:01:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:19:43.631 16:01:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:19:43.631 16:01:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:19:43.631 16:01:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@128 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:19:43.631 16:01:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:43.631 16:01:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:43.631 16:01:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:43.631 16:01:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@131 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 00:19:43.631 16:01:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:43.631 16:01:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:43.631 16:01:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:43.631 16:01:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@132 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:43.631 16:01:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:19:43.631 16:01:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:43.631 16:01:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local 
arg=hostrpc 00:19:43.631 16:01:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:43.631 16:01:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:19:43.631 16:01:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:43.631 16:01:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:43.631 16:01:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:44.198 request: 00:19:44.198 { 00:19:44.198 "name": "nvme0", 00:19:44.198 "trtype": "tcp", 00:19:44.198 "traddr": "10.0.0.2", 00:19:44.198 "adrfam": "ipv4", 00:19:44.198 "trsvcid": "4420", 00:19:44.198 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:19:44.198 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:19:44.198 "prchk_reftag": false, 00:19:44.198 "prchk_guard": false, 00:19:44.198 "hdgst": false, 00:19:44.198 "ddgst": false, 00:19:44.198 "dhchap_key": "key1", 00:19:44.198 "dhchap_ctrlr_key": "ckey1", 00:19:44.198 "method": "bdev_nvme_attach_controller", 00:19:44.198 "req_id": 1 00:19:44.198 } 00:19:44.198 Got JSON-RPC error response 00:19:44.198 response: 00:19:44.198 { 00:19:44.198 "code": -5, 00:19:44.198 "message": "Input/output error" 00:19:44.198 } 00:19:44.198 16:01:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:19:44.198 16:01:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:19:44.198 16:01:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:19:44.198 16:01:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:19:44.198 16:01:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@135 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:19:44.198 16:01:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:44.198 16:01:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:44.198 16:01:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:44.198 16:01:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@138 -- # killprocess 3761801 00:19:44.198 16:01:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 3761801 ']' 00:19:44.198 16:01:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 3761801 00:19:44.198 16:01:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:19:44.198 16:01:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:44.198 16:01:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3761801 00:19:44.198 16:01:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:19:44.198 16:01:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' 
reactor_0 = sudo ']' 00:19:44.198 16:01:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3761801' 00:19:44.198 killing process with pid 3761801 00:19:44.198 16:01:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 3761801 00:19:44.198 16:01:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 3761801 00:19:44.456 16:01:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@139 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:19:44.456 16:01:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:44.456 16:01:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:44.456 16:01:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:44.456 16:01:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=3783017 00:19:44.456 16:01:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 3783017 00:19:44.456 16:01:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 3783017 ']' 00:19:44.456 16:01:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:44.456 16:01:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:44.456 16:01:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:44.456 16:01:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:19:44.456 16:01:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:44.456 16:01:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:45.388 16:01:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:45.388 16:01:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:19:45.388 16:01:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:45.388 16:01:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:45.388 16:01:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:45.388 16:01:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:45.388 16:01:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@140 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:19:45.388 16:01:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@142 -- # waitforlisten 3783017 00:19:45.388 16:01:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 3783017 ']' 00:19:45.388 16:01:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:45.388 16:01:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:45.388 16:01:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:45.388 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
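The nvmfappstart above relaunches nvmf_tgt with --wait-for-rpc, so the target idles until its JSON-RPC server answers and initialization is finished over RPC. A minimal sketch of such a wait loop, assuming the default /var/tmp/spdk.sock socket (the real waitforlisten helper in autotest_common.sh is more careful and also tracks the pid):

wait_for_rpc_sock() {
    local sock=${1:-/var/tmp/spdk.sock}
    local rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    for _ in $(seq 1 100); do
        # rpc_get_methods succeeds as soon as the RPC server is listening
        if "$rpc" -s "$sock" rpc_get_methods &> /dev/null; then
            return 0
        fi
        sleep 0.1
    done
    return 1  # target never came up
}

Once the socket answers, the bare rpc_cmd at target/auth.sh@143 can finish bring-up, e.g. by sending framework_start_init.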
00:19:45.388 16:01:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:45.388 16:01:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:45.388 16:01:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:45.388 16:01:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:19:45.388 16:01:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@143 -- # rpc_cmd 00:19:45.388 16:01:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:45.388 16:01:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:45.645 16:01:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:45.645 16:01:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@153 -- # connect_authenticate sha512 ffdhe8192 3 00:19:45.645 16:01:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:45.645 16:01:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:45.645 16:01:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:45.645 16:01:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:45.645 16:01:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:45.645 16:01:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:19:45.645 16:01:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:45.645 16:01:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:45.645 16:01:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:45.645 16:01:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:45.646 16:01:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:46.210 00:19:46.210 16:01:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:46.211 16:01:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:46.211 16:01:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:46.211 16:01:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:46.211 16:01:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:46.211 16:01:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:46.211 16:01:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:46.211 16:01:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:46.211 16:01:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:46.211 { 00:19:46.211 
"cntlid": 1, 00:19:46.211 "qid": 0, 00:19:46.211 "state": "enabled", 00:19:46.211 "thread": "nvmf_tgt_poll_group_000", 00:19:46.211 "listen_address": { 00:19:46.211 "trtype": "TCP", 00:19:46.211 "adrfam": "IPv4", 00:19:46.211 "traddr": "10.0.0.2", 00:19:46.211 "trsvcid": "4420" 00:19:46.211 }, 00:19:46.211 "peer_address": { 00:19:46.211 "trtype": "TCP", 00:19:46.211 "adrfam": "IPv4", 00:19:46.211 "traddr": "10.0.0.1", 00:19:46.211 "trsvcid": "38810" 00:19:46.211 }, 00:19:46.211 "auth": { 00:19:46.211 "state": "completed", 00:19:46.211 "digest": "sha512", 00:19:46.211 "dhgroup": "ffdhe8192" 00:19:46.211 } 00:19:46.211 } 00:19:46.211 ]' 00:19:46.211 16:01:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:46.211 16:01:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:46.211 16:01:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:46.468 16:01:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:46.468 16:01:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:46.468 16:01:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:46.468 16:01:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:46.468 16:01:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:46.468 16:01:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:ZTQ5Zjk1NmU4OTU5ZjU2ZThhZTRiNTkxNzhkYTFkZDliOTA4ODk4ZjQ4ZmQzNGZlOTI2NDQ1MDgxNGQ3MWNlZA1oZjU=: 00:19:47.033 16:01:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:47.033 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:47.033 16:01:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:19:47.033 16:01:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:47.033 16:01:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:47.033 16:01:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:47.033 16:01:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:19:47.033 16:01:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:47.033 16:01:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:47.033 16:01:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:47.033 16:01:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@157 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:19:47.033 16:01:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:19:47.291 16:01:16 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@158 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:47.291 16:01:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:19:47.292 16:01:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:47.292 16:01:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:19:47.292 16:01:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:47.292 16:01:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:19:47.292 16:01:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:47.292 16:01:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:47.292 16:01:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:47.550 request: 00:19:47.550 { 00:19:47.550 "name": "nvme0", 00:19:47.550 "trtype": "tcp", 00:19:47.550 "traddr": "10.0.0.2", 00:19:47.550 "adrfam": "ipv4", 00:19:47.550 "trsvcid": "4420", 00:19:47.550 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:19:47.550 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:19:47.550 "prchk_reftag": false, 00:19:47.550 "prchk_guard": false, 00:19:47.550 "hdgst": false, 00:19:47.550 "ddgst": false, 00:19:47.550 "dhchap_key": "key3", 00:19:47.550 "method": "bdev_nvme_attach_controller", 00:19:47.550 "req_id": 1 00:19:47.550 } 00:19:47.550 Got JSON-RPC error response 00:19:47.550 response: 00:19:47.550 { 00:19:47.550 "code": -5, 00:19:47.550 "message": "Input/output error" 00:19:47.550 } 00:19:47.550 16:01:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:19:47.550 16:01:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:19:47.550 16:01:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:19:47.550 16:01:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:19:47.550 16:01:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@163 -- # IFS=, 00:19:47.550 16:01:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@164 -- # printf %s sha256,sha384,sha512 00:19:47.550 16:01:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@163 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:19:47.550 16:01:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:19:47.808 16:01:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@169 -- # NOT hostrpc 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:47.808 16:01:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:19:47.808 16:01:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:47.808 16:01:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:19:47.808 16:01:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:47.808 16:01:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:19:47.808 16:01:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:47.808 16:01:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:47.808 16:01:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:47.808 request: 00:19:47.808 { 00:19:47.808 "name": "nvme0", 00:19:47.808 "trtype": "tcp", 00:19:47.808 "traddr": "10.0.0.2", 00:19:47.808 "adrfam": "ipv4", 00:19:47.808 "trsvcid": "4420", 00:19:47.808 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:19:47.808 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:19:47.808 "prchk_reftag": false, 00:19:47.808 "prchk_guard": false, 00:19:47.808 "hdgst": false, 00:19:47.808 "ddgst": false, 00:19:47.808 "dhchap_key": "key3", 00:19:47.808 "method": "bdev_nvme_attach_controller", 00:19:47.808 "req_id": 1 00:19:47.808 } 00:19:47.808 Got JSON-RPC error response 00:19:47.808 response: 00:19:47.808 { 00:19:47.808 "code": -5, 00:19:47.808 "message": "Input/output error" 00:19:47.808 } 00:19:47.808 16:01:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:19:47.808 16:01:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:19:47.808 16:01:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:19:47.808 16:01:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:19:47.808 16:01:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:19:47.808 16:01:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@176 -- # printf %s sha256,sha384,sha512 00:19:47.808 16:01:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:19:47.808 16:01:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@176 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:19:47.808 16:01:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:19:47.808 16:01:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:19:48.066 16:01:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@186 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:19:48.066 16:01:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:48.066 16:01:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:48.066 16:01:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:48.067 16:01:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@187 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:19:48.067 16:01:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:48.067 16:01:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:48.067 16:01:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:48.067 16:01:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@188 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:19:48.067 16:01:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:19:48.067 16:01:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:19:48.067 16:01:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:19:48.067 16:01:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:48.067 16:01:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:19:48.067 16:01:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:48.067 16:01:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:19:48.067 16:01:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:19:48.325 request: 00:19:48.325 { 00:19:48.325 "name": "nvme0", 00:19:48.325 "trtype": "tcp", 00:19:48.325 "traddr": "10.0.0.2", 00:19:48.325 "adrfam": "ipv4", 00:19:48.325 "trsvcid": "4420", 00:19:48.325 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:19:48.325 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:19:48.325 "prchk_reftag": false, 00:19:48.325 "prchk_guard": false, 00:19:48.325 "hdgst": false, 00:19:48.325 "ddgst": false, 00:19:48.325 
"dhchap_key": "key0", 00:19:48.325 "dhchap_ctrlr_key": "key1", 00:19:48.325 "method": "bdev_nvme_attach_controller", 00:19:48.325 "req_id": 1 00:19:48.325 } 00:19:48.325 Got JSON-RPC error response 00:19:48.325 response: 00:19:48.325 { 00:19:48.325 "code": -5, 00:19:48.325 "message": "Input/output error" 00:19:48.325 } 00:19:48.325 16:01:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:19:48.325 16:01:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:19:48.325 16:01:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:19:48.325 16:01:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:19:48.325 16:01:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@192 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:19:48.325 16:01:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:19:48.582 00:19:48.583 16:01:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # hostrpc bdev_nvme_get_controllers 00:19:48.583 16:01:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # jq -r '.[].name' 00:19:48.583 16:01:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:48.583 16:01:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:48.583 16:01:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@196 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:48.583 16:01:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:48.841 16:01:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@198 -- # trap - SIGINT SIGTERM EXIT 00:19:48.841 16:01:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@199 -- # cleanup 00:19:48.841 16:01:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 3761935 00:19:48.841 16:01:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 3761935 ']' 00:19:48.841 16:01:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 3761935 00:19:48.841 16:01:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:19:48.841 16:01:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:48.841 16:01:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3761935 00:19:48.841 16:01:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:19:48.841 16:01:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:19:48.841 16:01:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3761935' 00:19:48.841 killing process with pid 3761935 00:19:48.841 16:01:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 3761935 00:19:48.841 16:01:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 3761935 
00:19:49.098 16:01:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:19:49.098 16:01:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:19:49.098 16:01:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@117 -- # sync 00:19:49.098 16:01:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:49.098 16:01:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@120 -- # set +e 00:19:49.098 16:01:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:49.098 16:01:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:49.098 rmmod nvme_tcp 00:19:49.355 rmmod nvme_fabrics 00:19:49.355 rmmod nvme_keyring 00:19:49.355 16:01:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:49.355 16:01:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@124 -- # set -e 00:19:49.355 16:01:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@125 -- # return 0 00:19:49.355 16:01:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@489 -- # '[' -n 3783017 ']' 00:19:49.355 16:01:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@490 -- # killprocess 3783017 00:19:49.355 16:01:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 3783017 ']' 00:19:49.355 16:01:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 3783017 00:19:49.355 16:01:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:19:49.355 16:01:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:49.355 16:01:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3783017 00:19:49.355 16:01:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:19:49.355 16:01:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:19:49.355 16:01:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3783017' 00:19:49.355 killing process with pid 3783017 00:19:49.355 16:01:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 3783017 00:19:49.355 16:01:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 3783017 00:19:49.613 16:01:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:49.613 16:01:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:19:49.613 16:01:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:19:49.613 16:01:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:49.613 16:01:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:49.613 16:01:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:49.613 16:01:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:49.613 16:01:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:51.544 16:01:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:19:51.544 16:01:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.ECz /tmp/spdk.key-sha256.RX5 /tmp/spdk.key-sha384.6jv /tmp/spdk.key-sha512.tnr /tmp/spdk.key-sha512.N9s /tmp/spdk.key-sha384.Vmw /tmp/spdk.key-sha256.qru '' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf-auth.log 00:19:51.544 00:19:51.544 real 2m10.791s 00:19:51.544 user 5m0.837s 00:19:51.544 sys 0m20.380s 00:19:51.544 16:01:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:19:51.544 16:01:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:51.544 ************************************ 00:19:51.544 END TEST nvmf_auth_target 00:19:51.544 ************************************ 00:19:51.544 16:01:20 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:19:51.544 16:01:20 nvmf_tcp -- nvmf/nvmf.sh@59 -- # '[' tcp = tcp ']' 00:19:51.544 16:01:20 nvmf_tcp -- nvmf/nvmf.sh@60 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:19:51.544 16:01:20 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:19:51.544 16:01:20 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:51.544 16:01:20 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:19:51.544 ************************************ 00:19:51.544 START TEST nvmf_bdevio_no_huge 00:19:51.544 ************************************ 00:19:51.544 16:01:20 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:19:51.802 * Looking for test storage... 00:19:51.802 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:51.802 16:01:20 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:51.802 16:01:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:19:51.802 16:01:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:51.802 16:01:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:51.802 16:01:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:51.802 16:01:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:51.802 16:01:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:51.802 16:01:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:51.802 16:01:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:51.802 16:01:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:51.802 16:01:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:51.802 16:01:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:51.802 16:01:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:19:51.802 16:01:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:19:51.802 16:01:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:51.802 16:01:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:51.802 16:01:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:51.802 16:01:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 
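nvme gen-hostnqn above emits a host NQN in the nqn.2014-08.org.nvmexpress:uuid:<uuid> form, and common.sh reuses the same UUID as the host ID passed via --hostid in the connect lines. A sketch of that derivation, assuming it is done with a plain parameter expansion:

NVME_HOSTNQN=$(nvme gen-hostnqn)   # e.g. nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562
NVME_HOSTID=${NVME_HOSTNQN##*:}    # strip everything through the last colon, leaving the bare UUID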
00:19:51.802 16:01:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:51.802 16:01:20 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:51.802 16:01:20 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:51.802 16:01:20 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:51.802 16:01:20 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:51.802 16:01:20 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:51.802 16:01:20 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:51.802 16:01:20 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:19:51.803 16:01:20 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:51.803 16:01:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@47 -- # : 0 00:19:51.803 16:01:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:51.803 16:01:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:51.803 16:01:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:51.803 16:01:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:51.803 16:01:20 
nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:51.803 16:01:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:51.803 16:01:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:51.803 16:01:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:51.803 16:01:20 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:51.803 16:01:20 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:51.803 16:01:20 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:19:51.803 16:01:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:19:51.803 16:01:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:51.803 16:01:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:51.803 16:01:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:51.803 16:01:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:51.803 16:01:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:51.803 16:01:20 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:51.803 16:01:20 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:51.803 16:01:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:19:51.803 16:01:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:19:51.803 16:01:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@285 -- # xtrace_disable 00:19:51.803 16:01:20 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:57.059 16:01:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:57.059 16:01:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # pci_devs=() 00:19:57.059 16:01:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:57.059 16:01:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:57.059 16:01:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:57.059 16:01:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:57.059 16:01:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:57.059 16:01:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@295 -- # net_devs=() 00:19:57.059 16:01:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:57.059 16:01:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@296 -- # e810=() 00:19:57.059 16:01:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@296 -- # local -ga e810 00:19:57.059 16:01:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # x722=() 00:19:57.059 16:01:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # local -ga x722 00:19:57.059 16:01:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # mlx=() 00:19:57.059 16:01:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # local -ga mlx 00:19:57.059 16:01:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:57.059 16:01:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 
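gather_supported_nvmf_pci_devs above whitelists NICs by vendor:device ID (the e810/x722/mlx arrays) and then, for tcp, resolves each PCI function to its kernel netdev via the /sys/bus/pci/devices/$pci/net/ glob seen a few lines below. An equivalent, simplified walk for the E810 ID (0x8086:0x159b) found on this machine:

for pci in /sys/bus/pci/devices/*; do
    [[ $(< "$pci/vendor") == 0x8086 && $(< "$pci/device") == 0x159b ]] || continue
    ls "$pci/net" 2> /dev/null   # prints the netdev name, e.g. cvl_0_0
done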
00:19:57.059 16:01:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:57.059 16:01:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:57.059 16:01:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:57.059 16:01:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:57.059 16:01:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:57.059 16:01:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:57.059 16:01:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:57.059 16:01:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:57.059 16:01:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:57.059 16:01:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:57.059 16:01:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:57.059 16:01:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:57.059 16:01:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:57.059 16:01:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:57.059 16:01:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:57.059 16:01:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:57.059 16:01:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:19:57.059 Found 0000:86:00.0 (0x8086 - 0x159b) 00:19:57.059 16:01:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:57.059 16:01:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:57.059 16:01:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:57.059 16:01:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:57.059 16:01:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:57.059 16:01:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:57.059 16:01:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:19:57.059 Found 0000:86:00.1 (0x8086 - 0x159b) 00:19:57.059 16:01:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:57.059 16:01:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:57.059 16:01:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:57.059 16:01:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:57.059 16:01:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:57.059 16:01:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:57.059 16:01:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:57.059 16:01:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:57.059 
16:01:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:57.059 16:01:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:57.059 16:01:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:57.059 16:01:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:57.059 16:01:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:57.059 16:01:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:57.059 16:01:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:57.059 16:01:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:19:57.059 Found net devices under 0000:86:00.0: cvl_0_0 00:19:57.059 16:01:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:57.059 16:01:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:57.059 16:01:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:57.059 16:01:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:57.059 16:01:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:57.059 16:01:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:57.059 16:01:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:57.059 16:01:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:57.059 16:01:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:19:57.059 Found net devices under 0000:86:00.1: cvl_0_1 00:19:57.059 16:01:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:57.059 16:01:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:19:57.059 16:01:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # is_hw=yes 00:19:57.059 16:01:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:19:57.059 16:01:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:19:57.059 16:01:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:19:57.059 16:01:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:57.059 16:01:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:57.059 16:01:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:57.059 16:01:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:19:57.059 16:01:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:57.059 16:01:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:57.059 16:01:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:19:57.059 16:01:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:57.059 16:01:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:57.059 16:01:25 
nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:19:57.059 16:01:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:19:57.059 16:01:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:19:57.059 16:01:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:57.059 16:01:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:57.059 16:01:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:57.059 16:01:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:19:57.059 16:01:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:57.059 16:01:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:57.059 16:01:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:57.059 16:01:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:19:57.059 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:57.059 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.204 ms 00:19:57.059 00:19:57.059 --- 10.0.0.2 ping statistics --- 00:19:57.059 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:57.060 rtt min/avg/max/mdev = 0.204/0.204/0.204/0.000 ms 00:19:57.060 16:01:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:57.060 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:57.060 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.228 ms 00:19:57.060 00:19:57.060 --- 10.0.0.1 ping statistics --- 00:19:57.060 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:57.060 rtt min/avg/max/mdev = 0.228/0.228/0.228/0.000 ms 00:19:57.060 16:01:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:57.060 16:01:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # return 0 00:19:57.060 16:01:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:19:57.060 16:01:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:57.060 16:01:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:19:57.060 16:01:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:19:57.060 16:01:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:57.060 16:01:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:19:57.060 16:01:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:19:57.060 16:01:25 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:19:57.060 16:01:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:57.060 16:01:25 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:57.060 16:01:25 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:57.060 16:01:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@481 -- # nvmfpid=3787275 00:19:57.060 16:01:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@480 -- # ip netns exec 
cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:19:57.060 16:01:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # waitforlisten 3787275 00:19:57.060 16:01:25 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@829 -- # '[' -z 3787275 ']' 00:19:57.060 16:01:25 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:57.060 16:01:25 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:57.060 16:01:25 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:57.060 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:57.060 16:01:25 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:57.060 16:01:25 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:57.060 [2024-07-15 16:01:25.895930] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 00:19:57.060 [2024-07-15 16:01:25.895980] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:19:57.060 [2024-07-15 16:01:25.959570] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:57.317 [2024-07-15 16:01:26.045661] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:57.317 [2024-07-15 16:01:26.045692] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:57.317 [2024-07-15 16:01:26.045701] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:57.317 [2024-07-15 16:01:26.045708] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:57.317 [2024-07-15 16:01:26.045714] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
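The block above is nvmf_tcp_init doing physical-loopback plumbing: one E810 port (cvl_0_0) is moved into a private network namespace to act as the target at 10.0.0.2, while its sibling port (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1, and the target binary then runs inside that namespace. Collected into a standalone sketch — the commands are the ones traced, with $rootdir standing in for the jenkins workspace path:

    # Run as root; cvl_0_0/cvl_0_1 are the renamed ice ports on this rig.
    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk      # target port leaves the root ns
    ip addr add 10.0.0.1/24 dev cvl_0_1            # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    # The point of this particular test: the target runs with --no-huge -s 1024,
    # i.e. 1024 MiB of ordinary memory instead of hugepages.
    ip netns exec cvl_0_0_ns_spdk \
      "$rootdir/build/bin/nvmf_tgt" -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78

The two pings in the trace are the sanity check that both directions of this plumbing work before any NVMe/TCP traffic is attempted.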
00:19:57.317 [2024-07-15 16:01:26.045826] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:19:57.317 [2024-07-15 16:01:26.045936] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:19:57.317 [2024-07-15 16:01:26.046041] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:19:57.317 [2024-07-15 16:01:26.046042] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:19:57.881 16:01:26 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:57.881 16:01:26 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@862 -- # return 0 00:19:57.881 16:01:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:57.881 16:01:26 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:57.881 16:01:26 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:57.881 16:01:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:57.881 16:01:26 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:57.881 16:01:26 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:57.881 16:01:26 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:57.881 [2024-07-15 16:01:26.745300] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:57.881 16:01:26 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:57.881 16:01:26 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:19:57.881 16:01:26 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:57.881 16:01:26 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:57.881 Malloc0 00:19:57.881 16:01:26 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:57.881 16:01:26 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:19:57.881 16:01:26 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:57.881 16:01:26 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:57.881 16:01:26 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:57.881 16:01:26 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:57.881 16:01:26 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:57.881 16:01:26 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:57.881 16:01:26 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:57.881 16:01:26 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:57.881 16:01:26 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:57.881 16:01:26 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:57.881 [2024-07-15 16:01:26.789526] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:57.881 16:01:26 nvmf_tcp.nvmf_bdevio_no_huge -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:57.881 16:01:26 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:19:57.881 16:01:26 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:19:57.881 16:01:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # config=() 00:19:57.881 16:01:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # local subsystem config 00:19:57.881 16:01:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:57.881 16:01:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:57.881 { 00:19:57.881 "params": { 00:19:57.881 "name": "Nvme$subsystem", 00:19:57.881 "trtype": "$TEST_TRANSPORT", 00:19:57.881 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:57.881 "adrfam": "ipv4", 00:19:57.881 "trsvcid": "$NVMF_PORT", 00:19:57.881 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:57.881 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:57.881 "hdgst": ${hdgst:-false}, 00:19:57.881 "ddgst": ${ddgst:-false} 00:19:57.881 }, 00:19:57.881 "method": "bdev_nvme_attach_controller" 00:19:57.881 } 00:19:57.881 EOF 00:19:57.881 )") 00:19:57.881 16:01:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # cat 00:19:57.881 16:01:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@556 -- # jq . 00:19:57.881 16:01:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@557 -- # IFS=, 00:19:57.881 16:01:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:19:57.881 "params": { 00:19:57.881 "name": "Nvme1", 00:19:57.881 "trtype": "tcp", 00:19:57.881 "traddr": "10.0.0.2", 00:19:57.881 "adrfam": "ipv4", 00:19:57.881 "trsvcid": "4420", 00:19:57.881 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:57.881 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:57.881 "hdgst": false, 00:19:57.881 "ddgst": false 00:19:57.881 }, 00:19:57.881 "method": "bdev_nvme_attach_controller" 00:19:57.881 }' 00:19:58.137 [2024-07-15 16:01:26.839020] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 
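The JSON printed above is what gen_nvmf_target_json hands bdevio over /dev/fd/62, i.e. via process substitution, so bdevio attaches its controller from startup config rather than live RPC. A trimmed sketch of the single-controller case from this run — note the real helper renders one such object per subsystem from its heredoc template and (per the jq/IFS steps in the trace) joins them inside a larger config envelope that is not displayed here:

    gen_nvmf_target_json() {
      # Values below are the ones substituted in the trace above.
      cat <<'EOF'
    {
      "params": {
        "name": "Nvme1",
        "trtype": "tcp",
        "traddr": "10.0.0.2",
        "adrfam": "ipv4",
        "trsvcid": "4420",
        "subnqn": "nqn.2016-06.io.spdk:cnode1",
        "hostnqn": "nqn.2016-06.io.spdk:host1",
        "hdgst": false,
        "ddgst": false
      },
      "method": "bdev_nvme_attach_controller"
    }
    EOF
    }
    # bdevio then consumes it as its config file, which is where the
    # --json /dev/fd/62 in the trace comes from:
    #   bdevio --json <(gen_nvmf_target_json) --no-huge -s 1024

With hdgst/ddgst defaulting to false, the CUnit suite that follows exercises plain NVMe/TCP I/O paths (write/read, zeroes, split, compare-and-write, passthru) against the Malloc-backed Nvme1n1 namespace.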
00:19:58.137 [2024-07-15 16:01:26.839067] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid3787357 ] 00:19:58.137 [2024-07-15 16:01:26.896739] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:19:58.137 [2024-07-15 16:01:26.982745] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:58.137 [2024-07-15 16:01:26.982842] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:58.137 [2024-07-15 16:01:26.982842] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:58.393 I/O targets: 00:19:58.393 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:19:58.393 00:19:58.393 00:19:58.393 CUnit - A unit testing framework for C - Version 2.1-3 00:19:58.393 http://cunit.sourceforge.net/ 00:19:58.393 00:19:58.393 00:19:58.393 Suite: bdevio tests on: Nvme1n1 00:19:58.393 Test: blockdev write read block ...passed 00:19:58.650 Test: blockdev write zeroes read block ...passed 00:19:58.650 Test: blockdev write zeroes read no split ...passed 00:19:58.650 Test: blockdev write zeroes read split ...passed 00:19:58.650 Test: blockdev write zeroes read split partial ...passed 00:19:58.650 Test: blockdev reset ...[2024-07-15 16:01:27.452639] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:58.650 [2024-07-15 16:01:27.452696] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb4b300 (9): Bad file descriptor 00:19:58.650 [2024-07-15 16:01:27.467965] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:19:58.650 passed 00:19:58.650 Test: blockdev write read 8 blocks ...passed 00:19:58.650 Test: blockdev write read size > 128k ...passed 00:19:58.650 Test: blockdev write read invalid size ...passed 00:19:58.650 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:19:58.650 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:19:58.650 Test: blockdev write read max offset ...passed 00:19:58.908 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:19:58.908 Test: blockdev writev readv 8 blocks ...passed 00:19:58.908 Test: blockdev writev readv 30 x 1block ...passed 00:19:58.908 Test: blockdev writev readv block ...passed 00:19:58.908 Test: blockdev writev readv size > 128k ...passed 00:19:58.908 Test: blockdev writev readv size > 128k in two iovs ...passed 00:19:58.908 Test: blockdev comparev and writev ...[2024-07-15 16:01:27.725386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:58.908 [2024-07-15 16:01:27.725414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:58.908 [2024-07-15 16:01:27.725428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:58.908 [2024-07-15 16:01:27.725436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:58.908 [2024-07-15 16:01:27.725690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:58.908 [2024-07-15 16:01:27.725700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:19:58.908 [2024-07-15 16:01:27.725712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:58.908 [2024-07-15 16:01:27.725718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:19:58.908 [2024-07-15 16:01:27.725968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:58.908 [2024-07-15 16:01:27.725977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:19:58.908 [2024-07-15 16:01:27.725989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:58.908 [2024-07-15 16:01:27.725996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:19:58.908 [2024-07-15 16:01:27.726244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:58.908 [2024-07-15 16:01:27.726254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:19:58.908 [2024-07-15 16:01:27.726266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:58.908 [2024-07-15 16:01:27.726273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:19:58.908 passed 00:19:58.908 Test: blockdev nvme passthru rw ...passed 00:19:58.908 Test: blockdev nvme passthru vendor specific ...[2024-07-15 16:01:27.808626] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:58.908 [2024-07-15 16:01:27.808642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:19:58.908 [2024-07-15 16:01:27.808778] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:58.908 [2024-07-15 16:01:27.808788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:19:58.908 [2024-07-15 16:01:27.808916] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:58.908 [2024-07-15 16:01:27.808925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:19:58.908 [2024-07-15 16:01:27.809066] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:58.908 [2024-07-15 16:01:27.809076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:19:58.908 passed 00:19:58.908 Test: blockdev nvme admin passthru ...passed 00:19:59.165 Test: blockdev copy ...passed 00:19:59.165 00:19:59.165 Run Summary: Type Total Ran Passed Failed Inactive 00:19:59.165 suites 1 1 n/a 0 0 00:19:59.165 tests 23 23 23 0 0 00:19:59.165 asserts 152 152 152 0 n/a 00:19:59.165 00:19:59.165 Elapsed time = 1.234 seconds 00:19:59.423 16:01:28 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:59.423 16:01:28 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:59.423 16:01:28 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:59.423 16:01:28 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:59.423 16:01:28 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:19:59.423 16:01:28 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:19:59.423 16:01:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@488 -- # nvmfcleanup 00:19:59.423 16:01:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@117 -- # sync 00:19:59.423 16:01:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:59.423 16:01:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@120 -- # set +e 00:19:59.423 16:01:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:59.423 16:01:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:59.423 rmmod nvme_tcp 00:19:59.423 rmmod nvme_fabrics 00:19:59.423 rmmod nvme_keyring 00:19:59.423 16:01:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:59.423 16:01:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set -e 00:19:59.423 16:01:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # return 0 00:19:59.423 16:01:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@489 -- # '[' -n 3787275 ']' 00:19:59.423 16:01:28 
nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@490 -- # killprocess 3787275 00:19:59.423 16:01:28 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@948 -- # '[' -z 3787275 ']' 00:19:59.423 16:01:28 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@952 -- # kill -0 3787275 00:19:59.423 16:01:28 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@953 -- # uname 00:19:59.423 16:01:28 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:59.423 16:01:28 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3787275 00:19:59.423 16:01:28 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # process_name=reactor_3 00:19:59.423 16:01:28 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # '[' reactor_3 = sudo ']' 00:19:59.423 16:01:28 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3787275' 00:19:59.423 killing process with pid 3787275 00:19:59.423 16:01:28 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@967 -- # kill 3787275 00:19:59.423 16:01:28 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@972 -- # wait 3787275 00:19:59.682 16:01:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:59.682 16:01:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:19:59.682 16:01:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:19:59.682 16:01:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:59.682 16:01:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:59.682 16:01:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:59.682 16:01:28 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:59.682 16:01:28 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:02.216 16:01:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:02.216 00:20:02.216 real 0m10.178s 00:20:02.216 user 0m13.788s 00:20:02.216 sys 0m4.806s 00:20:02.216 16:01:30 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:02.216 16:01:30 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:02.216 ************************************ 00:20:02.216 END TEST nvmf_bdevio_no_huge 00:20:02.216 ************************************ 00:20:02.216 16:01:30 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:20:02.216 16:01:30 nvmf_tcp -- nvmf/nvmf.sh@61 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:20:02.216 16:01:30 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:20:02.216 16:01:30 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:02.216 16:01:30 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:20:02.216 ************************************ 00:20:02.216 START TEST nvmf_tls 00:20:02.216 ************************************ 00:20:02.216 16:01:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:20:02.216 * Looking for test storage... 
00:20:02.216 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:02.216 16:01:30 nvmf_tcp.nvmf_tls -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:02.216 16:01:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:20:02.216 16:01:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:02.216 16:01:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:02.216 16:01:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:02.216 16:01:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:02.216 16:01:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:02.216 16:01:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:02.216 16:01:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:02.216 16:01:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:02.216 16:01:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:02.216 16:01:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:02.216 16:01:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:20:02.216 16:01:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:20:02.216 16:01:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:02.216 16:01:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:02.216 16:01:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:02.216 16:01:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:02.216 16:01:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:02.216 16:01:30 nvmf_tcp.nvmf_tls -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:02.216 16:01:30 nvmf_tcp.nvmf_tls -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:02.216 16:01:30 nvmf_tcp.nvmf_tls -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:02.216 16:01:30 nvmf_tcp.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:02.216 16:01:30 nvmf_tcp.nvmf_tls -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:02.216 16:01:30 nvmf_tcp.nvmf_tls -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:02.216 16:01:30 nvmf_tcp.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:20:02.216 16:01:30 nvmf_tcp.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:02.216 16:01:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@47 -- # : 0 00:20:02.216 16:01:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:02.216 16:01:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:02.216 16:01:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:02.216 16:01:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:02.216 16:01:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:02.216 16:01:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:02.216 16:01:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:02.216 16:01:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:02.216 16:01:30 nvmf_tcp.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:20:02.216 16:01:30 nvmf_tcp.nvmf_tls -- target/tls.sh@62 -- # nvmftestinit 00:20:02.216 16:01:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:02.216 16:01:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:02.216 16:01:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:02.216 16:01:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:02.216 16:01:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:02.216 16:01:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:02.216 16:01:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:02.216 16:01:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:02.216 16:01:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:20:02.216 16:01:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:20:02.216 16:01:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@285 -- # xtrace_disable 00:20:02.216 16:01:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:07.484 16:01:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:07.484 16:01:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@291 -- # pci_devs=() 00:20:07.484 
16:01:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:07.484 16:01:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:07.484 16:01:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:07.484 16:01:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:07.484 16:01:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:07.484 16:01:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@295 -- # net_devs=() 00:20:07.484 16:01:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:07.484 16:01:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@296 -- # e810=() 00:20:07.484 16:01:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@296 -- # local -ga e810 00:20:07.484 16:01:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@297 -- # x722=() 00:20:07.484 16:01:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@297 -- # local -ga x722 00:20:07.484 16:01:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@298 -- # mlx=() 00:20:07.484 16:01:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@298 -- # local -ga mlx 00:20:07.484 16:01:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:07.484 16:01:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:07.484 16:01:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:07.484 16:01:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:07.484 16:01:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:07.484 16:01:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:07.484 16:01:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:07.484 16:01:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:07.484 16:01:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:07.484 16:01:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:07.484 16:01:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:07.484 16:01:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:07.484 16:01:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:07.484 16:01:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:07.484 16:01:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:07.484 16:01:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:07.484 16:01:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:07.484 16:01:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:07.484 16:01:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:20:07.484 Found 0000:86:00.0 (0x8086 - 0x159b) 00:20:07.484 16:01:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:07.484 16:01:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:07.484 16:01:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:07.484 16:01:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:07.484 16:01:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:07.484 16:01:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@340 
-- # for pci in "${pci_devs[@]}" 00:20:07.484 16:01:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:20:07.484 Found 0000:86:00.1 (0x8086 - 0x159b) 00:20:07.484 16:01:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:07.484 16:01:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:07.484 16:01:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:07.484 16:01:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:07.484 16:01:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:07.484 16:01:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:07.484 16:01:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:07.484 16:01:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:07.484 16:01:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:07.484 16:01:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:07.484 16:01:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:07.484 16:01:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:07.484 16:01:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:07.484 16:01:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:07.484 16:01:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:07.484 16:01:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:20:07.485 Found net devices under 0000:86:00.0: cvl_0_0 00:20:07.485 16:01:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:07.485 16:01:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:07.485 16:01:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:07.485 16:01:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:07.485 16:01:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:07.485 16:01:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:07.485 16:01:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:07.485 16:01:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:07.485 16:01:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:20:07.485 Found net devices under 0000:86:00.1: cvl_0_1 00:20:07.485 16:01:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:07.485 16:01:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:07.485 16:01:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # is_hw=yes 00:20:07.485 16:01:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:20:07.485 16:01:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:20:07.485 16:01:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:20:07.485 16:01:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:07.485 16:01:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:07.485 16:01:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:07.485 16:01:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@234 
-- # (( 2 > 1 )) 00:20:07.485 16:01:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:07.485 16:01:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:07.485 16:01:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:07.485 16:01:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:07.485 16:01:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:07.485 16:01:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:07.485 16:01:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:07.485 16:01:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:07.485 16:01:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:07.485 16:01:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:07.485 16:01:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:07.485 16:01:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:07.485 16:01:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:07.485 16:01:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:07.485 16:01:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:07.485 16:01:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:07.485 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:07.485 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.155 ms 00:20:07.485 00:20:07.485 --- 10.0.0.2 ping statistics --- 00:20:07.485 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:07.485 rtt min/avg/max/mdev = 0.155/0.155/0.155/0.000 ms 00:20:07.485 16:01:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:07.485 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:07.485 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.249 ms 00:20:07.485 00:20:07.485 --- 10.0.0.1 ping statistics --- 00:20:07.485 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:07.485 rtt min/avg/max/mdev = 0.249/0.249/0.249/0.000 ms 00:20:07.485 16:01:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:07.485 16:01:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@422 -- # return 0 00:20:07.485 16:01:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:07.485 16:01:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:07.485 16:01:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:07.485 16:01:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:07.485 16:01:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:07.485 16:01:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:07.485 16:01:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:07.485 16:01:36 nvmf_tcp.nvmf_tls -- target/tls.sh@63 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:20:07.485 16:01:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:07.485 16:01:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:07.485 16:01:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:07.485 16:01:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=3791058 00:20:07.485 16:01:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 3791058 00:20:07.485 16:01:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:20:07.485 16:01:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 3791058 ']' 00:20:07.485 16:01:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:07.485 16:01:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:07.485 16:01:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:07.485 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:07.485 16:01:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:07.485 16:01:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:07.744 [2024-07-15 16:01:36.432618] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 00:20:07.744 [2024-07-15 16:01:36.432659] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:07.744 EAL: No free 2048 kB hugepages reported on node 1 00:20:07.744 [2024-07-15 16:01:36.490947] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:07.744 [2024-07-15 16:01:36.563165] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:07.744 [2024-07-15 16:01:36.563205] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:20:07.744 [2024-07-15 16:01:36.563212] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:07.744 [2024-07-15 16:01:36.563218] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:07.744 [2024-07-15 16:01:36.563222] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:07.744 [2024-07-15 16:01:36.563264] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:08.310 16:01:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:08.310 16:01:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:20:08.310 16:01:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:08.310 16:01:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:08.310 16:01:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:08.568 16:01:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:08.568 16:01:37 nvmf_tcp.nvmf_tls -- target/tls.sh@65 -- # '[' tcp '!=' tcp ']' 00:20:08.568 16:01:37 nvmf_tcp.nvmf_tls -- target/tls.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:20:08.568 true 00:20:08.568 16:01:37 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:20:08.568 16:01:37 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # jq -r .tls_version 00:20:08.826 16:01:37 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # version=0 00:20:08.826 16:01:37 nvmf_tcp.nvmf_tls -- target/tls.sh@74 -- # [[ 0 != \0 ]] 00:20:08.826 16:01:37 nvmf_tcp.nvmf_tls -- target/tls.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:20:09.082 16:01:37 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:20:09.082 16:01:37 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # jq -r .tls_version 00:20:09.082 16:01:37 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # version=13 00:20:09.082 16:01:37 nvmf_tcp.nvmf_tls -- target/tls.sh@82 -- # [[ 13 != \1\3 ]] 00:20:09.082 16:01:37 nvmf_tcp.nvmf_tls -- target/tls.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:20:09.339 16:01:38 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:20:09.339 16:01:38 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # jq -r .tls_version 00:20:09.595 16:01:38 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # version=7 00:20:09.595 16:01:38 nvmf_tcp.nvmf_tls -- target/tls.sh@90 -- # [[ 7 != \7 ]] 00:20:09.595 16:01:38 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:20:09.595 16:01:38 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # jq -r .enable_ktls 00:20:09.595 16:01:38 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # ktls=false 00:20:09.595 16:01:38 nvmf_tcp.nvmf_tls -- target/tls.sh@97 -- # [[ false != \f\a\l\s\e ]] 00:20:09.595 16:01:38 nvmf_tcp.nvmf_tls -- target/tls.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:20:09.852 16:01:38 nvmf_tcp.nvmf_tls -- 
target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:20:09.852 16:01:38 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # jq -r .enable_ktls 00:20:10.109 16:01:38 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # ktls=true 00:20:10.109 16:01:38 nvmf_tcp.nvmf_tls -- target/tls.sh@105 -- # [[ true != \t\r\u\e ]] 00:20:10.109 16:01:38 nvmf_tcp.nvmf_tls -- target/tls.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:20:10.109 16:01:38 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # jq -r .enable_ktls 00:20:10.109 16:01:38 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:20:10.365 16:01:39 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # ktls=false 00:20:10.365 16:01:39 nvmf_tcp.nvmf_tls -- target/tls.sh@113 -- # [[ false != \f\a\l\s\e ]] 00:20:10.365 16:01:39 nvmf_tcp.nvmf_tls -- target/tls.sh@118 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:20:10.365 16:01:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:20:10.365 16:01:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:20:10.365 16:01:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:20:10.365 16:01:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:20:10.365 16:01:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:20:10.365 16:01:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:20:10.365 16:01:39 nvmf_tcp.nvmf_tls -- target/tls.sh@118 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:20:10.365 16:01:39 nvmf_tcp.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:20:10.365 16:01:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:20:10.365 16:01:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:20:10.365 16:01:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:20:10.365 16:01:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=ffeeddccbbaa99887766554433221100 00:20:10.365 16:01:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:20:10.365 16:01:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:20:10.365 16:01:39 nvmf_tcp.nvmf_tls -- target/tls.sh@119 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:20:10.365 16:01:39 nvmf_tcp.nvmf_tls -- target/tls.sh@121 -- # mktemp 00:20:10.365 16:01:39 nvmf_tcp.nvmf_tls -- target/tls.sh@121 -- # key_path=/tmp/tmp.hfmpPCsovm 00:20:10.365 16:01:39 nvmf_tcp.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:20:10.365 16:01:39 nvmf_tcp.nvmf_tls -- target/tls.sh@122 -- # key_2_path=/tmp/tmp.Ab5JtgSzab 00:20:10.365 16:01:39 nvmf_tcp.nvmf_tls -- target/tls.sh@124 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:20:10.365 16:01:39 nvmf_tcp.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:20:10.365 16:01:39 nvmf_tcp.nvmf_tls -- target/tls.sh@127 -- # chmod 0600 /tmp/tmp.hfmpPCsovm 00:20:10.365 16:01:39 nvmf_tcp.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.Ab5JtgSzab 00:20:10.365 16:01:39 nvmf_tcp.nvmf_tls -- target/tls.sh@130 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
sock_impl_set_options -i ssl --tls-version 13 00:20:10.622 16:01:39 nvmf_tcp.nvmf_tls -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:20:10.879 16:01:39 nvmf_tcp.nvmf_tls -- target/tls.sh@133 -- # setup_nvmf_tgt /tmp/tmp.hfmpPCsovm 00:20:10.879 16:01:39 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.hfmpPCsovm 00:20:10.879 16:01:39 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:10.879 [2024-07-15 16:01:39.797663] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:11.136 16:01:39 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:20:11.137 16:01:40 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:20:11.395 [2024-07-15 16:01:40.162626] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:11.395 [2024-07-15 16:01:40.162825] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:11.395 16:01:40 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:20:11.672 malloc0 00:20:11.672 16:01:40 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:20:11.672 16:01:40 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.hfmpPCsovm 00:20:11.978 [2024-07-15 16:01:40.676148] tcp.c:3693:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:20:11.978 16:01:40 nvmf_tcp.nvmf_tls -- target/tls.sh@137 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.hfmpPCsovm 00:20:11.978 EAL: No free 2048 kB hugepages reported on node 1 00:20:21.982 Initializing NVMe Controllers 00:20:21.983 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:21.983 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:20:21.983 Initialization complete. Launching workers. 
00:20:21.983 ======================================================== 00:20:21.983 Latency(us) 00:20:21.983 Device Information : IOPS MiB/s Average min max 00:20:21.983 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 16551.59 64.65 3867.11 804.35 7689.56 00:20:21.983 ======================================================== 00:20:21.983 Total : 16551.59 64.65 3867.11 804.35 7689.56 00:20:21.983 00:20:21.983 16:01:50 nvmf_tcp.nvmf_tls -- target/tls.sh@143 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.hfmpPCsovm 00:20:21.983 16:01:50 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:21.983 16:01:50 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:21.983 16:01:50 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:21.983 16:01:50 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.hfmpPCsovm' 00:20:21.983 16:01:50 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:21.983 16:01:50 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3793590 00:20:21.983 16:01:50 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:21.983 16:01:50 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:21.983 16:01:50 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3793590 /var/tmp/bdevperf.sock 00:20:21.983 16:01:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 3793590 ']' 00:20:21.983 16:01:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:21.983 16:01:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:21.983 16:01:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:21.983 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:21.983 16:01:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:21.983 16:01:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:21.983 [2024-07-15 16:01:50.833954] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 
00:20:21.983 [2024-07-15 16:01:50.834002] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3793590 ] 00:20:21.983 EAL: No free 2048 kB hugepages reported on node 1 00:20:21.983 [2024-07-15 16:01:50.884597] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:22.240 [2024-07-15 16:01:50.963798] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:22.805 16:01:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:22.805 16:01:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:20:22.805 16:01:51 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.hfmpPCsovm 00:20:23.064 [2024-07-15 16:01:51.805560] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:23.064 [2024-07-15 16:01:51.805630] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:20:23.064 TLSTESTn1 00:20:23.064 16:01:51 nvmf_tcp.nvmf_tls -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:20:23.064 Running I/O for 10 seconds... 00:20:35.259 00:20:35.259 Latency(us) 00:20:35.259 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:35.259 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:20:35.259 Verification LBA range: start 0x0 length 0x2000 00:20:35.259 TLSTESTn1 : 10.01 5614.44 21.93 0.00 0.00 22763.00 6040.71 44222.55 00:20:35.259 =================================================================================================================== 00:20:35.259 Total : 5614.44 21.93 0.00 0.00 22763.00 6040.71 44222.55 00:20:35.259 0 00:20:35.259 16:02:02 nvmf_tcp.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:35.259 16:02:02 nvmf_tcp.nvmf_tls -- target/tls.sh@45 -- # killprocess 3793590 00:20:35.259 16:02:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 3793590 ']' 00:20:35.259 16:02:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 3793590 00:20:35.259 16:02:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:20:35.259 16:02:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:35.259 16:02:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3793590 00:20:35.259 16:02:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:20:35.259 16:02:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:20:35.259 16:02:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3793590' 00:20:35.259 killing process with pid 3793590 00:20:35.259 16:02:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 3793590 00:20:35.259 Received shutdown signal, test time was about 10.000000 seconds 00:20:35.259 00:20:35.259 Latency(us) 00:20:35.259 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s 
Average min max 00:20:35.259 =================================================================================================================== 00:20:35.259 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:35.259 [2024-07-15 16:02:02.079787] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:20:35.259 16:02:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 3793590 00:20:35.259 16:02:02 nvmf_tcp.nvmf_tls -- target/tls.sh@146 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.Ab5JtgSzab 00:20:35.259 16:02:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:20:35.259 16:02:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.Ab5JtgSzab 00:20:35.259 16:02:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:20:35.259 16:02:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:35.259 16:02:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:20:35.260 16:02:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:35.260 16:02:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.Ab5JtgSzab 00:20:35.260 16:02:02 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:35.260 16:02:02 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:35.260 16:02:02 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:35.260 16:02:02 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.Ab5JtgSzab' 00:20:35.260 16:02:02 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:35.260 16:02:02 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3795460 00:20:35.260 16:02:02 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:35.260 16:02:02 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:35.260 16:02:02 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3795460 /var/tmp/bdevperf.sock 00:20:35.260 16:02:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 3795460 ']' 00:20:35.260 16:02:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:35.260 16:02:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:35.260 16:02:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:35.260 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:35.260 16:02:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:35.260 16:02:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:35.260 [2024-07-15 16:02:02.308129] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 
00:20:35.260 [2024-07-15 16:02:02.308179] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3795460 ] 00:20:35.260 EAL: No free 2048 kB hugepages reported on node 1 00:20:35.260 [2024-07-15 16:02:02.358593] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:35.260 [2024-07-15 16:02:02.437243] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:35.260 16:02:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:35.260 16:02:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:20:35.260 16:02:03 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.Ab5JtgSzab 00:20:35.260 [2024-07-15 16:02:03.280265] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:35.260 [2024-07-15 16:02:03.280333] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:20:35.260 [2024-07-15 16:02:03.289893] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:20:35.260 [2024-07-15 16:02:03.290548] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14c4570 (107): Transport endpoint is not connected 00:20:35.260 [2024-07-15 16:02:03.291541] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14c4570 (9): Bad file descriptor 00:20:35.260 [2024-07-15 16:02:03.292542] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:35.260 [2024-07-15 16:02:03.292556] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:20:35.260 [2024-07-15 16:02:03.292565] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
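The mismatched key here (tmp.Ab5JtgSzab against a target configured with tmp.hfmpPCsovm) breaks the TLS handshake, so the controller never leaves the error state and bdev_nvme_attach_controller surfaces the failure as the JSON-RPC request/response pair dumped just below. For reference, a bare-bones sketch of issuing the same call by hand: rpc.py speaks JSON-RPC 2.0 over a Unix stream socket, and this hypothetical rpc_call helper skips rpc.py's real framing, timeout, and retry logic.

import json
import socket

def rpc_call(sock_path: str, method: str, params: dict) -> dict:
    # Hypothetical minimal client: one request, one JSON reply, no timeouts.
    req = {"jsonrpc": "2.0", "id": 1, "method": method, "params": params}
    with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
        s.connect(sock_path)
        s.sendall(json.dumps(req).encode())
        buf = b""
        while True:
            chunk = s.recv(4096)
            if not chunk:
                break
            buf += chunk
            try:
                return json.loads(buf)  # reply may arrive in several chunks
            except ValueError:
                continue
    raise ConnectionError("no complete JSON-RPC reply from " + sock_path)

# rpc_call("/var/tmp/bdevperf.sock", "bdev_nvme_attach_controller", {...})
# comes back with {"code": -5, "message": "Input/output error"} in the
# wrong-key case dumped below.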
00:20:35.260 request: 00:20:35.260 { 00:20:35.260 "name": "TLSTEST", 00:20:35.260 "trtype": "tcp", 00:20:35.260 "traddr": "10.0.0.2", 00:20:35.260 "adrfam": "ipv4", 00:20:35.260 "trsvcid": "4420", 00:20:35.260 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:35.260 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:35.260 "prchk_reftag": false, 00:20:35.260 "prchk_guard": false, 00:20:35.260 "hdgst": false, 00:20:35.260 "ddgst": false, 00:20:35.260 "psk": "/tmp/tmp.Ab5JtgSzab", 00:20:35.260 "method": "bdev_nvme_attach_controller", 00:20:35.260 "req_id": 1 00:20:35.260 } 00:20:35.260 Got JSON-RPC error response 00:20:35.260 response: 00:20:35.260 { 00:20:35.260 "code": -5, 00:20:35.260 "message": "Input/output error" 00:20:35.260 } 00:20:35.260 16:02:03 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 3795460 00:20:35.260 16:02:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 3795460 ']' 00:20:35.260 16:02:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 3795460 00:20:35.260 16:02:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:20:35.260 16:02:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:35.260 16:02:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3795460 00:20:35.260 16:02:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:20:35.260 16:02:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:20:35.260 16:02:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3795460' 00:20:35.260 killing process with pid 3795460 00:20:35.260 16:02:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 3795460 00:20:35.260 Received shutdown signal, test time was about 10.000000 seconds 00:20:35.260 00:20:35.260 Latency(us) 00:20:35.260 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:35.260 =================================================================================================================== 00:20:35.260 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:35.260 [2024-07-15 16:02:03.355013] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:20:35.260 16:02:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 3795460 00:20:35.260 16:02:03 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:20:35.260 16:02:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:20:35.260 16:02:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:20:35.260 16:02:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:20:35.260 16:02:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:20:35.260 16:02:03 nvmf_tcp.nvmf_tls -- target/tls.sh@149 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.hfmpPCsovm 00:20:35.260 16:02:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:20:35.260 16:02:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.hfmpPCsovm 00:20:35.260 16:02:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:20:35.260 16:02:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:35.260 16:02:03 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@640 -- # type -t run_bdevperf 00:20:35.260 16:02:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:35.260 16:02:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.hfmpPCsovm 00:20:35.260 16:02:03 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:35.260 16:02:03 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:35.260 16:02:03 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:20:35.260 16:02:03 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.hfmpPCsovm' 00:20:35.260 16:02:03 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:35.260 16:02:03 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3795698 00:20:35.260 16:02:03 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:35.260 16:02:03 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:35.260 16:02:03 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3795698 /var/tmp/bdevperf.sock 00:20:35.260 16:02:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 3795698 ']' 00:20:35.260 16:02:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:35.260 16:02:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:35.260 16:02:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:35.260 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:35.260 16:02:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:35.260 16:02:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:35.260 [2024-07-15 16:02:03.578147] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 
00:20:35.260 [2024-07-15 16:02:03.578194] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3795698 ] 00:20:35.260 EAL: No free 2048 kB hugepages reported on node 1 00:20:35.260 [2024-07-15 16:02:03.627694] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:35.260 [2024-07-15 16:02:03.705857] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:35.518 16:02:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:35.518 16:02:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:20:35.518 16:02:04 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk /tmp/tmp.hfmpPCsovm 00:20:35.775 [2024-07-15 16:02:04.543452] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:35.775 [2024-07-15 16:02:04.543519] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:20:35.775 [2024-07-15 16:02:04.553041] tcp.c: 881:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:20:35.775 [2024-07-15 16:02:04.553066] posix.c: 589:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:20:35.775 [2024-07-15 16:02:04.553090] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:20:35.775 [2024-07-15 16:02:04.553699] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1849570 (107): Transport endpoint is not connected 00:20:35.775 [2024-07-15 16:02:04.554693] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1849570 (9): Bad file descriptor 00:20:35.775 [2024-07-15 16:02:04.555694] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:35.775 [2024-07-15 16:02:04.555705] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:20:35.775 [2024-07-15 16:02:04.555714] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
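In this case the key itself is valid, but the connecting host NQN (host2) was never added to the subsystem, so the target's PSK lookup fails. The lookup key is the TLS PSK identity string visible in the error; a sketch of how it is assembled is below. Only the concatenation "NVMe0R01 <hostnqn> <subnqn>" is confirmed by the log; reading the leading fields as fixed protocol/version tags is an assumption.

def tls_psk_identity(hostnqn: str, subnqn: str) -> str:
    # "NVMe0R01 <hostnqn> <subnqn>" exactly as printed in the lookup errors
    # above; the leading fields are taken to be fixed protocol/version tags.
    return "NVMe0R01 {} {}".format(hostnqn, subnqn)

# tls_psk_identity("nqn.2016-06.io.spdk:host2", "nqn.2016-06.io.spdk:cnode1")
# reproduces the identity the target reports it cannot find.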
00:20:35.775 request: 00:20:35.775 { 00:20:35.775 "name": "TLSTEST", 00:20:35.775 "trtype": "tcp", 00:20:35.775 "traddr": "10.0.0.2", 00:20:35.775 "adrfam": "ipv4", 00:20:35.775 "trsvcid": "4420", 00:20:35.775 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:35.775 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:20:35.775 "prchk_reftag": false, 00:20:35.775 "prchk_guard": false, 00:20:35.775 "hdgst": false, 00:20:35.775 "ddgst": false, 00:20:35.775 "psk": "/tmp/tmp.hfmpPCsovm", 00:20:35.775 "method": "bdev_nvme_attach_controller", 00:20:35.775 "req_id": 1 00:20:35.775 } 00:20:35.775 Got JSON-RPC error response 00:20:35.775 response: 00:20:35.775 { 00:20:35.775 "code": -5, 00:20:35.775 "message": "Input/output error" 00:20:35.775 } 00:20:35.775 16:02:04 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 3795698 00:20:35.775 16:02:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 3795698 ']' 00:20:35.775 16:02:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 3795698 00:20:35.775 16:02:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:20:35.775 16:02:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:35.775 16:02:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3795698 00:20:35.775 16:02:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:20:35.775 16:02:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:20:35.775 16:02:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3795698' 00:20:35.775 killing process with pid 3795698 00:20:35.775 16:02:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 3795698 00:20:35.775 Received shutdown signal, test time was about 10.000000 seconds 00:20:35.775 00:20:35.775 Latency(us) 00:20:35.775 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:35.775 =================================================================================================================== 00:20:35.775 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:35.775 [2024-07-15 16:02:04.629372] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:20:35.775 16:02:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 3795698 00:20:36.075 16:02:04 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:20:36.075 16:02:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:20:36.075 16:02:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:20:36.075 16:02:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:20:36.075 16:02:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:20:36.075 16:02:04 nvmf_tcp.nvmf_tls -- target/tls.sh@152 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.hfmpPCsovm 00:20:36.075 16:02:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:20:36.075 16:02:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.hfmpPCsovm 00:20:36.075 16:02:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:20:36.075 16:02:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:36.075 16:02:04 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@640 -- # type -t run_bdevperf 00:20:36.075 16:02:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:36.075 16:02:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.hfmpPCsovm 00:20:36.075 16:02:04 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:36.075 16:02:04 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:20:36.075 16:02:04 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:36.075 16:02:04 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.hfmpPCsovm' 00:20:36.075 16:02:04 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:36.075 16:02:04 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3795885 00:20:36.075 16:02:04 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:36.075 16:02:04 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:36.075 16:02:04 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3795885 /var/tmp/bdevperf.sock 00:20:36.075 16:02:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 3795885 ']' 00:20:36.075 16:02:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:36.075 16:02:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:36.075 16:02:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:36.075 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:36.075 16:02:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:36.075 16:02:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:36.075 [2024-07-15 16:02:04.853143] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 
00:20:36.075 [2024-07-15 16:02:04.853189] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3795885 ] 00:20:36.075 EAL: No free 2048 kB hugepages reported on node 1 00:20:36.075 [2024-07-15 16:02:04.903736] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:36.075 [2024-07-15 16:02:04.981992] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:37.005 16:02:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:37.005 16:02:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:20:37.005 16:02:05 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.hfmpPCsovm 00:20:37.005 [2024-07-15 16:02:05.805406] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:37.005 [2024-07-15 16:02:05.805471] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:20:37.005 [2024-07-15 16:02:05.816240] tcp.c: 881:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:20:37.005 [2024-07-15 16:02:05.816263] posix.c: 589:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:20:37.005 [2024-07-15 16:02:05.816288] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:20:37.005 [2024-07-15 16:02:05.816768] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2498570 (107): Transport endpoint is not connected 00:20:37.005 [2024-07-15 16:02:05.817761] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2498570 (9): Bad file descriptor 00:20:37.005 [2024-07-15 16:02:05.818763] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:20:37.005 [2024-07-15 16:02:05.818771] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:20:37.005 [2024-07-15 16:02:05.818780] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
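Same failure mode from the other direction: the right host and the right key, but pointed at a subsystem (cnode2) that has no PSK registered for the pairing, so the request dump below ends in the same -5 error. Each of these negative cases runs under the shell NOT wrapper, which inverts the exit status so that a successful attach would fail the test. A rough Python equivalent of that pattern, for illustration only; expect_failure is a hypothetical name, and the log's actual wrapper is autotest_common.sh's NOT/es bookkeeping.

import subprocess

def expect_failure(cmd: list) -> None:
    # Run the command and demand a nonzero exit status, mirroring the
    # "NOT run_bdevperf ... es=1 ... return 1" pattern in the trace.
    result = subprocess.run(cmd)
    if result.returncode == 0:
        raise AssertionError("unexpectedly succeeded: {}".format(" ".join(cmd)))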
00:20:37.005 request: 00:20:37.005 { 00:20:37.005 "name": "TLSTEST", 00:20:37.005 "trtype": "tcp", 00:20:37.005 "traddr": "10.0.0.2", 00:20:37.005 "adrfam": "ipv4", 00:20:37.005 "trsvcid": "4420", 00:20:37.005 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:20:37.005 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:37.005 "prchk_reftag": false, 00:20:37.005 "prchk_guard": false, 00:20:37.005 "hdgst": false, 00:20:37.005 "ddgst": false, 00:20:37.005 "psk": "/tmp/tmp.hfmpPCsovm", 00:20:37.005 "method": "bdev_nvme_attach_controller", 00:20:37.005 "req_id": 1 00:20:37.005 } 00:20:37.005 Got JSON-RPC error response 00:20:37.005 response: 00:20:37.005 { 00:20:37.005 "code": -5, 00:20:37.005 "message": "Input/output error" 00:20:37.005 } 00:20:37.005 16:02:05 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 3795885 00:20:37.005 16:02:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 3795885 ']' 00:20:37.005 16:02:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 3795885 00:20:37.005 16:02:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:20:37.005 16:02:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:37.005 16:02:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3795885 00:20:37.005 16:02:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:20:37.005 16:02:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:20:37.005 16:02:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3795885' 00:20:37.005 killing process with pid 3795885 00:20:37.005 16:02:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 3795885 00:20:37.005 Received shutdown signal, test time was about 10.000000 seconds 00:20:37.005 00:20:37.005 Latency(us) 00:20:37.005 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:37.005 =================================================================================================================== 00:20:37.005 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:37.005 [2024-07-15 16:02:05.879504] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:20:37.005 16:02:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 3795885 00:20:37.262 16:02:06 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:20:37.262 16:02:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:20:37.262 16:02:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:20:37.262 16:02:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:20:37.262 16:02:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:20:37.262 16:02:06 nvmf_tcp.nvmf_tls -- target/tls.sh@155 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:20:37.262 16:02:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:20:37.262 16:02:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:20:37.262 16:02:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:20:37.262 16:02:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:37.262 16:02:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type 
-t run_bdevperf 00:20:37.262 16:02:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:37.262 16:02:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:20:37.262 16:02:06 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:37.262 16:02:06 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:37.262 16:02:06 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:37.262 16:02:06 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk= 00:20:37.262 16:02:06 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:37.262 16:02:06 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3796037 00:20:37.262 16:02:06 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:37.262 16:02:06 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:37.262 16:02:06 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3796037 /var/tmp/bdevperf.sock 00:20:37.262 16:02:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 3796037 ']' 00:20:37.262 16:02:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:37.262 16:02:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:37.262 16:02:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:37.262 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:37.262 16:02:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:37.262 16:02:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:37.262 [2024-07-15 16:02:06.102816] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 
00:20:37.262 [2024-07-15 16:02:06.102865] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3796037 ] 00:20:37.262 EAL: No free 2048 kB hugepages reported on node 1 00:20:37.262 [2024-07-15 16:02:06.153445] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:37.520 [2024-07-15 16:02:06.232587] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:38.084 16:02:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:38.084 16:02:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:20:38.084 16:02:06 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:20:38.342 [2024-07-15 16:02:07.079468] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:20:38.342 [2024-07-15 16:02:07.081344] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fd6af0 (9): Bad file descriptor 00:20:38.342 [2024-07-15 16:02:07.082343] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:38.342 [2024-07-15 16:02:07.082353] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:20:38.342 [2024-07-15 16:02:07.082361] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:20:38.342 request: 00:20:38.342 { 00:20:38.342 "name": "TLSTEST", 00:20:38.342 "trtype": "tcp", 00:20:38.342 "traddr": "10.0.0.2", 00:20:38.342 "adrfam": "ipv4", 00:20:38.342 "trsvcid": "4420", 00:20:38.342 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:38.342 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:38.342 "prchk_reftag": false, 00:20:38.342 "prchk_guard": false, 00:20:38.342 "hdgst": false, 00:20:38.342 "ddgst": false, 00:20:38.342 "method": "bdev_nvme_attach_controller", 00:20:38.342 "req_id": 1 00:20:38.342 } 00:20:38.342 Got JSON-RPC error response 00:20:38.342 response: 00:20:38.342 { 00:20:38.342 "code": -5, 00:20:38.342 "message": "Input/output error" 00:20:38.342 } 00:20:38.342 16:02:07 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 3796037 00:20:38.342 16:02:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 3796037 ']' 00:20:38.342 16:02:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 3796037 00:20:38.342 16:02:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:20:38.342 16:02:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:38.342 16:02:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3796037 00:20:38.342 16:02:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:20:38.342 16:02:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:20:38.342 16:02:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3796037' 00:20:38.342 killing process with pid 3796037 00:20:38.342 16:02:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 3796037 00:20:38.342 Received shutdown signal, test time was about 10.000000 seconds 00:20:38.342 00:20:38.342 Latency(us) 00:20:38.342 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:38.342 =================================================================================================================== 00:20:38.342 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:38.342 16:02:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 3796037 00:20:38.600 16:02:07 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:20:38.600 16:02:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:20:38.600 16:02:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:20:38.600 16:02:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:20:38.600 16:02:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:20:38.600 16:02:07 nvmf_tcp.nvmf_tls -- target/tls.sh@158 -- # killprocess 3791058 00:20:38.600 16:02:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 3791058 ']' 00:20:38.600 16:02:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 3791058 00:20:38.600 16:02:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:20:38.600 16:02:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:38.600 16:02:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3791058 00:20:38.600 16:02:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:20:38.600 16:02:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:20:38.600 16:02:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3791058' 00:20:38.600 
killing process with pid 3791058 00:20:38.600 16:02:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 3791058 00:20:38.600 [2024-07-15 16:02:07.366938] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:20:38.600 16:02:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 3791058 00:20:38.858 16:02:07 nvmf_tcp.nvmf_tls -- target/tls.sh@159 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:20:38.858 16:02:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:20:38.858 16:02:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:20:38.858 16:02:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:20:38.858 16:02:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:20:38.858 16:02:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=2 00:20:38.858 16:02:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:20:38.858 16:02:07 nvmf_tcp.nvmf_tls -- target/tls.sh@159 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:20:38.858 16:02:07 nvmf_tcp.nvmf_tls -- target/tls.sh@160 -- # mktemp 00:20:38.858 16:02:07 nvmf_tcp.nvmf_tls -- target/tls.sh@160 -- # key_long_path=/tmp/tmp.ksN3WaWpt5 00:20:38.858 16:02:07 nvmf_tcp.nvmf_tls -- target/tls.sh@161 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:20:38.858 16:02:07 nvmf_tcp.nvmf_tls -- target/tls.sh@162 -- # chmod 0600 /tmp/tmp.ksN3WaWpt5 00:20:38.858 16:02:07 nvmf_tcp.nvmf_tls -- target/tls.sh@163 -- # nvmfappstart -m 0x2 00:20:38.858 16:02:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:38.858 16:02:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:38.858 16:02:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:38.858 16:02:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=3796374 00:20:38.858 16:02:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 3796374 00:20:38.858 16:02:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:20:38.858 16:02:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 3796374 ']' 00:20:38.858 16:02:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:38.858 16:02:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:38.858 16:02:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:38.858 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:38.858 16:02:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:38.858 16:02:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:38.858 [2024-07-15 16:02:07.664094] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 
00:20:38.858 [2024-07-15 16:02:07.664141] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:38.858 EAL: No free 2048 kB hugepages reported on node 1 00:20:38.858 [2024-07-15 16:02:07.720035] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:39.116 [2024-07-15 16:02:07.798526] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:39.116 [2024-07-15 16:02:07.798558] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:39.116 [2024-07-15 16:02:07.798565] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:39.116 [2024-07-15 16:02:07.798571] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:39.116 [2024-07-15 16:02:07.798577] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:39.116 [2024-07-15 16:02:07.798593] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:39.680 16:02:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:39.680 16:02:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:20:39.680 16:02:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:39.680 16:02:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:39.680 16:02:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:39.680 16:02:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:39.680 16:02:08 nvmf_tcp.nvmf_tls -- target/tls.sh@165 -- # setup_nvmf_tgt /tmp/tmp.ksN3WaWpt5 00:20:39.681 16:02:08 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.ksN3WaWpt5 00:20:39.681 16:02:08 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:39.938 [2024-07-15 16:02:08.653792] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:39.938 16:02:08 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:20:39.938 16:02:08 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:20:40.196 [2024-07-15 16:02:08.994681] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:40.196 [2024-07-15 16:02:08.994863] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:40.196 16:02:09 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:20:40.454 malloc0 00:20:40.454 16:02:09 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:20:40.454 16:02:09 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 
--psk /tmp/tmp.ksN3WaWpt5 00:20:40.712 [2024-07-15 16:02:09.508029] tcp.c:3693:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:20:40.712 16:02:09 nvmf_tcp.nvmf_tls -- target/tls.sh@167 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.ksN3WaWpt5 00:20:40.712 16:02:09 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:40.712 16:02:09 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:40.712 16:02:09 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:40.712 16:02:09 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.ksN3WaWpt5' 00:20:40.712 16:02:09 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:40.712 16:02:09 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:40.712 16:02:09 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3796686 00:20:40.712 16:02:09 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:40.712 16:02:09 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3796686 /var/tmp/bdevperf.sock 00:20:40.712 16:02:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 3796686 ']' 00:20:40.712 16:02:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:40.712 16:02:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:40.712 16:02:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:40.712 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:40.712 16:02:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:40.712 16:02:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:40.712 [2024-07-15 16:02:09.559624] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 
00:20:40.712 [2024-07-15 16:02:09.559668] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3796686 ] 00:20:40.712 EAL: No free 2048 kB hugepages reported on node 1 00:20:40.712 [2024-07-15 16:02:09.610137] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:40.969 [2024-07-15 16:02:09.683357] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:40.969 16:02:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:40.969 16:02:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:20:40.969 16:02:09 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.ksN3WaWpt5 00:20:41.227 [2024-07-15 16:02:09.920626] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:41.227 [2024-07-15 16:02:09.920704] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:20:41.227 TLSTESTn1 00:20:41.227 16:02:10 nvmf_tcp.nvmf_tls -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:20:41.227 Running I/O for 10 seconds... 00:20:51.256 00:20:51.256 Latency(us) 00:20:51.256 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:51.256 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:20:51.256 Verification LBA range: start 0x0 length 0x2000 00:20:51.256 TLSTESTn1 : 10.02 5340.55 20.86 0.00 0.00 23930.41 5470.83 53568.56 00:20:51.256 =================================================================================================================== 00:20:51.256 Total : 5340.55 20.86 0.00 0.00 23930.41 5470.83 53568.56 00:20:51.256 0 00:20:51.256 16:02:20 nvmf_tcp.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:51.256 16:02:20 nvmf_tcp.nvmf_tls -- target/tls.sh@45 -- # killprocess 3796686 00:20:51.256 16:02:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 3796686 ']' 00:20:51.256 16:02:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 3796686 00:20:51.256 16:02:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:20:51.256 16:02:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:51.256 16:02:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3796686 00:20:51.514 16:02:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:20:51.514 16:02:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:20:51.514 16:02:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3796686' 00:20:51.514 killing process with pid 3796686 00:20:51.514 16:02:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 3796686 00:20:51.514 Received shutdown signal, test time was about 10.000000 seconds 00:20:51.514 00:20:51.514 Latency(us) 00:20:51.514 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s 
Average min max 00:20:51.514 =================================================================================================================== 00:20:51.514 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:51.514 [2024-07-15 16:02:20.195859] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:20:51.514 16:02:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 3796686 00:20:51.514 16:02:20 nvmf_tcp.nvmf_tls -- target/tls.sh@170 -- # chmod 0666 /tmp/tmp.ksN3WaWpt5 00:20:51.514 16:02:20 nvmf_tcp.nvmf_tls -- target/tls.sh@171 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.ksN3WaWpt5 00:20:51.514 16:02:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:20:51.514 16:02:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.ksN3WaWpt5 00:20:51.514 16:02:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:20:51.514 16:02:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:51.514 16:02:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:20:51.514 16:02:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:51.514 16:02:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.ksN3WaWpt5 00:20:51.514 16:02:20 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:51.514 16:02:20 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:51.514 16:02:20 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:51.514 16:02:20 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.ksN3WaWpt5' 00:20:51.514 16:02:20 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:51.514 16:02:20 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3798514 00:20:51.514 16:02:20 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:51.514 16:02:20 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:51.514 16:02:20 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3798514 /var/tmp/bdevperf.sock 00:20:51.514 16:02:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 3798514 ']' 00:20:51.514 16:02:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:51.514 16:02:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:51.514 16:02:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:51.514 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:51.514 16:02:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:51.514 16:02:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:51.514 [2024-07-15 16:02:20.429065] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 
00:20:51.514 [2024-07-15 16:02:20.429114] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3798514 ] 00:20:51.772 EAL: No free 2048 kB hugepages reported on node 1 00:20:51.772 [2024-07-15 16:02:20.479326] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:51.772 [2024-07-15 16:02:20.547726] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:52.338 16:02:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:52.338 16:02:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:20:52.338 16:02:21 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.ksN3WaWpt5 00:20:52.597 [2024-07-15 16:02:21.389701] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:52.597 [2024-07-15 16:02:21.389749] bdev_nvme.c:6125:bdev_nvme_load_psk: *ERROR*: Incorrect permissions for PSK file 00:20:52.597 [2024-07-15 16:02:21.389756] bdev_nvme.c:6230:bdev_nvme_create: *ERROR*: Could not load PSK from /tmp/tmp.ksN3WaWpt5 00:20:52.597 request: 00:20:52.597 { 00:20:52.597 "name": "TLSTEST", 00:20:52.597 "trtype": "tcp", 00:20:52.597 "traddr": "10.0.0.2", 00:20:52.597 "adrfam": "ipv4", 00:20:52.597 "trsvcid": "4420", 00:20:52.597 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:52.597 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:52.597 "prchk_reftag": false, 00:20:52.597 "prchk_guard": false, 00:20:52.597 "hdgst": false, 00:20:52.597 "ddgst": false, 00:20:52.597 "psk": "/tmp/tmp.ksN3WaWpt5", 00:20:52.597 "method": "bdev_nvme_attach_controller", 00:20:52.597 "req_id": 1 00:20:52.597 } 00:20:52.597 Got JSON-RPC error response 00:20:52.597 response: 00:20:52.597 { 00:20:52.597 "code": -1, 00:20:52.597 "message": "Operation not permitted" 00:20:52.597 } 00:20:52.597 16:02:21 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 3798514 00:20:52.597 16:02:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 3798514 ']' 00:20:52.597 16:02:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 3798514 00:20:52.597 16:02:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:20:52.597 16:02:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:52.597 16:02:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3798514 00:20:52.597 16:02:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:20:52.597 16:02:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:20:52.597 16:02:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3798514' 00:20:52.597 killing process with pid 3798514 00:20:52.597 16:02:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 3798514 00:20:52.597 Received shutdown signal, test time was about 10.000000 seconds 00:20:52.597 00:20:52.597 Latency(us) 00:20:52.597 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:52.597 
=================================================================================================================== 00:20:52.597 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:52.597 16:02:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 3798514 00:20:52.856 16:02:21 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:20:52.856 16:02:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:20:52.856 16:02:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:20:52.856 16:02:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:20:52.856 16:02:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:20:52.856 16:02:21 nvmf_tcp.nvmf_tls -- target/tls.sh@174 -- # killprocess 3796374 00:20:52.856 16:02:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 3796374 ']' 00:20:52.856 16:02:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 3796374 00:20:52.856 16:02:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:20:52.856 16:02:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:52.856 16:02:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3796374 00:20:52.856 16:02:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:20:52.856 16:02:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:20:52.856 16:02:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3796374' 00:20:52.856 killing process with pid 3796374 00:20:52.856 16:02:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 3796374 00:20:52.856 [2024-07-15 16:02:21.674419] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:20:52.856 16:02:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 3796374 00:20:53.114 16:02:21 nvmf_tcp.nvmf_tls -- target/tls.sh@175 -- # nvmfappstart -m 0x2 00:20:53.115 16:02:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:53.115 16:02:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:53.115 16:02:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:53.115 16:02:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=3798759 00:20:53.115 16:02:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 3798759 00:20:53.115 16:02:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:20:53.115 16:02:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 3798759 ']' 00:20:53.115 16:02:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:53.115 16:02:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:53.115 16:02:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:53.115 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
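The failure above is the intent of this step (target/tls.sh@171): once the PSK file is made world-readable with chmod 0666, bdev_nvme refuses to load it ("Incorrect permissions for PSK file") and bdev_nvme_attach_controller comes back with "Operation not permitted", so the NOT wrapper expects the non-zero exit. A minimal sketch of the same check, reusing the key path, NQNs and RPC socket from this run (rpc.py shown relative to the SPDK tree; the test itself drives this through its run_bdevperf helper):

    # world-readable PSK: the attach is expected to fail
    chmod 0666 /tmp/tmp.ksN3WaWpt5
    scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
        -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
        --psk /tmp/tmp.ksN3WaWpt5
    # owner read/write (0600) is the mode the later, passing steps use
    chmod 0600 /tmp/tmp.ksN3WaWpt5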
00:20:53.115 16:02:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:53.115 16:02:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:53.115 [2024-07-15 16:02:21.919677] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 00:20:53.115 [2024-07-15 16:02:21.919724] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:53.115 EAL: No free 2048 kB hugepages reported on node 1 00:20:53.115 [2024-07-15 16:02:21.974864] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:53.115 [2024-07-15 16:02:22.042065] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:53.115 [2024-07-15 16:02:22.042107] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:53.115 [2024-07-15 16:02:22.042113] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:53.115 [2024-07-15 16:02:22.042119] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:53.115 [2024-07-15 16:02:22.042124] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:53.115 [2024-07-15 16:02:22.042159] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:54.050 16:02:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:54.050 16:02:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:20:54.050 16:02:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:54.050 16:02:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:54.050 16:02:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:54.050 16:02:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:54.050 16:02:22 nvmf_tcp.nvmf_tls -- target/tls.sh@177 -- # NOT setup_nvmf_tgt /tmp/tmp.ksN3WaWpt5 00:20:54.050 16:02:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:20:54.050 16:02:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.ksN3WaWpt5 00:20:54.050 16:02:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=setup_nvmf_tgt 00:20:54.050 16:02:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:54.050 16:02:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t setup_nvmf_tgt 00:20:54.050 16:02:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:54.050 16:02:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # setup_nvmf_tgt /tmp/tmp.ksN3WaWpt5 00:20:54.050 16:02:22 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.ksN3WaWpt5 00:20:54.050 16:02:22 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:54.050 [2024-07-15 16:02:22.908618] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:54.050 16:02:22 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:20:54.308 
16:02:23 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:20:54.566 [2024-07-15 16:02:23.257523] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:54.566 [2024-07-15 16:02:23.257721] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:54.566 16:02:23 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:20:54.566 malloc0 00:20:54.566 16:02:23 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:20:54.823 16:02:23 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.ksN3WaWpt5 00:20:54.823 [2024-07-15 16:02:23.751111] tcp.c:3603:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:20:54.823 [2024-07-15 16:02:23.751137] tcp.c:3689:nvmf_tcp_subsystem_add_host: *ERROR*: Could not retrieve PSK from file 00:20:54.823 [2024-07-15 16:02:23.751176] subsystem.c:1052:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:20:54.823 request: 00:20:54.823 { 00:20:54.823 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:54.823 "host": "nqn.2016-06.io.spdk:host1", 00:20:54.823 "psk": "/tmp/tmp.ksN3WaWpt5", 00:20:54.823 "method": "nvmf_subsystem_add_host", 00:20:54.823 "req_id": 1 00:20:54.823 } 00:20:54.823 Got JSON-RPC error response 00:20:54.823 response: 00:20:54.823 { 00:20:54.823 "code": -32603, 00:20:54.823 "message": "Internal error" 00:20:54.823 } 00:20:55.080 16:02:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:20:55.080 16:02:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:20:55.080 16:02:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:20:55.080 16:02:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:20:55.080 16:02:23 nvmf_tcp.nvmf_tls -- target/tls.sh@180 -- # killprocess 3798759 00:20:55.080 16:02:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 3798759 ']' 00:20:55.080 16:02:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 3798759 00:20:55.080 16:02:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:20:55.080 16:02:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:55.080 16:02:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3798759 00:20:55.080 16:02:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:20:55.080 16:02:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:20:55.080 16:02:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3798759' 00:20:55.080 killing process with pid 3798759 00:20:55.080 16:02:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 3798759 00:20:55.080 16:02:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 3798759 00:20:55.080 16:02:24 nvmf_tcp.nvmf_tls -- target/tls.sh@181 -- # chmod 0600 /tmp/tmp.ksN3WaWpt5 00:20:55.080 16:02:24 nvmf_tcp.nvmf_tls -- target/tls.sh@184 -- # nvmfappstart -m 0x2 00:20:55.080 
16:02:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:55.080 16:02:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:55.080 16:02:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:55.080 16:02:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=3799035 00:20:55.080 16:02:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:20:55.337 16:02:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 3799035 00:20:55.337 16:02:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 3799035 ']' 00:20:55.337 16:02:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:55.338 16:02:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:55.338 16:02:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:55.338 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:55.338 16:02:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:55.338 16:02:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:55.338 [2024-07-15 16:02:24.062399] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 00:20:55.338 [2024-07-15 16:02:24.062445] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:55.338 EAL: No free 2048 kB hugepages reported on node 1 00:20:55.338 [2024-07-15 16:02:24.118099] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:55.338 [2024-07-15 16:02:24.196293] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:55.338 [2024-07-15 16:02:24.196328] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:55.338 [2024-07-15 16:02:24.196336] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:55.338 [2024-07-15 16:02:24.196342] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:55.338 [2024-07-15 16:02:24.196347] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
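The same permission rule bit on the target side just above: with the key still at 0666, nvmf_subsystem_add_host could not retrieve the PSK ("Could not retrieve PSK from file") and the RPC failed with -32603 "Internal error"; target/tls.sh@181 then chmods the key to 0600 and restarts the target, after which the identical bring-up succeeds. The setup_nvmf_tgt sequence, exactly as issued in this log (full workspace paths shortened to scripts/rpc.py):

    scripts/rpc.py nvmf_create_transport -t tcp -o
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
    # -k marks the listener as a TLS ("secure channel") listener, still experimental here
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
    scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    # the PSK file is read (and its mode checked) at this point
    scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.ksN3WaWpt5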
00:20:55.338 [2024-07-15 16:02:24.196367] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:56.268 16:02:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:56.268 16:02:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:20:56.268 16:02:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:56.268 16:02:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:56.268 16:02:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:56.268 16:02:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:56.268 16:02:24 nvmf_tcp.nvmf_tls -- target/tls.sh@185 -- # setup_nvmf_tgt /tmp/tmp.ksN3WaWpt5 00:20:56.268 16:02:24 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.ksN3WaWpt5 00:20:56.268 16:02:24 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:56.268 [2024-07-15 16:02:25.047173] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:56.268 16:02:25 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:20:56.526 16:02:25 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:20:56.526 [2024-07-15 16:02:25.380034] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:56.526 [2024-07-15 16:02:25.380237] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:56.526 16:02:25 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:20:56.784 malloc0 00:20:56.784 16:02:25 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:20:57.042 16:02:25 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.ksN3WaWpt5 00:20:57.042 [2024-07-15 16:02:25.893575] tcp.c:3693:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:20:57.042 16:02:25 nvmf_tcp.nvmf_tls -- target/tls.sh@188 -- # bdevperf_pid=3799507 00:20:57.042 16:02:25 nvmf_tcp.nvmf_tls -- target/tls.sh@187 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:57.042 16:02:25 nvmf_tcp.nvmf_tls -- target/tls.sh@190 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:57.042 16:02:25 nvmf_tcp.nvmf_tls -- target/tls.sh@191 -- # waitforlisten 3799507 /var/tmp/bdevperf.sock 00:20:57.042 16:02:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 3799507 ']' 00:20:57.042 16:02:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:57.042 16:02:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:57.042 16:02:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process 
to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:57.042 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:57.043 16:02:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:57.043 16:02:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:57.043 [2024-07-15 16:02:25.952792] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 00:20:57.043 [2024-07-15 16:02:25.952839] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3799507 ] 00:20:57.043 EAL: No free 2048 kB hugepages reported on node 1 00:20:57.302 [2024-07-15 16:02:26.002881] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:57.302 [2024-07-15 16:02:26.075716] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:57.869 16:02:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:57.869 16:02:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:20:57.869 16:02:26 nvmf_tcp.nvmf_tls -- target/tls.sh@192 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.ksN3WaWpt5 00:20:58.128 [2024-07-15 16:02:26.897327] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:58.128 [2024-07-15 16:02:26.897399] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:20:58.128 TLSTESTn1 00:20:58.128 16:02:26 nvmf_tcp.nvmf_tls -- target/tls.sh@196 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:20:58.388 16:02:27 nvmf_tcp.nvmf_tls -- target/tls.sh@196 -- # tgtconf='{ 00:20:58.388 "subsystems": [ 00:20:58.388 { 00:20:58.388 "subsystem": "keyring", 00:20:58.388 "config": [] 00:20:58.388 }, 00:20:58.388 { 00:20:58.388 "subsystem": "iobuf", 00:20:58.388 "config": [ 00:20:58.388 { 00:20:58.388 "method": "iobuf_set_options", 00:20:58.388 "params": { 00:20:58.388 "small_pool_count": 8192, 00:20:58.388 "large_pool_count": 1024, 00:20:58.388 "small_bufsize": 8192, 00:20:58.388 "large_bufsize": 135168 00:20:58.388 } 00:20:58.388 } 00:20:58.388 ] 00:20:58.388 }, 00:20:58.388 { 00:20:58.388 "subsystem": "sock", 00:20:58.388 "config": [ 00:20:58.388 { 00:20:58.388 "method": "sock_set_default_impl", 00:20:58.388 "params": { 00:20:58.388 "impl_name": "posix" 00:20:58.388 } 00:20:58.388 }, 00:20:58.388 { 00:20:58.388 "method": "sock_impl_set_options", 00:20:58.388 "params": { 00:20:58.388 "impl_name": "ssl", 00:20:58.388 "recv_buf_size": 4096, 00:20:58.388 "send_buf_size": 4096, 00:20:58.388 "enable_recv_pipe": true, 00:20:58.388 "enable_quickack": false, 00:20:58.388 "enable_placement_id": 0, 00:20:58.388 "enable_zerocopy_send_server": true, 00:20:58.388 "enable_zerocopy_send_client": false, 00:20:58.388 "zerocopy_threshold": 0, 00:20:58.388 "tls_version": 0, 00:20:58.388 "enable_ktls": false 00:20:58.388 } 00:20:58.388 }, 00:20:58.388 { 00:20:58.388 "method": "sock_impl_set_options", 00:20:58.388 "params": { 00:20:58.388 "impl_name": "posix", 00:20:58.388 "recv_buf_size": 2097152, 00:20:58.388 
"send_buf_size": 2097152, 00:20:58.388 "enable_recv_pipe": true, 00:20:58.388 "enable_quickack": false, 00:20:58.388 "enable_placement_id": 0, 00:20:58.388 "enable_zerocopy_send_server": true, 00:20:58.388 "enable_zerocopy_send_client": false, 00:20:58.388 "zerocopy_threshold": 0, 00:20:58.388 "tls_version": 0, 00:20:58.388 "enable_ktls": false 00:20:58.388 } 00:20:58.388 } 00:20:58.388 ] 00:20:58.388 }, 00:20:58.388 { 00:20:58.388 "subsystem": "vmd", 00:20:58.388 "config": [] 00:20:58.388 }, 00:20:58.388 { 00:20:58.388 "subsystem": "accel", 00:20:58.388 "config": [ 00:20:58.388 { 00:20:58.388 "method": "accel_set_options", 00:20:58.388 "params": { 00:20:58.388 "small_cache_size": 128, 00:20:58.388 "large_cache_size": 16, 00:20:58.388 "task_count": 2048, 00:20:58.388 "sequence_count": 2048, 00:20:58.388 "buf_count": 2048 00:20:58.388 } 00:20:58.388 } 00:20:58.388 ] 00:20:58.388 }, 00:20:58.388 { 00:20:58.388 "subsystem": "bdev", 00:20:58.388 "config": [ 00:20:58.388 { 00:20:58.388 "method": "bdev_set_options", 00:20:58.388 "params": { 00:20:58.388 "bdev_io_pool_size": 65535, 00:20:58.388 "bdev_io_cache_size": 256, 00:20:58.388 "bdev_auto_examine": true, 00:20:58.388 "iobuf_small_cache_size": 128, 00:20:58.388 "iobuf_large_cache_size": 16 00:20:58.388 } 00:20:58.388 }, 00:20:58.388 { 00:20:58.388 "method": "bdev_raid_set_options", 00:20:58.388 "params": { 00:20:58.388 "process_window_size_kb": 1024 00:20:58.388 } 00:20:58.388 }, 00:20:58.388 { 00:20:58.388 "method": "bdev_iscsi_set_options", 00:20:58.388 "params": { 00:20:58.388 "timeout_sec": 30 00:20:58.388 } 00:20:58.388 }, 00:20:58.388 { 00:20:58.388 "method": "bdev_nvme_set_options", 00:20:58.388 "params": { 00:20:58.388 "action_on_timeout": "none", 00:20:58.388 "timeout_us": 0, 00:20:58.388 "timeout_admin_us": 0, 00:20:58.388 "keep_alive_timeout_ms": 10000, 00:20:58.388 "arbitration_burst": 0, 00:20:58.388 "low_priority_weight": 0, 00:20:58.388 "medium_priority_weight": 0, 00:20:58.388 "high_priority_weight": 0, 00:20:58.388 "nvme_adminq_poll_period_us": 10000, 00:20:58.388 "nvme_ioq_poll_period_us": 0, 00:20:58.388 "io_queue_requests": 0, 00:20:58.388 "delay_cmd_submit": true, 00:20:58.388 "transport_retry_count": 4, 00:20:58.388 "bdev_retry_count": 3, 00:20:58.388 "transport_ack_timeout": 0, 00:20:58.388 "ctrlr_loss_timeout_sec": 0, 00:20:58.388 "reconnect_delay_sec": 0, 00:20:58.388 "fast_io_fail_timeout_sec": 0, 00:20:58.388 "disable_auto_failback": false, 00:20:58.388 "generate_uuids": false, 00:20:58.388 "transport_tos": 0, 00:20:58.388 "nvme_error_stat": false, 00:20:58.388 "rdma_srq_size": 0, 00:20:58.388 "io_path_stat": false, 00:20:58.388 "allow_accel_sequence": false, 00:20:58.388 "rdma_max_cq_size": 0, 00:20:58.388 "rdma_cm_event_timeout_ms": 0, 00:20:58.388 "dhchap_digests": [ 00:20:58.388 "sha256", 00:20:58.388 "sha384", 00:20:58.388 "sha512" 00:20:58.388 ], 00:20:58.388 "dhchap_dhgroups": [ 00:20:58.388 "null", 00:20:58.388 "ffdhe2048", 00:20:58.388 "ffdhe3072", 00:20:58.388 "ffdhe4096", 00:20:58.388 "ffdhe6144", 00:20:58.388 "ffdhe8192" 00:20:58.388 ] 00:20:58.388 } 00:20:58.388 }, 00:20:58.388 { 00:20:58.388 "method": "bdev_nvme_set_hotplug", 00:20:58.388 "params": { 00:20:58.388 "period_us": 100000, 00:20:58.388 "enable": false 00:20:58.388 } 00:20:58.388 }, 00:20:58.388 { 00:20:58.388 "method": "bdev_malloc_create", 00:20:58.388 "params": { 00:20:58.388 "name": "malloc0", 00:20:58.388 "num_blocks": 8192, 00:20:58.388 "block_size": 4096, 00:20:58.388 "physical_block_size": 4096, 00:20:58.388 "uuid": 
"f922f37b-a83a-4d42-91ba-5c907eba9403", 00:20:58.388 "optimal_io_boundary": 0 00:20:58.388 } 00:20:58.388 }, 00:20:58.388 { 00:20:58.388 "method": "bdev_wait_for_examine" 00:20:58.388 } 00:20:58.388 ] 00:20:58.388 }, 00:20:58.388 { 00:20:58.388 "subsystem": "nbd", 00:20:58.388 "config": [] 00:20:58.388 }, 00:20:58.388 { 00:20:58.388 "subsystem": "scheduler", 00:20:58.388 "config": [ 00:20:58.388 { 00:20:58.388 "method": "framework_set_scheduler", 00:20:58.388 "params": { 00:20:58.388 "name": "static" 00:20:58.388 } 00:20:58.388 } 00:20:58.388 ] 00:20:58.388 }, 00:20:58.388 { 00:20:58.388 "subsystem": "nvmf", 00:20:58.388 "config": [ 00:20:58.388 { 00:20:58.388 "method": "nvmf_set_config", 00:20:58.388 "params": { 00:20:58.388 "discovery_filter": "match_any", 00:20:58.388 "admin_cmd_passthru": { 00:20:58.388 "identify_ctrlr": false 00:20:58.388 } 00:20:58.388 } 00:20:58.389 }, 00:20:58.389 { 00:20:58.389 "method": "nvmf_set_max_subsystems", 00:20:58.389 "params": { 00:20:58.389 "max_subsystems": 1024 00:20:58.389 } 00:20:58.389 }, 00:20:58.389 { 00:20:58.389 "method": "nvmf_set_crdt", 00:20:58.389 "params": { 00:20:58.389 "crdt1": 0, 00:20:58.389 "crdt2": 0, 00:20:58.389 "crdt3": 0 00:20:58.389 } 00:20:58.389 }, 00:20:58.389 { 00:20:58.389 "method": "nvmf_create_transport", 00:20:58.389 "params": { 00:20:58.389 "trtype": "TCP", 00:20:58.389 "max_queue_depth": 128, 00:20:58.389 "max_io_qpairs_per_ctrlr": 127, 00:20:58.389 "in_capsule_data_size": 4096, 00:20:58.389 "max_io_size": 131072, 00:20:58.389 "io_unit_size": 131072, 00:20:58.389 "max_aq_depth": 128, 00:20:58.389 "num_shared_buffers": 511, 00:20:58.389 "buf_cache_size": 4294967295, 00:20:58.389 "dif_insert_or_strip": false, 00:20:58.389 "zcopy": false, 00:20:58.389 "c2h_success": false, 00:20:58.389 "sock_priority": 0, 00:20:58.389 "abort_timeout_sec": 1, 00:20:58.389 "ack_timeout": 0, 00:20:58.389 "data_wr_pool_size": 0 00:20:58.389 } 00:20:58.389 }, 00:20:58.389 { 00:20:58.389 "method": "nvmf_create_subsystem", 00:20:58.389 "params": { 00:20:58.389 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:58.389 "allow_any_host": false, 00:20:58.389 "serial_number": "SPDK00000000000001", 00:20:58.389 "model_number": "SPDK bdev Controller", 00:20:58.389 "max_namespaces": 10, 00:20:58.389 "min_cntlid": 1, 00:20:58.389 "max_cntlid": 65519, 00:20:58.389 "ana_reporting": false 00:20:58.389 } 00:20:58.389 }, 00:20:58.389 { 00:20:58.389 "method": "nvmf_subsystem_add_host", 00:20:58.389 "params": { 00:20:58.389 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:58.389 "host": "nqn.2016-06.io.spdk:host1", 00:20:58.389 "psk": "/tmp/tmp.ksN3WaWpt5" 00:20:58.389 } 00:20:58.389 }, 00:20:58.389 { 00:20:58.389 "method": "nvmf_subsystem_add_ns", 00:20:58.389 "params": { 00:20:58.389 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:58.389 "namespace": { 00:20:58.389 "nsid": 1, 00:20:58.389 "bdev_name": "malloc0", 00:20:58.389 "nguid": "F922F37BA83A4D4291BA5C907EBA9403", 00:20:58.389 "uuid": "f922f37b-a83a-4d42-91ba-5c907eba9403", 00:20:58.389 "no_auto_visible": false 00:20:58.389 } 00:20:58.389 } 00:20:58.389 }, 00:20:58.389 { 00:20:58.389 "method": "nvmf_subsystem_add_listener", 00:20:58.389 "params": { 00:20:58.389 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:58.389 "listen_address": { 00:20:58.389 "trtype": "TCP", 00:20:58.389 "adrfam": "IPv4", 00:20:58.389 "traddr": "10.0.0.2", 00:20:58.389 "trsvcid": "4420" 00:20:58.389 }, 00:20:58.389 "secure_channel": true 00:20:58.389 } 00:20:58.389 } 00:20:58.389 ] 00:20:58.389 } 00:20:58.389 ] 00:20:58.389 }' 00:20:58.389 16:02:27 
nvmf_tcp.nvmf_tls -- target/tls.sh@197 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:20:58.648 16:02:27 nvmf_tcp.nvmf_tls -- target/tls.sh@197 -- # bdevperfconf='{ 00:20:58.648 "subsystems": [ 00:20:58.648 { 00:20:58.648 "subsystem": "keyring", 00:20:58.648 "config": [] 00:20:58.648 }, 00:20:58.648 { 00:20:58.648 "subsystem": "iobuf", 00:20:58.648 "config": [ 00:20:58.648 { 00:20:58.648 "method": "iobuf_set_options", 00:20:58.648 "params": { 00:20:58.648 "small_pool_count": 8192, 00:20:58.648 "large_pool_count": 1024, 00:20:58.648 "small_bufsize": 8192, 00:20:58.648 "large_bufsize": 135168 00:20:58.648 } 00:20:58.648 } 00:20:58.648 ] 00:20:58.648 }, 00:20:58.648 { 00:20:58.648 "subsystem": "sock", 00:20:58.648 "config": [ 00:20:58.648 { 00:20:58.648 "method": "sock_set_default_impl", 00:20:58.648 "params": { 00:20:58.648 "impl_name": "posix" 00:20:58.648 } 00:20:58.648 }, 00:20:58.648 { 00:20:58.648 "method": "sock_impl_set_options", 00:20:58.648 "params": { 00:20:58.648 "impl_name": "ssl", 00:20:58.648 "recv_buf_size": 4096, 00:20:58.648 "send_buf_size": 4096, 00:20:58.648 "enable_recv_pipe": true, 00:20:58.648 "enable_quickack": false, 00:20:58.648 "enable_placement_id": 0, 00:20:58.648 "enable_zerocopy_send_server": true, 00:20:58.648 "enable_zerocopy_send_client": false, 00:20:58.648 "zerocopy_threshold": 0, 00:20:58.648 "tls_version": 0, 00:20:58.648 "enable_ktls": false 00:20:58.649 } 00:20:58.649 }, 00:20:58.649 { 00:20:58.649 "method": "sock_impl_set_options", 00:20:58.649 "params": { 00:20:58.649 "impl_name": "posix", 00:20:58.649 "recv_buf_size": 2097152, 00:20:58.649 "send_buf_size": 2097152, 00:20:58.649 "enable_recv_pipe": true, 00:20:58.649 "enable_quickack": false, 00:20:58.649 "enable_placement_id": 0, 00:20:58.649 "enable_zerocopy_send_server": true, 00:20:58.649 "enable_zerocopy_send_client": false, 00:20:58.649 "zerocopy_threshold": 0, 00:20:58.649 "tls_version": 0, 00:20:58.649 "enable_ktls": false 00:20:58.649 } 00:20:58.649 } 00:20:58.649 ] 00:20:58.649 }, 00:20:58.649 { 00:20:58.649 "subsystem": "vmd", 00:20:58.649 "config": [] 00:20:58.649 }, 00:20:58.649 { 00:20:58.649 "subsystem": "accel", 00:20:58.649 "config": [ 00:20:58.649 { 00:20:58.649 "method": "accel_set_options", 00:20:58.649 "params": { 00:20:58.649 "small_cache_size": 128, 00:20:58.649 "large_cache_size": 16, 00:20:58.649 "task_count": 2048, 00:20:58.649 "sequence_count": 2048, 00:20:58.649 "buf_count": 2048 00:20:58.649 } 00:20:58.649 } 00:20:58.649 ] 00:20:58.649 }, 00:20:58.649 { 00:20:58.649 "subsystem": "bdev", 00:20:58.649 "config": [ 00:20:58.649 { 00:20:58.649 "method": "bdev_set_options", 00:20:58.649 "params": { 00:20:58.649 "bdev_io_pool_size": 65535, 00:20:58.649 "bdev_io_cache_size": 256, 00:20:58.649 "bdev_auto_examine": true, 00:20:58.649 "iobuf_small_cache_size": 128, 00:20:58.649 "iobuf_large_cache_size": 16 00:20:58.649 } 00:20:58.649 }, 00:20:58.649 { 00:20:58.649 "method": "bdev_raid_set_options", 00:20:58.649 "params": { 00:20:58.649 "process_window_size_kb": 1024 00:20:58.649 } 00:20:58.649 }, 00:20:58.649 { 00:20:58.649 "method": "bdev_iscsi_set_options", 00:20:58.649 "params": { 00:20:58.649 "timeout_sec": 30 00:20:58.649 } 00:20:58.649 }, 00:20:58.649 { 00:20:58.649 "method": "bdev_nvme_set_options", 00:20:58.649 "params": { 00:20:58.649 "action_on_timeout": "none", 00:20:58.649 "timeout_us": 0, 00:20:58.649 "timeout_admin_us": 0, 00:20:58.649 "keep_alive_timeout_ms": 10000, 00:20:58.649 "arbitration_burst": 0, 
00:20:58.649 "low_priority_weight": 0, 00:20:58.649 "medium_priority_weight": 0, 00:20:58.649 "high_priority_weight": 0, 00:20:58.649 "nvme_adminq_poll_period_us": 10000, 00:20:58.649 "nvme_ioq_poll_period_us": 0, 00:20:58.649 "io_queue_requests": 512, 00:20:58.649 "delay_cmd_submit": true, 00:20:58.649 "transport_retry_count": 4, 00:20:58.649 "bdev_retry_count": 3, 00:20:58.649 "transport_ack_timeout": 0, 00:20:58.649 "ctrlr_loss_timeout_sec": 0, 00:20:58.649 "reconnect_delay_sec": 0, 00:20:58.649 "fast_io_fail_timeout_sec": 0, 00:20:58.649 "disable_auto_failback": false, 00:20:58.649 "generate_uuids": false, 00:20:58.649 "transport_tos": 0, 00:20:58.649 "nvme_error_stat": false, 00:20:58.649 "rdma_srq_size": 0, 00:20:58.649 "io_path_stat": false, 00:20:58.649 "allow_accel_sequence": false, 00:20:58.649 "rdma_max_cq_size": 0, 00:20:58.649 "rdma_cm_event_timeout_ms": 0, 00:20:58.649 "dhchap_digests": [ 00:20:58.649 "sha256", 00:20:58.649 "sha384", 00:20:58.649 "sha512" 00:20:58.649 ], 00:20:58.649 "dhchap_dhgroups": [ 00:20:58.649 "null", 00:20:58.649 "ffdhe2048", 00:20:58.649 "ffdhe3072", 00:20:58.649 "ffdhe4096", 00:20:58.649 "ffdhe6144", 00:20:58.649 "ffdhe8192" 00:20:58.649 ] 00:20:58.649 } 00:20:58.649 }, 00:20:58.649 { 00:20:58.649 "method": "bdev_nvme_attach_controller", 00:20:58.649 "params": { 00:20:58.649 "name": "TLSTEST", 00:20:58.649 "trtype": "TCP", 00:20:58.649 "adrfam": "IPv4", 00:20:58.649 "traddr": "10.0.0.2", 00:20:58.649 "trsvcid": "4420", 00:20:58.649 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:58.649 "prchk_reftag": false, 00:20:58.649 "prchk_guard": false, 00:20:58.649 "ctrlr_loss_timeout_sec": 0, 00:20:58.649 "reconnect_delay_sec": 0, 00:20:58.649 "fast_io_fail_timeout_sec": 0, 00:20:58.649 "psk": "/tmp/tmp.ksN3WaWpt5", 00:20:58.649 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:58.649 "hdgst": false, 00:20:58.649 "ddgst": false 00:20:58.649 } 00:20:58.649 }, 00:20:58.649 { 00:20:58.649 "method": "bdev_nvme_set_hotplug", 00:20:58.649 "params": { 00:20:58.649 "period_us": 100000, 00:20:58.649 "enable": false 00:20:58.649 } 00:20:58.649 }, 00:20:58.649 { 00:20:58.649 "method": "bdev_wait_for_examine" 00:20:58.649 } 00:20:58.649 ] 00:20:58.649 }, 00:20:58.649 { 00:20:58.649 "subsystem": "nbd", 00:20:58.649 "config": [] 00:20:58.649 } 00:20:58.649 ] 00:20:58.649 }' 00:20:58.649 16:02:27 nvmf_tcp.nvmf_tls -- target/tls.sh@199 -- # killprocess 3799507 00:20:58.649 16:02:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 3799507 ']' 00:20:58.649 16:02:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 3799507 00:20:58.649 16:02:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:20:58.649 16:02:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:58.649 16:02:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3799507 00:20:58.649 16:02:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:20:58.649 16:02:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:20:58.649 16:02:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3799507' 00:20:58.649 killing process with pid 3799507 00:20:58.649 16:02:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 3799507 00:20:58.649 Received shutdown signal, test time was about 10.000000 seconds 00:20:58.649 00:20:58.649 Latency(us) 00:20:58.649 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min 
max 00:20:58.650 =================================================================================================================== 00:20:58.650 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:58.650 [2024-07-15 16:02:27.533741] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:20:58.650 16:02:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 3799507 00:20:58.909 16:02:27 nvmf_tcp.nvmf_tls -- target/tls.sh@200 -- # killprocess 3799035 00:20:58.909 16:02:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 3799035 ']' 00:20:58.909 16:02:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 3799035 00:20:58.909 16:02:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:20:58.910 16:02:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:58.910 16:02:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3799035 00:20:58.910 16:02:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:20:58.910 16:02:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:20:58.910 16:02:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3799035' 00:20:58.910 killing process with pid 3799035 00:20:58.910 16:02:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 3799035 00:20:58.910 [2024-07-15 16:02:27.760234] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:20:58.910 16:02:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 3799035 00:20:59.170 16:02:27 nvmf_tcp.nvmf_tls -- target/tls.sh@203 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:20:59.170 16:02:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:59.170 16:02:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:59.170 16:02:27 nvmf_tcp.nvmf_tls -- target/tls.sh@203 -- # echo '{ 00:20:59.170 "subsystems": [ 00:20:59.170 { 00:20:59.170 "subsystem": "keyring", 00:20:59.170 "config": [] 00:20:59.170 }, 00:20:59.170 { 00:20:59.170 "subsystem": "iobuf", 00:20:59.170 "config": [ 00:20:59.170 { 00:20:59.170 "method": "iobuf_set_options", 00:20:59.170 "params": { 00:20:59.170 "small_pool_count": 8192, 00:20:59.170 "large_pool_count": 1024, 00:20:59.170 "small_bufsize": 8192, 00:20:59.170 "large_bufsize": 135168 00:20:59.170 } 00:20:59.170 } 00:20:59.170 ] 00:20:59.170 }, 00:20:59.170 { 00:20:59.170 "subsystem": "sock", 00:20:59.170 "config": [ 00:20:59.170 { 00:20:59.170 "method": "sock_set_default_impl", 00:20:59.170 "params": { 00:20:59.170 "impl_name": "posix" 00:20:59.170 } 00:20:59.170 }, 00:20:59.170 { 00:20:59.170 "method": "sock_impl_set_options", 00:20:59.170 "params": { 00:20:59.170 "impl_name": "ssl", 00:20:59.170 "recv_buf_size": 4096, 00:20:59.170 "send_buf_size": 4096, 00:20:59.170 "enable_recv_pipe": true, 00:20:59.170 "enable_quickack": false, 00:20:59.170 "enable_placement_id": 0, 00:20:59.170 "enable_zerocopy_send_server": true, 00:20:59.170 "enable_zerocopy_send_client": false, 00:20:59.170 "zerocopy_threshold": 0, 00:20:59.170 "tls_version": 0, 00:20:59.170 "enable_ktls": false 00:20:59.170 } 00:20:59.170 }, 00:20:59.170 { 00:20:59.170 "method": "sock_impl_set_options", 00:20:59.170 "params": { 00:20:59.170 "impl_name": "posix", 00:20:59.170 
"recv_buf_size": 2097152, 00:20:59.170 "send_buf_size": 2097152, 00:20:59.170 "enable_recv_pipe": true, 00:20:59.170 "enable_quickack": false, 00:20:59.170 "enable_placement_id": 0, 00:20:59.170 "enable_zerocopy_send_server": true, 00:20:59.170 "enable_zerocopy_send_client": false, 00:20:59.170 "zerocopy_threshold": 0, 00:20:59.170 "tls_version": 0, 00:20:59.170 "enable_ktls": false 00:20:59.170 } 00:20:59.170 } 00:20:59.170 ] 00:20:59.170 }, 00:20:59.170 { 00:20:59.170 "subsystem": "vmd", 00:20:59.170 "config": [] 00:20:59.170 }, 00:20:59.170 { 00:20:59.170 "subsystem": "accel", 00:20:59.170 "config": [ 00:20:59.170 { 00:20:59.170 "method": "accel_set_options", 00:20:59.170 "params": { 00:20:59.170 "small_cache_size": 128, 00:20:59.170 "large_cache_size": 16, 00:20:59.170 "task_count": 2048, 00:20:59.170 "sequence_count": 2048, 00:20:59.170 "buf_count": 2048 00:20:59.170 } 00:20:59.170 } 00:20:59.170 ] 00:20:59.170 }, 00:20:59.170 { 00:20:59.170 "subsystem": "bdev", 00:20:59.170 "config": [ 00:20:59.170 { 00:20:59.170 "method": "bdev_set_options", 00:20:59.170 "params": { 00:20:59.170 "bdev_io_pool_size": 65535, 00:20:59.170 "bdev_io_cache_size": 256, 00:20:59.170 "bdev_auto_examine": true, 00:20:59.170 "iobuf_small_cache_size": 128, 00:20:59.170 "iobuf_large_cache_size": 16 00:20:59.170 } 00:20:59.170 }, 00:20:59.170 { 00:20:59.170 "method": "bdev_raid_set_options", 00:20:59.170 "params": { 00:20:59.170 "process_window_size_kb": 1024 00:20:59.170 } 00:20:59.170 }, 00:20:59.170 { 00:20:59.170 "method": "bdev_iscsi_set_options", 00:20:59.170 "params": { 00:20:59.170 "timeout_sec": 30 00:20:59.170 } 00:20:59.170 }, 00:20:59.170 { 00:20:59.170 "method": "bdev_nvme_set_options", 00:20:59.170 "params": { 00:20:59.170 "action_on_timeout": "none", 00:20:59.170 "timeout_us": 0, 00:20:59.170 "timeout_admin_us": 0, 00:20:59.170 "keep_alive_timeout_ms": 10000, 00:20:59.170 "arbitration_burst": 0, 00:20:59.170 "low_priority_weight": 0, 00:20:59.170 "medium_priority_weight": 0, 00:20:59.170 "high_priority_weight": 0, 00:20:59.170 "nvme_adminq_poll_period_us": 10000, 00:20:59.170 "nvme_ioq_poll_period_us": 0, 00:20:59.170 "io_queue_requests": 0, 00:20:59.170 "delay_cmd_submit": true, 00:20:59.170 "transport_retry_count": 4, 00:20:59.170 "bdev_retry_count": 3, 00:20:59.170 "transport_ack_timeout": 0, 00:20:59.170 "ctrlr_loss_timeout_sec": 0, 00:20:59.170 "reconnect_delay_sec": 0, 00:20:59.170 "fast_io_fail_timeout_sec": 0, 00:20:59.170 "disable_auto_failback": false, 00:20:59.170 "generate_uuids": false, 00:20:59.170 "transport_tos": 0, 00:20:59.170 "nvme_error_stat": false, 00:20:59.170 "rdma_srq_size": 0, 00:20:59.170 "io_path_stat": false, 00:20:59.170 "allow_accel_sequence": false, 00:20:59.170 "rdma_max_cq_size": 0, 00:20:59.170 "rdma_cm_event_timeout_ms": 0, 00:20:59.170 "dhchap_digests": [ 00:20:59.170 "sha256", 00:20:59.170 "sha384", 00:20:59.170 "sha512" 00:20:59.170 ], 00:20:59.170 "dhchap_dhgroups": [ 00:20:59.170 "null", 00:20:59.170 "ffdhe2048", 00:20:59.170 "ffdhe3072", 00:20:59.170 "ffdhe4096", 00:20:59.170 "ffdhe6144", 00:20:59.170 "ffdhe8192" 00:20:59.171 ] 00:20:59.171 } 00:20:59.171 }, 00:20:59.171 { 00:20:59.171 "method": "bdev_nvme_set_hotplug", 00:20:59.171 "params": { 00:20:59.171 "period_us": 100000, 00:20:59.171 "enable": false 00:20:59.171 } 00:20:59.171 }, 00:20:59.171 { 00:20:59.171 "method": "bdev_malloc_create", 00:20:59.171 "params": { 00:20:59.171 "name": "malloc0", 00:20:59.171 "num_blocks": 8192, 00:20:59.171 "block_size": 4096, 00:20:59.171 "physical_block_size": 4096, 
00:20:59.171 "uuid": "f922f37b-a83a-4d42-91ba-5c907eba9403", 00:20:59.171 "optimal_io_boundary": 0 00:20:59.171 } 00:20:59.171 }, 00:20:59.171 { 00:20:59.171 "method": "bdev_wait_for_examine" 00:20:59.171 } 00:20:59.171 ] 00:20:59.171 }, 00:20:59.171 { 00:20:59.171 "subsystem": "nbd", 00:20:59.171 "config": [] 00:20:59.171 }, 00:20:59.171 { 00:20:59.171 "subsystem": "scheduler", 00:20:59.171 "config": [ 00:20:59.171 { 00:20:59.171 "method": "framework_set_scheduler", 00:20:59.171 "params": { 00:20:59.171 "name": "static" 00:20:59.171 } 00:20:59.171 } 00:20:59.171 ] 00:20:59.171 }, 00:20:59.171 { 00:20:59.171 "subsystem": "nvmf", 00:20:59.171 "config": [ 00:20:59.171 { 00:20:59.171 "method": "nvmf_set_config", 00:20:59.171 "params": { 00:20:59.171 "discovery_filter": "match_any", 00:20:59.171 "admin_cmd_passthru": { 00:20:59.171 "identify_ctrlr": false 00:20:59.171 } 00:20:59.171 } 00:20:59.171 }, 00:20:59.171 { 00:20:59.171 "method": "nvmf_set_max_subsystems", 00:20:59.171 "params": { 00:20:59.171 "max_subsystems": 1024 00:20:59.171 } 00:20:59.171 }, 00:20:59.171 { 00:20:59.171 "method": "nvmf_set_crdt", 00:20:59.171 "params": { 00:20:59.171 "crdt1": 0, 00:20:59.171 "crdt2": 0, 00:20:59.171 "crdt3": 0 00:20:59.171 } 00:20:59.171 }, 00:20:59.171 { 00:20:59.171 "method": "nvmf_create_transport", 00:20:59.171 "params": { 00:20:59.171 "trtype": "TCP", 00:20:59.171 "max_queue_depth": 128, 00:20:59.171 "max_io_qpairs_per_ctrlr": 127, 00:20:59.171 "in_capsule_data_size": 4096, 00:20:59.171 "max_io_size": 131072, 00:20:59.171 "io_unit_size": 131072, 00:20:59.171 "max_aq_depth": 128, 00:20:59.171 "num_shared_buffers": 511, 00:20:59.171 "buf_cache_size": 4294967295, 00:20:59.171 "dif_insert_or_strip": false, 00:20:59.171 "zcopy": false, 00:20:59.171 "c2h_success": false, 00:20:59.171 "sock_priority": 0, 00:20:59.171 "abort_timeout_sec": 1, 00:20:59.171 "ack_timeout": 0, 00:20:59.171 "data_wr_pool_size": 0 00:20:59.171 } 00:20:59.171 }, 00:20:59.171 { 00:20:59.171 "method": "nvmf_create_subsystem", 00:20:59.171 "params": { 00:20:59.171 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:59.171 "allow_any_host": false, 00:20:59.171 "serial_number": "SPDK00000000000001", 00:20:59.171 "model_number": "SPDK bdev Controller", 00:20:59.171 "max_namespaces": 10, 00:20:59.171 "min_cntlid": 1, 00:20:59.171 "max_cntlid": 65519, 00:20:59.171 "ana_reporting": false 00:20:59.171 } 00:20:59.171 }, 00:20:59.171 { 00:20:59.171 "method": "nvmf_subsystem_add_host", 00:20:59.171 "params": { 00:20:59.171 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:59.171 "host": "nqn.2016-06.io.spdk:host1", 00:20:59.171 "psk": "/tmp/tmp.ksN3WaWpt5" 00:20:59.171 } 00:20:59.171 }, 00:20:59.171 { 00:20:59.171 "method": "nvmf_subsystem_add_ns", 00:20:59.171 "params": { 00:20:59.171 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:59.171 "namespace": { 00:20:59.171 "nsid": 1, 00:20:59.171 "bdev_name": "malloc0", 00:20:59.171 "nguid": "F922F37BA83A4D4291BA5C907EBA9403", 00:20:59.171 "uuid": "f922f37b-a83a-4d42-91ba-5c907eba9403", 00:20:59.171 "no_auto_visible": false 00:20:59.171 } 00:20:59.171 } 00:20:59.171 }, 00:20:59.171 { 00:20:59.171 "method": "nvmf_subsystem_add_listener", 00:20:59.171 "params": { 00:20:59.171 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:59.171 "listen_address": { 00:20:59.171 "trtype": "TCP", 00:20:59.171 "adrfam": "IPv4", 00:20:59.171 "traddr": "10.0.0.2", 00:20:59.171 "trsvcid": "4420" 00:20:59.171 }, 00:20:59.171 "secure_channel": true 00:20:59.171 } 00:20:59.171 } 00:20:59.171 ] 00:20:59.171 } 00:20:59.171 ] 00:20:59.171 }' 
00:20:59.171 16:02:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:59.171 16:02:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=3799764 00:20:59.171 16:02:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:20:59.171 16:02:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 3799764 00:20:59.171 16:02:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 3799764 ']' 00:20:59.171 16:02:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:59.171 16:02:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:59.171 16:02:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:59.171 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:59.171 16:02:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:59.171 16:02:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:59.171 [2024-07-15 16:02:27.995726] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 00:20:59.171 [2024-07-15 16:02:27.995770] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:59.171 EAL: No free 2048 kB hugepages reported on node 1 00:20:59.171 [2024-07-15 16:02:28.051459] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:59.431 [2024-07-15 16:02:28.130768] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:59.431 [2024-07-15 16:02:28.130801] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:59.431 [2024-07-15 16:02:28.130808] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:59.431 [2024-07-15 16:02:28.130814] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:59.431 [2024-07-15 16:02:28.130819] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:20:59.431 [2024-07-15 16:02:28.130868] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:59.431 [2024-07-15 16:02:28.333193] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:59.431 [2024-07-15 16:02:28.349167] tcp.c:3693:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:20:59.689 [2024-07-15 16:02:28.365223] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:59.689 [2024-07-15 16:02:28.378555] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:59.949 16:02:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:59.949 16:02:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:20:59.949 16:02:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:59.949 16:02:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:59.949 16:02:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:59.949 16:02:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:59.949 16:02:28 nvmf_tcp.nvmf_tls -- target/tls.sh@207 -- # bdevperf_pid=3800006 00:20:59.949 16:02:28 nvmf_tcp.nvmf_tls -- target/tls.sh@208 -- # waitforlisten 3800006 /var/tmp/bdevperf.sock 00:20:59.949 16:02:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 3800006 ']' 00:20:59.949 16:02:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:59.949 16:02:28 nvmf_tcp.nvmf_tls -- target/tls.sh@204 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:20:59.949 16:02:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:59.949 16:02:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:59.949 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
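The initiator is exercised the same way in this pass: the bdevperf configuration saved at target/tls.sh@197 (it includes the bdev_nvme_attach_controller entry carrying the psk path) is echoed back through -c /dev/fd/63, so the TLSTEST controller is created from config at startup rather than by RPC; with -z the app then idles until the run is started over its RPC socket. Sketched without the full workspace paths:

    build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock \
        -q 128 -o 4096 -w verify -t 10 -c <(echo "$bdevperfconf")
    # kick off the 10-second verify workload (target/tls.sh@211 does this next)
    examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests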
00:20:59.949 16:02:28 nvmf_tcp.nvmf_tls -- target/tls.sh@204 -- # echo '{ 00:20:59.949 "subsystems": [ 00:20:59.949 { 00:20:59.949 "subsystem": "keyring", 00:20:59.949 "config": [] 00:20:59.949 }, 00:20:59.949 { 00:20:59.949 "subsystem": "iobuf", 00:20:59.949 "config": [ 00:20:59.949 { 00:20:59.949 "method": "iobuf_set_options", 00:20:59.949 "params": { 00:20:59.949 "small_pool_count": 8192, 00:20:59.949 "large_pool_count": 1024, 00:20:59.949 "small_bufsize": 8192, 00:20:59.949 "large_bufsize": 135168 00:20:59.949 } 00:20:59.949 } 00:20:59.949 ] 00:20:59.949 }, 00:20:59.949 { 00:20:59.949 "subsystem": "sock", 00:20:59.949 "config": [ 00:20:59.949 { 00:20:59.949 "method": "sock_set_default_impl", 00:20:59.949 "params": { 00:20:59.949 "impl_name": "posix" 00:20:59.949 } 00:20:59.949 }, 00:20:59.949 { 00:20:59.949 "method": "sock_impl_set_options", 00:20:59.949 "params": { 00:20:59.949 "impl_name": "ssl", 00:20:59.949 "recv_buf_size": 4096, 00:20:59.949 "send_buf_size": 4096, 00:20:59.949 "enable_recv_pipe": true, 00:20:59.949 "enable_quickack": false, 00:20:59.949 "enable_placement_id": 0, 00:20:59.949 "enable_zerocopy_send_server": true, 00:20:59.949 "enable_zerocopy_send_client": false, 00:20:59.949 "zerocopy_threshold": 0, 00:20:59.949 "tls_version": 0, 00:20:59.949 "enable_ktls": false 00:20:59.949 } 00:20:59.949 }, 00:20:59.949 { 00:20:59.949 "method": "sock_impl_set_options", 00:20:59.949 "params": { 00:20:59.949 "impl_name": "posix", 00:20:59.949 "recv_buf_size": 2097152, 00:20:59.949 "send_buf_size": 2097152, 00:20:59.949 "enable_recv_pipe": true, 00:20:59.949 "enable_quickack": false, 00:20:59.949 "enable_placement_id": 0, 00:20:59.949 "enable_zerocopy_send_server": true, 00:20:59.949 "enable_zerocopy_send_client": false, 00:20:59.949 "zerocopy_threshold": 0, 00:20:59.949 "tls_version": 0, 00:20:59.949 "enable_ktls": false 00:20:59.949 } 00:20:59.949 } 00:20:59.949 ] 00:20:59.949 }, 00:20:59.949 { 00:20:59.949 "subsystem": "vmd", 00:20:59.949 "config": [] 00:20:59.949 }, 00:20:59.949 { 00:20:59.949 "subsystem": "accel", 00:20:59.949 "config": [ 00:20:59.949 { 00:20:59.949 "method": "accel_set_options", 00:20:59.949 "params": { 00:20:59.949 "small_cache_size": 128, 00:20:59.949 "large_cache_size": 16, 00:20:59.949 "task_count": 2048, 00:20:59.949 "sequence_count": 2048, 00:20:59.949 "buf_count": 2048 00:20:59.949 } 00:20:59.949 } 00:20:59.949 ] 00:20:59.949 }, 00:20:59.949 { 00:20:59.949 "subsystem": "bdev", 00:20:59.949 "config": [ 00:20:59.949 { 00:20:59.949 "method": "bdev_set_options", 00:20:59.949 "params": { 00:20:59.949 "bdev_io_pool_size": 65535, 00:20:59.949 "bdev_io_cache_size": 256, 00:20:59.949 "bdev_auto_examine": true, 00:20:59.949 "iobuf_small_cache_size": 128, 00:20:59.949 "iobuf_large_cache_size": 16 00:20:59.949 } 00:20:59.949 }, 00:20:59.949 { 00:20:59.949 "method": "bdev_raid_set_options", 00:20:59.949 "params": { 00:20:59.949 "process_window_size_kb": 1024 00:20:59.949 } 00:20:59.949 }, 00:20:59.949 { 00:20:59.949 "method": "bdev_iscsi_set_options", 00:20:59.949 "params": { 00:20:59.949 "timeout_sec": 30 00:20:59.949 } 00:20:59.949 }, 00:20:59.949 { 00:20:59.949 "method": "bdev_nvme_set_options", 00:20:59.949 "params": { 00:20:59.949 "action_on_timeout": "none", 00:20:59.949 "timeout_us": 0, 00:20:59.949 "timeout_admin_us": 0, 00:20:59.949 "keep_alive_timeout_ms": 10000, 00:20:59.949 "arbitration_burst": 0, 00:20:59.949 "low_priority_weight": 0, 00:20:59.949 "medium_priority_weight": 0, 00:20:59.949 "high_priority_weight": 0, 00:20:59.949 
"nvme_adminq_poll_period_us": 10000, 00:20:59.949 "nvme_ioq_poll_period_us": 0, 00:20:59.949 "io_queue_requests": 512, 00:20:59.949 "delay_cmd_submit": true, 00:20:59.949 "transport_retry_count": 4, 00:20:59.950 "bdev_retry_count": 3, 00:20:59.950 "transport_ack_timeout": 0, 00:20:59.950 "ctrlr_loss_timeout_sec": 0, 00:20:59.950 "reconnect_delay_sec": 0, 00:20:59.950 "fast_io_fail_timeout_sec": 0, 00:20:59.950 "disable_auto_failback": false, 00:20:59.950 "generate_uuids": false, 00:20:59.950 "transport_tos": 0, 00:20:59.950 "nvme_error_stat": false, 00:20:59.950 "rdma_srq_size": 0, 00:20:59.950 "io_path_stat": false, 00:20:59.950 "allow_accel_sequence": false, 00:20:59.950 "rdma_max_cq_size": 0, 00:20:59.950 "rdma_cm_event_timeout_ms": 0, 00:20:59.950 "dhchap_digests": [ 00:20:59.950 "sha256", 00:20:59.950 "sha384", 00:20:59.950 "sha512" 00:20:59.950 ], 00:20:59.950 "dhchap_dhgroups": [ 00:20:59.950 "null", 00:20:59.950 "ffdhe2048", 00:20:59.950 "ffdhe3072", 00:20:59.950 "ffdhe4096", 00:20:59.950 "ffdhe6144", 00:20:59.950 "ffdhe8192" 00:20:59.950 ] 00:20:59.950 } 00:20:59.950 }, 00:20:59.950 { 00:20:59.950 "method": "bdev_nvme_attach_controller", 00:20:59.950 "params": { 00:20:59.950 "name": "TLSTEST", 00:20:59.950 "trtype": "TCP", 00:20:59.950 "adrfam": "IPv4", 00:20:59.950 "traddr": "10.0.0.2", 00:20:59.950 "trsvcid": "4420", 00:20:59.950 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:59.950 "prchk_reftag": false, 00:20:59.950 "prchk_guard": false, 00:20:59.950 "ctrlr_loss_timeout_sec": 0, 00:20:59.950 "reconnect_delay_sec": 0, 00:20:59.950 "fast_io_fail_timeout_sec": 0, 00:20:59.950 "psk": "/tmp/tmp.ksN3WaWpt5", 00:20:59.950 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:59.950 "hdgst": false, 00:20:59.950 "ddgst": false 00:20:59.950 } 00:20:59.950 }, 00:20:59.950 { 00:20:59.950 "method": "bdev_nvme_set_hotplug", 00:20:59.950 "params": { 00:20:59.950 "period_us": 100000, 00:20:59.950 "enable": false 00:20:59.950 } 00:20:59.950 }, 00:20:59.950 { 00:20:59.950 "method": "bdev_wait_for_examine" 00:20:59.950 } 00:20:59.950 ] 00:20:59.950 }, 00:20:59.950 { 00:20:59.950 "subsystem": "nbd", 00:20:59.950 "config": [] 00:20:59.950 } 00:20:59.950 ] 00:20:59.950 }' 00:20:59.950 16:02:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:59.950 16:02:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:00.209 [2024-07-15 16:02:28.886274] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 
00:21:00.209 [2024-07-15 16:02:28.886322] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3800006 ] 00:21:00.209 EAL: No free 2048 kB hugepages reported on node 1 00:21:00.209 [2024-07-15 16:02:28.935299] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:00.209 [2024-07-15 16:02:29.007100] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:00.467 [2024-07-15 16:02:29.150036] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:00.467 [2024-07-15 16:02:29.150129] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:21:01.055 16:02:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:01.055 16:02:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:21:01.055 16:02:29 nvmf_tcp.nvmf_tls -- target/tls.sh@211 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:21:01.055 Running I/O for 10 seconds... 00:21:11.063 00:21:11.063 Latency(us) 00:21:11.063 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:11.063 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:21:11.063 Verification LBA range: start 0x0 length 0x2000 00:21:11.063 TLSTESTn1 : 10.05 4096.61 16.00 0.00 0.00 31160.97 6781.55 66561.78 00:21:11.063 =================================================================================================================== 00:21:11.063 Total : 4096.61 16.00 0.00 0.00 31160.97 6781.55 66561.78 00:21:11.063 0 00:21:11.063 16:02:39 nvmf_tcp.nvmf_tls -- target/tls.sh@213 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:11.063 16:02:39 nvmf_tcp.nvmf_tls -- target/tls.sh@214 -- # killprocess 3800006 00:21:11.063 16:02:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 3800006 ']' 00:21:11.063 16:02:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 3800006 00:21:11.063 16:02:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:21:11.063 16:02:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:11.063 16:02:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3800006 00:21:11.063 16:02:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:21:11.063 16:02:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:21:11.063 16:02:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3800006' 00:21:11.063 killing process with pid 3800006 00:21:11.063 16:02:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 3800006 00:21:11.063 Received shutdown signal, test time was about 10.000000 seconds 00:21:11.063 00:21:11.063 Latency(us) 00:21:11.063 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:11.063 =================================================================================================================== 00:21:11.063 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:11.063 [2024-07-15 16:02:39.900815] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 
'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:21:11.063 16:02:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 3800006 00:21:11.322 16:02:40 nvmf_tcp.nvmf_tls -- target/tls.sh@215 -- # killprocess 3799764 00:21:11.322 16:02:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 3799764 ']' 00:21:11.322 16:02:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 3799764 00:21:11.322 16:02:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:21:11.322 16:02:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:11.322 16:02:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3799764 00:21:11.322 16:02:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:21:11.322 16:02:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:21:11.322 16:02:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3799764' 00:21:11.322 killing process with pid 3799764 00:21:11.322 16:02:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 3799764 00:21:11.322 [2024-07-15 16:02:40.134722] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:21:11.322 16:02:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 3799764 00:21:11.581 16:02:40 nvmf_tcp.nvmf_tls -- target/tls.sh@218 -- # nvmfappstart 00:21:11.581 16:02:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:11.581 16:02:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:11.581 16:02:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:11.581 16:02:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=3801846 00:21:11.581 16:02:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 3801846 00:21:11.581 16:02:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:21:11.581 16:02:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 3801846 ']' 00:21:11.581 16:02:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:11.581 16:02:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:11.581 16:02:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:11.581 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:11.581 16:02:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:11.581 16:02:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:11.581 [2024-07-15 16:02:40.379248] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 
00:21:11.581 [2024-07-15 16:02:40.379293] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:11.581 EAL: No free 2048 kB hugepages reported on node 1 00:21:11.581 [2024-07-15 16:02:40.435490] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:11.581 [2024-07-15 16:02:40.502880] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:11.581 [2024-07-15 16:02:40.502921] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:11.581 [2024-07-15 16:02:40.502928] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:11.581 [2024-07-15 16:02:40.502934] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:11.581 [2024-07-15 16:02:40.502939] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:11.581 [2024-07-15 16:02:40.502972] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:12.516 16:02:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:12.516 16:02:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:21:12.516 16:02:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:12.516 16:02:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:12.516 16:02:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:12.516 16:02:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:12.516 16:02:41 nvmf_tcp.nvmf_tls -- target/tls.sh@219 -- # setup_nvmf_tgt /tmp/tmp.ksN3WaWpt5 00:21:12.516 16:02:41 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.ksN3WaWpt5 00:21:12.516 16:02:41 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:21:12.516 [2024-07-15 16:02:41.374823] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:12.516 16:02:41 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:21:12.774 16:02:41 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:21:13.032 [2024-07-15 16:02:41.711700] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:13.032 [2024-07-15 16:02:41.711922] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:13.032 16:02:41 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:21:13.032 malloc0 00:21:13.032 16:02:41 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:21:13.290 16:02:42 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 
--psk /tmp/tmp.ksN3WaWpt5 00:21:13.549 [2024-07-15 16:02:42.233344] tcp.c:3693:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:21:13.549 16:02:42 nvmf_tcp.nvmf_tls -- target/tls.sh@220 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:21:13.549 16:02:42 nvmf_tcp.nvmf_tls -- target/tls.sh@222 -- # bdevperf_pid=3802110 00:21:13.549 16:02:42 nvmf_tcp.nvmf_tls -- target/tls.sh@224 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:13.549 16:02:42 nvmf_tcp.nvmf_tls -- target/tls.sh@225 -- # waitforlisten 3802110 /var/tmp/bdevperf.sock 00:21:13.549 16:02:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 3802110 ']' 00:21:13.549 16:02:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:13.549 16:02:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:13.549 16:02:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:13.549 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:13.549 16:02:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:13.549 16:02:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:13.549 [2024-07-15 16:02:42.283288] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 00:21:13.549 [2024-07-15 16:02:42.283332] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3802110 ] 00:21:13.549 EAL: No free 2048 kB hugepages reported on node 1 00:21:13.549 [2024-07-15 16:02:42.335797] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:13.549 [2024-07-15 16:02:42.408215] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:14.483 16:02:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:14.483 16:02:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:21:14.483 16:02:43 nvmf_tcp.nvmf_tls -- target/tls.sh@227 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.ksN3WaWpt5 00:21:14.483 16:02:43 nvmf_tcp.nvmf_tls -- target/tls.sh@228 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:21:14.483 [2024-07-15 16:02:43.412093] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:14.740 nvme0n1 00:21:14.740 16:02:43 nvmf_tcp.nvmf_tls -- target/tls.sh@232 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:14.740 Running I/O for 1 seconds... 
00:21:15.674 00:21:15.674 Latency(us) 00:21:15.674 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:15.674 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:21:15.674 Verification LBA range: start 0x0 length 0x2000 00:21:15.674 nvme0n1 : 1.01 5177.34 20.22 0.00 0.00 24535.10 4843.97 59267.34 00:21:15.674 =================================================================================================================== 00:21:15.674 Total : 5177.34 20.22 0.00 0.00 24535.10 4843.97 59267.34 00:21:15.674 0 00:21:15.932 16:02:44 nvmf_tcp.nvmf_tls -- target/tls.sh@234 -- # killprocess 3802110 00:21:15.932 16:02:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 3802110 ']' 00:21:15.932 16:02:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 3802110 00:21:15.932 16:02:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:21:15.932 16:02:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:15.932 16:02:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3802110 00:21:15.932 16:02:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:21:15.932 16:02:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:21:15.932 16:02:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3802110' 00:21:15.932 killing process with pid 3802110 00:21:15.932 16:02:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 3802110 00:21:15.932 Received shutdown signal, test time was about 1.000000 seconds 00:21:15.932 00:21:15.932 Latency(us) 00:21:15.932 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:15.932 =================================================================================================================== 00:21:15.932 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:15.932 16:02:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 3802110 00:21:15.932 16:02:44 nvmf_tcp.nvmf_tls -- target/tls.sh@235 -- # killprocess 3801846 00:21:15.932 16:02:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 3801846 ']' 00:21:15.932 16:02:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 3801846 00:21:15.932 16:02:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:21:15.932 16:02:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:15.932 16:02:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3801846 00:21:16.191 16:02:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:21:16.191 16:02:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:21:16.191 16:02:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3801846' 00:21:16.191 killing process with pid 3801846 00:21:16.191 16:02:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 3801846 00:21:16.191 [2024-07-15 16:02:44.894346] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:21:16.191 16:02:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 3801846 00:21:16.192 16:02:45 nvmf_tcp.nvmf_tls -- target/tls.sh@240 -- # nvmfappstart 00:21:16.192 16:02:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:16.192 
16:02:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:16.192 16:02:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:16.192 16:02:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=3802588 00:21:16.192 16:02:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 3802588 00:21:16.192 16:02:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:21:16.192 16:02:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 3802588 ']' 00:21:16.192 16:02:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:16.192 16:02:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:16.192 16:02:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:16.192 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:16.192 16:02:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:16.192 16:02:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:16.450 [2024-07-15 16:02:45.140001] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 00:21:16.451 [2024-07-15 16:02:45.140047] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:16.451 EAL: No free 2048 kB hugepages reported on node 1 00:21:16.451 [2024-07-15 16:02:45.195702] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:16.451 [2024-07-15 16:02:45.274504] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:16.451 [2024-07-15 16:02:45.274539] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:16.451 [2024-07-15 16:02:45.274545] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:16.451 [2024-07-15 16:02:45.274552] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:16.451 [2024-07-15 16:02:45.274557] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:21:16.451 [2024-07-15 16:02:45.274590] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:17.016 16:02:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:17.016 16:02:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:21:17.016 16:02:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:17.016 16:02:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:17.016 16:02:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:17.275 16:02:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:17.275 16:02:45 nvmf_tcp.nvmf_tls -- target/tls.sh@241 -- # rpc_cmd 00:21:17.275 16:02:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:17.275 16:02:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:17.275 [2024-07-15 16:02:45.970650] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:17.275 malloc0 00:21:17.275 [2024-07-15 16:02:45.998900] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:17.275 [2024-07-15 16:02:45.999099] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:17.275 16:02:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:17.275 16:02:46 nvmf_tcp.nvmf_tls -- target/tls.sh@254 -- # bdevperf_pid=3802829 00:21:17.275 16:02:46 nvmf_tcp.nvmf_tls -- target/tls.sh@252 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:21:17.275 16:02:46 nvmf_tcp.nvmf_tls -- target/tls.sh@256 -- # waitforlisten 3802829 /var/tmp/bdevperf.sock 00:21:17.275 16:02:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 3802829 ']' 00:21:17.275 16:02:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:17.275 16:02:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:17.275 16:02:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:17.275 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:17.275 16:02:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:17.275 16:02:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:17.275 [2024-07-15 16:02:46.068991] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 
00:21:17.275 [2024-07-15 16:02:46.069029] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3802829 ] 00:21:17.275 EAL: No free 2048 kB hugepages reported on node 1 00:21:17.275 [2024-07-15 16:02:46.122503] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:17.275 [2024-07-15 16:02:46.195861] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:18.210 16:02:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:18.210 16:02:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:21:18.210 16:02:46 nvmf_tcp.nvmf_tls -- target/tls.sh@257 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.ksN3WaWpt5 00:21:18.210 16:02:47 nvmf_tcp.nvmf_tls -- target/tls.sh@258 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:21:18.467 [2024-07-15 16:02:47.199909] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:18.467 nvme0n1 00:21:18.467 16:02:47 nvmf_tcp.nvmf_tls -- target/tls.sh@262 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:18.467 Running I/O for 1 seconds... 00:21:19.838 00:21:19.838 Latency(us) 00:21:19.838 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:19.838 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:21:19.838 Verification LBA range: start 0x0 length 0x2000 00:21:19.839 nvme0n1 : 1.02 4804.20 18.77 0.00 0.00 26424.47 4786.98 69753.10 00:21:19.839 =================================================================================================================== 00:21:19.839 Total : 4804.20 18.77 0.00 0.00 26424.47 4786.98 69753.10 00:21:19.839 0 00:21:19.839 16:02:48 nvmf_tcp.nvmf_tls -- target/tls.sh@265 -- # rpc_cmd save_config 00:21:19.839 16:02:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:19.839 16:02:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:19.839 16:02:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:19.839 16:02:48 nvmf_tcp.nvmf_tls -- target/tls.sh@265 -- # tgtcfg='{ 00:21:19.839 "subsystems": [ 00:21:19.839 { 00:21:19.839 "subsystem": "keyring", 00:21:19.839 "config": [ 00:21:19.839 { 00:21:19.839 "method": "keyring_file_add_key", 00:21:19.839 "params": { 00:21:19.839 "name": "key0", 00:21:19.839 "path": "/tmp/tmp.ksN3WaWpt5" 00:21:19.839 } 00:21:19.839 } 00:21:19.839 ] 00:21:19.839 }, 00:21:19.839 { 00:21:19.839 "subsystem": "iobuf", 00:21:19.839 "config": [ 00:21:19.839 { 00:21:19.839 "method": "iobuf_set_options", 00:21:19.839 "params": { 00:21:19.839 "small_pool_count": 8192, 00:21:19.839 "large_pool_count": 1024, 00:21:19.839 "small_bufsize": 8192, 00:21:19.839 "large_bufsize": 135168 00:21:19.839 } 00:21:19.839 } 00:21:19.839 ] 00:21:19.839 }, 00:21:19.839 { 00:21:19.839 "subsystem": "sock", 00:21:19.839 "config": [ 00:21:19.839 { 00:21:19.839 "method": "sock_set_default_impl", 00:21:19.839 "params": { 00:21:19.839 "impl_name": "posix" 00:21:19.839 } 
00:21:19.839 }, 00:21:19.839 { 00:21:19.839 "method": "sock_impl_set_options", 00:21:19.839 "params": { 00:21:19.839 "impl_name": "ssl", 00:21:19.839 "recv_buf_size": 4096, 00:21:19.839 "send_buf_size": 4096, 00:21:19.839 "enable_recv_pipe": true, 00:21:19.839 "enable_quickack": false, 00:21:19.839 "enable_placement_id": 0, 00:21:19.839 "enable_zerocopy_send_server": true, 00:21:19.839 "enable_zerocopy_send_client": false, 00:21:19.839 "zerocopy_threshold": 0, 00:21:19.839 "tls_version": 0, 00:21:19.839 "enable_ktls": false 00:21:19.839 } 00:21:19.839 }, 00:21:19.839 { 00:21:19.839 "method": "sock_impl_set_options", 00:21:19.839 "params": { 00:21:19.839 "impl_name": "posix", 00:21:19.839 "recv_buf_size": 2097152, 00:21:19.839 "send_buf_size": 2097152, 00:21:19.839 "enable_recv_pipe": true, 00:21:19.839 "enable_quickack": false, 00:21:19.839 "enable_placement_id": 0, 00:21:19.839 "enable_zerocopy_send_server": true, 00:21:19.839 "enable_zerocopy_send_client": false, 00:21:19.839 "zerocopy_threshold": 0, 00:21:19.839 "tls_version": 0, 00:21:19.839 "enable_ktls": false 00:21:19.839 } 00:21:19.839 } 00:21:19.839 ] 00:21:19.839 }, 00:21:19.839 { 00:21:19.839 "subsystem": "vmd", 00:21:19.839 "config": [] 00:21:19.839 }, 00:21:19.839 { 00:21:19.839 "subsystem": "accel", 00:21:19.839 "config": [ 00:21:19.839 { 00:21:19.839 "method": "accel_set_options", 00:21:19.839 "params": { 00:21:19.839 "small_cache_size": 128, 00:21:19.839 "large_cache_size": 16, 00:21:19.839 "task_count": 2048, 00:21:19.839 "sequence_count": 2048, 00:21:19.839 "buf_count": 2048 00:21:19.839 } 00:21:19.839 } 00:21:19.839 ] 00:21:19.839 }, 00:21:19.839 { 00:21:19.839 "subsystem": "bdev", 00:21:19.839 "config": [ 00:21:19.839 { 00:21:19.839 "method": "bdev_set_options", 00:21:19.839 "params": { 00:21:19.839 "bdev_io_pool_size": 65535, 00:21:19.839 "bdev_io_cache_size": 256, 00:21:19.839 "bdev_auto_examine": true, 00:21:19.839 "iobuf_small_cache_size": 128, 00:21:19.839 "iobuf_large_cache_size": 16 00:21:19.839 } 00:21:19.839 }, 00:21:19.839 { 00:21:19.839 "method": "bdev_raid_set_options", 00:21:19.839 "params": { 00:21:19.839 "process_window_size_kb": 1024 00:21:19.839 } 00:21:19.839 }, 00:21:19.839 { 00:21:19.839 "method": "bdev_iscsi_set_options", 00:21:19.839 "params": { 00:21:19.839 "timeout_sec": 30 00:21:19.839 } 00:21:19.839 }, 00:21:19.839 { 00:21:19.839 "method": "bdev_nvme_set_options", 00:21:19.839 "params": { 00:21:19.839 "action_on_timeout": "none", 00:21:19.839 "timeout_us": 0, 00:21:19.839 "timeout_admin_us": 0, 00:21:19.839 "keep_alive_timeout_ms": 10000, 00:21:19.839 "arbitration_burst": 0, 00:21:19.839 "low_priority_weight": 0, 00:21:19.839 "medium_priority_weight": 0, 00:21:19.839 "high_priority_weight": 0, 00:21:19.839 "nvme_adminq_poll_period_us": 10000, 00:21:19.839 "nvme_ioq_poll_period_us": 0, 00:21:19.839 "io_queue_requests": 0, 00:21:19.839 "delay_cmd_submit": true, 00:21:19.839 "transport_retry_count": 4, 00:21:19.839 "bdev_retry_count": 3, 00:21:19.839 "transport_ack_timeout": 0, 00:21:19.839 "ctrlr_loss_timeout_sec": 0, 00:21:19.839 "reconnect_delay_sec": 0, 00:21:19.839 "fast_io_fail_timeout_sec": 0, 00:21:19.839 "disable_auto_failback": false, 00:21:19.839 "generate_uuids": false, 00:21:19.839 "transport_tos": 0, 00:21:19.839 "nvme_error_stat": false, 00:21:19.839 "rdma_srq_size": 0, 00:21:19.839 "io_path_stat": false, 00:21:19.839 "allow_accel_sequence": false, 00:21:19.839 "rdma_max_cq_size": 0, 00:21:19.839 "rdma_cm_event_timeout_ms": 0, 00:21:19.839 "dhchap_digests": [ 00:21:19.839 "sha256", 
00:21:19.839 "sha384", 00:21:19.839 "sha512" 00:21:19.839 ], 00:21:19.839 "dhchap_dhgroups": [ 00:21:19.839 "null", 00:21:19.839 "ffdhe2048", 00:21:19.839 "ffdhe3072", 00:21:19.839 "ffdhe4096", 00:21:19.839 "ffdhe6144", 00:21:19.839 "ffdhe8192" 00:21:19.839 ] 00:21:19.839 } 00:21:19.839 }, 00:21:19.839 { 00:21:19.839 "method": "bdev_nvme_set_hotplug", 00:21:19.839 "params": { 00:21:19.839 "period_us": 100000, 00:21:19.839 "enable": false 00:21:19.839 } 00:21:19.839 }, 00:21:19.839 { 00:21:19.839 "method": "bdev_malloc_create", 00:21:19.839 "params": { 00:21:19.839 "name": "malloc0", 00:21:19.839 "num_blocks": 8192, 00:21:19.839 "block_size": 4096, 00:21:19.839 "physical_block_size": 4096, 00:21:19.839 "uuid": "f19adb39-073a-4a65-99c4-b0ef7bf86cb6", 00:21:19.839 "optimal_io_boundary": 0 00:21:19.839 } 00:21:19.839 }, 00:21:19.839 { 00:21:19.839 "method": "bdev_wait_for_examine" 00:21:19.839 } 00:21:19.839 ] 00:21:19.839 }, 00:21:19.839 { 00:21:19.839 "subsystem": "nbd", 00:21:19.839 "config": [] 00:21:19.839 }, 00:21:19.839 { 00:21:19.839 "subsystem": "scheduler", 00:21:19.839 "config": [ 00:21:19.839 { 00:21:19.839 "method": "framework_set_scheduler", 00:21:19.839 "params": { 00:21:19.839 "name": "static" 00:21:19.839 } 00:21:19.839 } 00:21:19.839 ] 00:21:19.839 }, 00:21:19.839 { 00:21:19.839 "subsystem": "nvmf", 00:21:19.839 "config": [ 00:21:19.839 { 00:21:19.839 "method": "nvmf_set_config", 00:21:19.839 "params": { 00:21:19.839 "discovery_filter": "match_any", 00:21:19.839 "admin_cmd_passthru": { 00:21:19.839 "identify_ctrlr": false 00:21:19.839 } 00:21:19.839 } 00:21:19.839 }, 00:21:19.839 { 00:21:19.839 "method": "nvmf_set_max_subsystems", 00:21:19.839 "params": { 00:21:19.839 "max_subsystems": 1024 00:21:19.839 } 00:21:19.839 }, 00:21:19.839 { 00:21:19.839 "method": "nvmf_set_crdt", 00:21:19.839 "params": { 00:21:19.839 "crdt1": 0, 00:21:19.839 "crdt2": 0, 00:21:19.839 "crdt3": 0 00:21:19.839 } 00:21:19.839 }, 00:21:19.839 { 00:21:19.839 "method": "nvmf_create_transport", 00:21:19.839 "params": { 00:21:19.839 "trtype": "TCP", 00:21:19.839 "max_queue_depth": 128, 00:21:19.839 "max_io_qpairs_per_ctrlr": 127, 00:21:19.839 "in_capsule_data_size": 4096, 00:21:19.839 "max_io_size": 131072, 00:21:19.839 "io_unit_size": 131072, 00:21:19.839 "max_aq_depth": 128, 00:21:19.839 "num_shared_buffers": 511, 00:21:19.839 "buf_cache_size": 4294967295, 00:21:19.839 "dif_insert_or_strip": false, 00:21:19.839 "zcopy": false, 00:21:19.839 "c2h_success": false, 00:21:19.839 "sock_priority": 0, 00:21:19.839 "abort_timeout_sec": 1, 00:21:19.839 "ack_timeout": 0, 00:21:19.839 "data_wr_pool_size": 0 00:21:19.839 } 00:21:19.839 }, 00:21:19.839 { 00:21:19.839 "method": "nvmf_create_subsystem", 00:21:19.839 "params": { 00:21:19.839 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:19.839 "allow_any_host": false, 00:21:19.839 "serial_number": "00000000000000000000", 00:21:19.839 "model_number": "SPDK bdev Controller", 00:21:19.839 "max_namespaces": 32, 00:21:19.839 "min_cntlid": 1, 00:21:19.839 "max_cntlid": 65519, 00:21:19.839 "ana_reporting": false 00:21:19.839 } 00:21:19.839 }, 00:21:19.839 { 00:21:19.839 "method": "nvmf_subsystem_add_host", 00:21:19.839 "params": { 00:21:19.839 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:19.839 "host": "nqn.2016-06.io.spdk:host1", 00:21:19.839 "psk": "key0" 00:21:19.839 } 00:21:19.839 }, 00:21:19.839 { 00:21:19.839 "method": "nvmf_subsystem_add_ns", 00:21:19.839 "params": { 00:21:19.839 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:19.839 "namespace": { 00:21:19.839 "nsid": 1, 
00:21:19.839 "bdev_name": "malloc0", 00:21:19.839 "nguid": "F19ADB39073A4A6599C4B0EF7BF86CB6", 00:21:19.839 "uuid": "f19adb39-073a-4a65-99c4-b0ef7bf86cb6", 00:21:19.839 "no_auto_visible": false 00:21:19.839 } 00:21:19.839 } 00:21:19.839 }, 00:21:19.839 { 00:21:19.839 "method": "nvmf_subsystem_add_listener", 00:21:19.839 "params": { 00:21:19.839 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:19.839 "listen_address": { 00:21:19.839 "trtype": "TCP", 00:21:19.839 "adrfam": "IPv4", 00:21:19.839 "traddr": "10.0.0.2", 00:21:19.839 "trsvcid": "4420" 00:21:19.839 }, 00:21:19.839 "secure_channel": false, 00:21:19.839 "sock_impl": "ssl" 00:21:19.839 } 00:21:19.839 } 00:21:19.839 ] 00:21:19.839 } 00:21:19.839 ] 00:21:19.839 }' 00:21:19.839 16:02:48 nvmf_tcp.nvmf_tls -- target/tls.sh@266 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:21:20.097 16:02:48 nvmf_tcp.nvmf_tls -- target/tls.sh@266 -- # bperfcfg='{ 00:21:20.097 "subsystems": [ 00:21:20.097 { 00:21:20.097 "subsystem": "keyring", 00:21:20.097 "config": [ 00:21:20.097 { 00:21:20.097 "method": "keyring_file_add_key", 00:21:20.097 "params": { 00:21:20.097 "name": "key0", 00:21:20.097 "path": "/tmp/tmp.ksN3WaWpt5" 00:21:20.097 } 00:21:20.097 } 00:21:20.097 ] 00:21:20.097 }, 00:21:20.097 { 00:21:20.097 "subsystem": "iobuf", 00:21:20.097 "config": [ 00:21:20.097 { 00:21:20.097 "method": "iobuf_set_options", 00:21:20.097 "params": { 00:21:20.097 "small_pool_count": 8192, 00:21:20.097 "large_pool_count": 1024, 00:21:20.097 "small_bufsize": 8192, 00:21:20.097 "large_bufsize": 135168 00:21:20.097 } 00:21:20.097 } 00:21:20.097 ] 00:21:20.097 }, 00:21:20.097 { 00:21:20.097 "subsystem": "sock", 00:21:20.097 "config": [ 00:21:20.097 { 00:21:20.097 "method": "sock_set_default_impl", 00:21:20.097 "params": { 00:21:20.097 "impl_name": "posix" 00:21:20.097 } 00:21:20.097 }, 00:21:20.097 { 00:21:20.097 "method": "sock_impl_set_options", 00:21:20.097 "params": { 00:21:20.097 "impl_name": "ssl", 00:21:20.097 "recv_buf_size": 4096, 00:21:20.097 "send_buf_size": 4096, 00:21:20.097 "enable_recv_pipe": true, 00:21:20.097 "enable_quickack": false, 00:21:20.097 "enable_placement_id": 0, 00:21:20.097 "enable_zerocopy_send_server": true, 00:21:20.097 "enable_zerocopy_send_client": false, 00:21:20.097 "zerocopy_threshold": 0, 00:21:20.097 "tls_version": 0, 00:21:20.097 "enable_ktls": false 00:21:20.097 } 00:21:20.097 }, 00:21:20.097 { 00:21:20.097 "method": "sock_impl_set_options", 00:21:20.097 "params": { 00:21:20.097 "impl_name": "posix", 00:21:20.097 "recv_buf_size": 2097152, 00:21:20.097 "send_buf_size": 2097152, 00:21:20.097 "enable_recv_pipe": true, 00:21:20.097 "enable_quickack": false, 00:21:20.097 "enable_placement_id": 0, 00:21:20.097 "enable_zerocopy_send_server": true, 00:21:20.097 "enable_zerocopy_send_client": false, 00:21:20.097 "zerocopy_threshold": 0, 00:21:20.097 "tls_version": 0, 00:21:20.097 "enable_ktls": false 00:21:20.097 } 00:21:20.097 } 00:21:20.097 ] 00:21:20.097 }, 00:21:20.097 { 00:21:20.097 "subsystem": "vmd", 00:21:20.097 "config": [] 00:21:20.097 }, 00:21:20.097 { 00:21:20.097 "subsystem": "accel", 00:21:20.097 "config": [ 00:21:20.097 { 00:21:20.097 "method": "accel_set_options", 00:21:20.097 "params": { 00:21:20.097 "small_cache_size": 128, 00:21:20.097 "large_cache_size": 16, 00:21:20.097 "task_count": 2048, 00:21:20.097 "sequence_count": 2048, 00:21:20.097 "buf_count": 2048 00:21:20.097 } 00:21:20.097 } 00:21:20.097 ] 00:21:20.097 }, 00:21:20.097 { 00:21:20.097 "subsystem": "bdev", 
00:21:20.097 "config": [ 00:21:20.097 { 00:21:20.097 "method": "bdev_set_options", 00:21:20.097 "params": { 00:21:20.097 "bdev_io_pool_size": 65535, 00:21:20.097 "bdev_io_cache_size": 256, 00:21:20.097 "bdev_auto_examine": true, 00:21:20.097 "iobuf_small_cache_size": 128, 00:21:20.097 "iobuf_large_cache_size": 16 00:21:20.097 } 00:21:20.097 }, 00:21:20.097 { 00:21:20.097 "method": "bdev_raid_set_options", 00:21:20.097 "params": { 00:21:20.097 "process_window_size_kb": 1024 00:21:20.097 } 00:21:20.097 }, 00:21:20.097 { 00:21:20.097 "method": "bdev_iscsi_set_options", 00:21:20.097 "params": { 00:21:20.097 "timeout_sec": 30 00:21:20.097 } 00:21:20.097 }, 00:21:20.097 { 00:21:20.097 "method": "bdev_nvme_set_options", 00:21:20.097 "params": { 00:21:20.097 "action_on_timeout": "none", 00:21:20.097 "timeout_us": 0, 00:21:20.097 "timeout_admin_us": 0, 00:21:20.097 "keep_alive_timeout_ms": 10000, 00:21:20.097 "arbitration_burst": 0, 00:21:20.097 "low_priority_weight": 0, 00:21:20.097 "medium_priority_weight": 0, 00:21:20.097 "high_priority_weight": 0, 00:21:20.097 "nvme_adminq_poll_period_us": 10000, 00:21:20.097 "nvme_ioq_poll_period_us": 0, 00:21:20.097 "io_queue_requests": 512, 00:21:20.097 "delay_cmd_submit": true, 00:21:20.097 "transport_retry_count": 4, 00:21:20.097 "bdev_retry_count": 3, 00:21:20.097 "transport_ack_timeout": 0, 00:21:20.097 "ctrlr_loss_timeout_sec": 0, 00:21:20.097 "reconnect_delay_sec": 0, 00:21:20.097 "fast_io_fail_timeout_sec": 0, 00:21:20.097 "disable_auto_failback": false, 00:21:20.097 "generate_uuids": false, 00:21:20.097 "transport_tos": 0, 00:21:20.097 "nvme_error_stat": false, 00:21:20.097 "rdma_srq_size": 0, 00:21:20.097 "io_path_stat": false, 00:21:20.097 "allow_accel_sequence": false, 00:21:20.097 "rdma_max_cq_size": 0, 00:21:20.097 "rdma_cm_event_timeout_ms": 0, 00:21:20.097 "dhchap_digests": [ 00:21:20.097 "sha256", 00:21:20.097 "sha384", 00:21:20.097 "sha512" 00:21:20.097 ], 00:21:20.097 "dhchap_dhgroups": [ 00:21:20.097 "null", 00:21:20.097 "ffdhe2048", 00:21:20.097 "ffdhe3072", 00:21:20.097 "ffdhe4096", 00:21:20.097 "ffdhe6144", 00:21:20.097 "ffdhe8192" 00:21:20.097 ] 00:21:20.097 } 00:21:20.097 }, 00:21:20.097 { 00:21:20.097 "method": "bdev_nvme_attach_controller", 00:21:20.097 "params": { 00:21:20.097 "name": "nvme0", 00:21:20.097 "trtype": "TCP", 00:21:20.097 "adrfam": "IPv4", 00:21:20.097 "traddr": "10.0.0.2", 00:21:20.097 "trsvcid": "4420", 00:21:20.097 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:20.097 "prchk_reftag": false, 00:21:20.097 "prchk_guard": false, 00:21:20.097 "ctrlr_loss_timeout_sec": 0, 00:21:20.097 "reconnect_delay_sec": 0, 00:21:20.097 "fast_io_fail_timeout_sec": 0, 00:21:20.097 "psk": "key0", 00:21:20.097 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:20.097 "hdgst": false, 00:21:20.097 "ddgst": false 00:21:20.097 } 00:21:20.097 }, 00:21:20.097 { 00:21:20.097 "method": "bdev_nvme_set_hotplug", 00:21:20.097 "params": { 00:21:20.097 "period_us": 100000, 00:21:20.097 "enable": false 00:21:20.097 } 00:21:20.097 }, 00:21:20.097 { 00:21:20.097 "method": "bdev_enable_histogram", 00:21:20.097 "params": { 00:21:20.097 "name": "nvme0n1", 00:21:20.097 "enable": true 00:21:20.097 } 00:21:20.097 }, 00:21:20.098 { 00:21:20.098 "method": "bdev_wait_for_examine" 00:21:20.098 } 00:21:20.098 ] 00:21:20.098 }, 00:21:20.098 { 00:21:20.098 "subsystem": "nbd", 00:21:20.098 "config": [] 00:21:20.098 } 00:21:20.098 ] 00:21:20.098 }' 00:21:20.098 16:02:48 nvmf_tcp.nvmf_tls -- target/tls.sh@268 -- # killprocess 3802829 00:21:20.098 16:02:48 nvmf_tcp.nvmf_tls 
-- common/autotest_common.sh@948 -- # '[' -z 3802829 ']' 00:21:20.098 16:02:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 3802829 00:21:20.098 16:02:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:21:20.098 16:02:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:20.098 16:02:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3802829 00:21:20.098 16:02:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:21:20.098 16:02:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:21:20.098 16:02:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3802829' 00:21:20.098 killing process with pid 3802829 00:21:20.098 16:02:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 3802829 00:21:20.098 Received shutdown signal, test time was about 1.000000 seconds 00:21:20.098 00:21:20.098 Latency(us) 00:21:20.098 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:20.098 =================================================================================================================== 00:21:20.098 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:20.098 16:02:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 3802829 00:21:20.098 16:02:49 nvmf_tcp.nvmf_tls -- target/tls.sh@269 -- # killprocess 3802588 00:21:20.098 16:02:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 3802588 ']' 00:21:20.098 16:02:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 3802588 00:21:20.098 16:02:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:21:20.098 16:02:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:20.098 16:02:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3802588 00:21:20.356 16:02:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:21:20.356 16:02:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:21:20.356 16:02:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3802588' 00:21:20.356 killing process with pid 3802588 00:21:20.356 16:02:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 3802588 00:21:20.356 16:02:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 3802588 00:21:20.356 16:02:49 nvmf_tcp.nvmf_tls -- target/tls.sh@271 -- # nvmfappstart -c /dev/fd/62 00:21:20.356 16:02:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:20.356 16:02:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:20.356 16:02:49 nvmf_tcp.nvmf_tls -- target/tls.sh@271 -- # echo '{ 00:21:20.356 "subsystems": [ 00:21:20.356 { 00:21:20.356 "subsystem": "keyring", 00:21:20.356 "config": [ 00:21:20.356 { 00:21:20.356 "method": "keyring_file_add_key", 00:21:20.356 "params": { 00:21:20.356 "name": "key0", 00:21:20.356 "path": "/tmp/tmp.ksN3WaWpt5" 00:21:20.356 } 00:21:20.356 } 00:21:20.356 ] 00:21:20.356 }, 00:21:20.356 { 00:21:20.356 "subsystem": "iobuf", 00:21:20.356 "config": [ 00:21:20.356 { 00:21:20.356 "method": "iobuf_set_options", 00:21:20.356 "params": { 00:21:20.356 "small_pool_count": 8192, 00:21:20.356 "large_pool_count": 1024, 00:21:20.356 "small_bufsize": 8192, 00:21:20.356 "large_bufsize": 135168 00:21:20.356 } 00:21:20.356 } 00:21:20.356 ] 00:21:20.356 }, 
00:21:20.356 { 00:21:20.356 "subsystem": "sock", 00:21:20.356 "config": [ 00:21:20.356 { 00:21:20.356 "method": "sock_set_default_impl", 00:21:20.356 "params": { 00:21:20.356 "impl_name": "posix" 00:21:20.356 } 00:21:20.356 }, 00:21:20.356 { 00:21:20.356 "method": "sock_impl_set_options", 00:21:20.356 "params": { 00:21:20.356 "impl_name": "ssl", 00:21:20.356 "recv_buf_size": 4096, 00:21:20.356 "send_buf_size": 4096, 00:21:20.356 "enable_recv_pipe": true, 00:21:20.356 "enable_quickack": false, 00:21:20.356 "enable_placement_id": 0, 00:21:20.356 "enable_zerocopy_send_server": true, 00:21:20.356 "enable_zerocopy_send_client": false, 00:21:20.356 "zerocopy_threshold": 0, 00:21:20.356 "tls_version": 0, 00:21:20.356 "enable_ktls": false 00:21:20.356 } 00:21:20.356 }, 00:21:20.356 { 00:21:20.356 "method": "sock_impl_set_options", 00:21:20.356 "params": { 00:21:20.356 "impl_name": "posix", 00:21:20.356 "recv_buf_size": 2097152, 00:21:20.356 "send_buf_size": 2097152, 00:21:20.356 "enable_recv_pipe": true, 00:21:20.356 "enable_quickack": false, 00:21:20.356 "enable_placement_id": 0, 00:21:20.356 "enable_zerocopy_send_server": true, 00:21:20.356 "enable_zerocopy_send_client": false, 00:21:20.356 "zerocopy_threshold": 0, 00:21:20.356 "tls_version": 0, 00:21:20.356 "enable_ktls": false 00:21:20.356 } 00:21:20.356 } 00:21:20.356 ] 00:21:20.356 }, 00:21:20.356 { 00:21:20.356 "subsystem": "vmd", 00:21:20.356 "config": [] 00:21:20.356 }, 00:21:20.356 { 00:21:20.356 "subsystem": "accel", 00:21:20.356 "config": [ 00:21:20.356 { 00:21:20.356 "method": "accel_set_options", 00:21:20.356 "params": { 00:21:20.356 "small_cache_size": 128, 00:21:20.356 "large_cache_size": 16, 00:21:20.356 "task_count": 2048, 00:21:20.356 "sequence_count": 2048, 00:21:20.356 "buf_count": 2048 00:21:20.356 } 00:21:20.356 } 00:21:20.356 ] 00:21:20.356 }, 00:21:20.356 { 00:21:20.356 "subsystem": "bdev", 00:21:20.356 "config": [ 00:21:20.356 { 00:21:20.356 "method": "bdev_set_options", 00:21:20.356 "params": { 00:21:20.356 "bdev_io_pool_size": 65535, 00:21:20.356 "bdev_io_cache_size": 256, 00:21:20.356 "bdev_auto_examine": true, 00:21:20.356 "iobuf_small_cache_size": 128, 00:21:20.356 "iobuf_large_cache_size": 16 00:21:20.356 } 00:21:20.356 }, 00:21:20.356 { 00:21:20.356 "method": "bdev_raid_set_options", 00:21:20.356 "params": { 00:21:20.356 "process_window_size_kb": 1024 00:21:20.356 } 00:21:20.356 }, 00:21:20.356 { 00:21:20.356 "method": "bdev_iscsi_set_options", 00:21:20.356 "params": { 00:21:20.356 "timeout_sec": 30 00:21:20.356 } 00:21:20.356 }, 00:21:20.356 { 00:21:20.356 "method": "bdev_nvme_set_options", 00:21:20.356 "params": { 00:21:20.356 "action_on_timeout": "none", 00:21:20.356 "timeout_us": 0, 00:21:20.356 "timeout_admin_us": 0, 00:21:20.356 "keep_alive_timeout_ms": 10000, 00:21:20.356 "arbitration_burst": 0, 00:21:20.356 "low_priority_weight": 0, 00:21:20.356 "medium_priority_weight": 0, 00:21:20.356 "high_priority_weight": 0, 00:21:20.356 "nvme_adminq_poll_period_us": 10000, 00:21:20.356 "nvme_ioq_poll_period_us": 0, 00:21:20.356 "io_queue_requests": 0, 00:21:20.356 "delay_cmd_submit": true, 00:21:20.356 "transport_retry_count": 4, 00:21:20.356 "bdev_retry_count": 3, 00:21:20.356 "transport_ack_timeout": 0, 00:21:20.356 "ctrlr_loss_timeout_sec": 0, 00:21:20.356 "reconnect_delay_sec": 0, 00:21:20.356 "fast_io_fail_timeout_sec": 0, 00:21:20.356 "disable_auto_failback": false, 00:21:20.356 "generate_uuids": false, 00:21:20.356 "transport_tos": 0, 00:21:20.356 "nvme_error_stat": false, 00:21:20.356 "rdma_srq_size": 0, 
00:21:20.356 "io_path_stat": false, 00:21:20.356 "allow_accel_sequence": false, 00:21:20.356 "rdma_max_cq_size": 0, 00:21:20.356 "rdma_cm_event_timeout_ms": 0, 00:21:20.356 "dhchap_digests": [ 00:21:20.356 "sha256", 00:21:20.356 "sha384", 00:21:20.356 "sha512" 00:21:20.356 ], 00:21:20.356 "dhchap_dhgroups": [ 00:21:20.356 "null", 00:21:20.356 "ffdhe2048", 00:21:20.356 "ffdhe3072", 00:21:20.356 "ffdhe4096", 00:21:20.356 "ffdhe6144", 00:21:20.356 "ffdhe8192" 00:21:20.356 ] 00:21:20.356 } 00:21:20.356 }, 00:21:20.356 { 00:21:20.356 "method": "bdev_nvme_set_hotplug", 00:21:20.356 "params": { 00:21:20.356 "period_us": 100000, 00:21:20.356 "enable": false 00:21:20.356 } 00:21:20.356 }, 00:21:20.356 { 00:21:20.356 "method": "bdev_malloc_create", 00:21:20.356 "params": { 00:21:20.356 "name": "malloc0", 00:21:20.356 "num_blocks": 8192, 00:21:20.356 "block_size": 4096, 00:21:20.356 "physical_block_size": 4096, 00:21:20.356 "uuid": "f19adb39-073a-4a65-99c4-b0ef7bf86cb6", 00:21:20.356 "optimal_io_boundary": 0 00:21:20.356 } 00:21:20.356 }, 00:21:20.356 { 00:21:20.356 "method": "bdev_wait_for_examine" 00:21:20.356 } 00:21:20.356 ] 00:21:20.356 }, 00:21:20.356 { 00:21:20.356 "subsystem": "nbd", 00:21:20.356 "config": [] 00:21:20.356 }, 00:21:20.356 { 00:21:20.356 "subsystem": "scheduler", 00:21:20.356 "config": [ 00:21:20.356 { 00:21:20.356 "method": "framework_set_scheduler", 00:21:20.356 "params": { 00:21:20.356 "name": "static" 00:21:20.357 } 00:21:20.357 } 00:21:20.357 ] 00:21:20.357 }, 00:21:20.357 { 00:21:20.357 "subsystem": "nvmf", 00:21:20.357 "config": [ 00:21:20.357 { 00:21:20.357 "method": "nvmf_set_config", 00:21:20.357 "params": { 00:21:20.357 "discovery_filter": "match_any", 00:21:20.357 "admin_cmd_passthru": { 00:21:20.357 "identify_ctrlr": false 00:21:20.357 } 00:21:20.357 } 00:21:20.357 }, 00:21:20.357 { 00:21:20.357 "method": "nvmf_set_max_subsystems", 00:21:20.357 "params": { 00:21:20.357 "max_subsystems": 1024 00:21:20.357 } 00:21:20.357 }, 00:21:20.357 { 00:21:20.357 "method": "nvmf_set_crdt", 00:21:20.357 "params": { 00:21:20.357 "crdt1": 0, 00:21:20.357 "crdt2": 0, 00:21:20.357 "crdt3": 0 00:21:20.357 } 00:21:20.357 }, 00:21:20.357 { 00:21:20.357 "method": "nvmf_create_transport", 00:21:20.357 "params": { 00:21:20.357 "trtype": "TCP", 00:21:20.357 "max_queue_depth": 128, 00:21:20.357 "max_io_qpairs_per_ctrlr": 127, 00:21:20.357 "in_capsule_data_size": 4096, 00:21:20.357 "max_io_size": 131072, 00:21:20.357 "io_unit_size": 131072, 00:21:20.357 "max_aq_depth": 128, 00:21:20.357 "num_shared_buffers": 511, 00:21:20.357 "buf_cache_size": 4294967295, 00:21:20.357 "dif_insert_or_strip": false, 00:21:20.357 "zcopy": false, 00:21:20.357 "c2h_success": false, 00:21:20.357 "sock_priority": 0, 00:21:20.357 "abort_timeout_sec": 1, 00:21:20.357 "ack_timeout": 0, 00:21:20.357 "data_wr_pool_size": 0 00:21:20.357 } 00:21:20.357 }, 00:21:20.357 { 00:21:20.357 "method": "nvmf_create_subsystem", 00:21:20.357 "params": { 00:21:20.357 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:20.357 "allow_any_host": false, 00:21:20.357 "serial_number": "00000000000000000000", 00:21:20.357 "model_number": "SPDK bdev Controller", 00:21:20.357 "max_namespaces": 32, 00:21:20.357 "min_cntlid": 1, 00:21:20.357 "max_cntlid": 65519, 00:21:20.357 "ana_reporting": false 00:21:20.357 } 00:21:20.357 }, 00:21:20.357 { 00:21:20.357 "method": "nvmf_subsystem_add_host", 00:21:20.357 "params": { 00:21:20.357 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:20.357 "host": "nqn.2016-06.io.spdk:host1", 00:21:20.357 "psk": "key0" 00:21:20.357 } 
00:21:20.357 }, 00:21:20.357 { 00:21:20.357 "method": "nvmf_subsystem_add_ns", 00:21:20.357 "params": { 00:21:20.357 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:20.357 "namespace": { 00:21:20.357 "nsid": 1, 00:21:20.357 "bdev_name": "malloc0", 00:21:20.357 "nguid": "F19ADB39073A4A6599C4B0EF7BF86CB6", 00:21:20.357 "uuid": "f19adb39-073a-4a65-99c4-b0ef7bf86cb6", 00:21:20.357 "no_auto_visible": false 00:21:20.357 } 00:21:20.357 } 00:21:20.357 }, 00:21:20.357 { 00:21:20.357 "method": "nvmf_subsystem_add_listener", 00:21:20.357 "params": { 00:21:20.357 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:20.357 "listen_address": { 00:21:20.357 "trtype": "TCP", 00:21:20.357 "adrfam": "IPv4", 00:21:20.357 "traddr": "10.0.0.2", 00:21:20.357 "trsvcid": "4420" 00:21:20.357 }, 00:21:20.357 "secure_channel": false, 00:21:20.357 "sock_impl": "ssl" 00:21:20.357 } 00:21:20.357 } 00:21:20.357 ] 00:21:20.357 } 00:21:20.357 ] 00:21:20.357 }' 00:21:20.357 16:02:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:20.357 16:02:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=3803312 00:21:20.357 16:02:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:21:20.357 16:02:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 3803312 00:21:20.357 16:02:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 3803312 ']' 00:21:20.357 16:02:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:20.357 16:02:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:20.357 16:02:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:20.357 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:20.357 16:02:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:20.357 16:02:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:20.614 [2024-07-15 16:02:49.306082] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 00:21:20.614 [2024-07-15 16:02:49.306126] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:20.614 EAL: No free 2048 kB hugepages reported on node 1 00:21:20.614 [2024-07-15 16:02:49.362429] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:20.614 [2024-07-15 16:02:49.441056] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:20.614 [2024-07-15 16:02:49.441091] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:20.615 [2024-07-15 16:02:49.441099] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:20.615 [2024-07-15 16:02:49.441105] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:20.615 [2024-07-15 16:02:49.441110] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:21:20.615 [2024-07-15 16:02:49.441158] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:20.872 [2024-07-15 16:02:49.653109] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:20.872 [2024-07-15 16:02:49.685143] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:20.872 [2024-07-15 16:02:49.696545] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:21.437 16:02:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:21.437 16:02:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:21:21.437 16:02:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:21.437 16:02:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:21.437 16:02:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:21.437 16:02:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:21.437 16:02:50 nvmf_tcp.nvmf_tls -- target/tls.sh@274 -- # bdevperf_pid=3803555 00:21:21.437 16:02:50 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # waitforlisten 3803555 /var/tmp/bdevperf.sock 00:21:21.437 16:02:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 3803555 ']' 00:21:21.437 16:02:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:21.437 16:02:50 nvmf_tcp.nvmf_tls -- target/tls.sh@272 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:21:21.437 16:02:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:21.437 16:02:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:21.437 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:21:21.437 16:02:50 nvmf_tcp.nvmf_tls -- target/tls.sh@272 -- # echo '{ 00:21:21.437 "subsystems": [ 00:21:21.437 { 00:21:21.437 "subsystem": "keyring", 00:21:21.437 "config": [ 00:21:21.437 { 00:21:21.437 "method": "keyring_file_add_key", 00:21:21.437 "params": { 00:21:21.437 "name": "key0", 00:21:21.437 "path": "/tmp/tmp.ksN3WaWpt5" 00:21:21.437 } 00:21:21.437 } 00:21:21.437 ] 00:21:21.437 }, 00:21:21.437 { 00:21:21.437 "subsystem": "iobuf", 00:21:21.437 "config": [ 00:21:21.437 { 00:21:21.437 "method": "iobuf_set_options", 00:21:21.437 "params": { 00:21:21.437 "small_pool_count": 8192, 00:21:21.437 "large_pool_count": 1024, 00:21:21.437 "small_bufsize": 8192, 00:21:21.437 "large_bufsize": 135168 00:21:21.437 } 00:21:21.437 } 00:21:21.437 ] 00:21:21.437 }, 00:21:21.437 { 00:21:21.437 "subsystem": "sock", 00:21:21.437 "config": [ 00:21:21.437 { 00:21:21.437 "method": "sock_set_default_impl", 00:21:21.437 "params": { 00:21:21.437 "impl_name": "posix" 00:21:21.437 } 00:21:21.437 }, 00:21:21.437 { 00:21:21.437 "method": "sock_impl_set_options", 00:21:21.437 "params": { 00:21:21.438 "impl_name": "ssl", 00:21:21.438 "recv_buf_size": 4096, 00:21:21.438 "send_buf_size": 4096, 00:21:21.438 "enable_recv_pipe": true, 00:21:21.438 "enable_quickack": false, 00:21:21.438 "enable_placement_id": 0, 00:21:21.438 "enable_zerocopy_send_server": true, 00:21:21.438 "enable_zerocopy_send_client": false, 00:21:21.438 "zerocopy_threshold": 0, 00:21:21.438 "tls_version": 0, 00:21:21.438 "enable_ktls": false 00:21:21.438 } 00:21:21.438 }, 00:21:21.438 { 00:21:21.438 "method": "sock_impl_set_options", 00:21:21.438 "params": { 00:21:21.438 "impl_name": "posix", 00:21:21.438 "recv_buf_size": 2097152, 00:21:21.438 "send_buf_size": 2097152, 00:21:21.438 "enable_recv_pipe": true, 00:21:21.438 "enable_quickack": false, 00:21:21.438 "enable_placement_id": 0, 00:21:21.438 "enable_zerocopy_send_server": true, 00:21:21.438 "enable_zerocopy_send_client": false, 00:21:21.438 "zerocopy_threshold": 0, 00:21:21.438 "tls_version": 0, 00:21:21.438 "enable_ktls": false 00:21:21.438 } 00:21:21.438 } 00:21:21.438 ] 00:21:21.438 }, 00:21:21.438 { 00:21:21.438 "subsystem": "vmd", 00:21:21.438 "config": [] 00:21:21.438 }, 00:21:21.438 { 00:21:21.438 "subsystem": "accel", 00:21:21.438 "config": [ 00:21:21.438 { 00:21:21.438 "method": "accel_set_options", 00:21:21.438 "params": { 00:21:21.438 "small_cache_size": 128, 00:21:21.438 "large_cache_size": 16, 00:21:21.438 "task_count": 2048, 00:21:21.438 "sequence_count": 2048, 00:21:21.438 "buf_count": 2048 00:21:21.438 } 00:21:21.438 } 00:21:21.438 ] 00:21:21.438 }, 00:21:21.438 { 00:21:21.438 "subsystem": "bdev", 00:21:21.438 "config": [ 00:21:21.438 { 00:21:21.438 "method": "bdev_set_options", 00:21:21.438 "params": { 00:21:21.438 "bdev_io_pool_size": 65535, 00:21:21.438 "bdev_io_cache_size": 256, 00:21:21.438 "bdev_auto_examine": true, 00:21:21.438 "iobuf_small_cache_size": 128, 00:21:21.438 "iobuf_large_cache_size": 16 00:21:21.438 } 00:21:21.438 }, 00:21:21.438 { 00:21:21.438 "method": "bdev_raid_set_options", 00:21:21.438 "params": { 00:21:21.438 "process_window_size_kb": 1024 00:21:21.438 } 00:21:21.438 }, 00:21:21.438 { 00:21:21.438 "method": "bdev_iscsi_set_options", 00:21:21.438 "params": { 00:21:21.438 "timeout_sec": 30 00:21:21.438 } 00:21:21.438 }, 00:21:21.438 { 00:21:21.438 "method": "bdev_nvme_set_options", 00:21:21.438 "params": { 00:21:21.438 "action_on_timeout": "none", 00:21:21.438 "timeout_us": 0, 00:21:21.438 "timeout_admin_us": 0, 00:21:21.438 "keep_alive_timeout_ms": 
10000, 00:21:21.438 "arbitration_burst": 0, 00:21:21.438 "low_priority_weight": 0, 00:21:21.438 "medium_priority_weight": 0, 00:21:21.438 "high_priority_weight": 0, 00:21:21.438 "nvme_adminq_poll_period_us": 10000, 00:21:21.438 "nvme_ioq_poll_period_us": 0, 00:21:21.438 "io_queue_requests": 512, 00:21:21.438 "delay_cmd_submit": true, 00:21:21.438 "transport_retry_count": 4, 00:21:21.438 "bdev_retry_count": 3, 00:21:21.438 "transport_ack_timeout": 0, 00:21:21.438 "ctrlr_loss_timeout_sec": 0, 00:21:21.438 "reconnect_delay_sec": 0, 00:21:21.438 "fast_io_fail_timeout_sec": 0, 00:21:21.438 "disable_auto_failback": false, 00:21:21.438 "generate_uuids": false, 00:21:21.438 "transport_tos": 0, 00:21:21.438 "nvme_error_stat": false, 00:21:21.438 "rdma_srq_size": 0, 00:21:21.438 "io_path_stat": false, 00:21:21.438 "allow_accel_sequence": false, 00:21:21.438 "rdma_max_cq_size": 0, 00:21:21.438 "rdma_cm_event_timeout_ms": 0, 00:21:21.438 "dhchap_digests": [ 00:21:21.438 "sha256", 00:21:21.438 "sha384", 00:21:21.438 "sha512" 00:21:21.438 ], 00:21:21.438 "dhchap_dhgroups": [ 00:21:21.438 "null", 00:21:21.438 "ffdhe2048", 00:21:21.438 "ffdhe3072", 00:21:21.438 "ffdhe4096", 00:21:21.438 "ffdhe6144", 00:21:21.438 "ffdhe8192" 00:21:21.438 ] 00:21:21.438 } 00:21:21.438 }, 00:21:21.438 { 00:21:21.438 "method": "bdev_nvme_attach_controller", 00:21:21.438 "params": { 00:21:21.438 "name": "nvme0", 00:21:21.438 "trtype": "TCP", 00:21:21.438 "adrfam": "IPv4", 00:21:21.438 "traddr": "10.0.0.2", 00:21:21.438 "trsvcid": "4420", 00:21:21.438 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:21.438 "prchk_reftag": false, 00:21:21.438 "prchk_guard": false, 00:21:21.438 "ctrlr_loss_timeout_sec": 0, 00:21:21.438 "reconnect_delay_sec": 0, 00:21:21.438 "fast_io_fail_timeout_sec": 0, 00:21:21.438 "psk": "key0", 00:21:21.438 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:21.438 "hdgst": false, 00:21:21.438 "ddgst": false 00:21:21.438 } 00:21:21.438 }, 00:21:21.438 { 00:21:21.438 "method": "bdev_nvme_set_hotplug", 00:21:21.438 "params": { 00:21:21.438 "period_us": 100000, 00:21:21.438 "enable": false 00:21:21.438 } 00:21:21.438 }, 00:21:21.438 { 00:21:21.438 "method": "bdev_enable_histogram", 00:21:21.438 "params": { 00:21:21.438 "name": "nvme0n1", 00:21:21.438 "enable": true 00:21:21.438 } 00:21:21.438 }, 00:21:21.438 { 00:21:21.438 "method": "bdev_wait_for_examine" 00:21:21.438 } 00:21:21.438 ] 00:21:21.438 }, 00:21:21.438 { 00:21:21.438 "subsystem": "nbd", 00:21:21.438 "config": [] 00:21:21.438 } 00:21:21.438 ] 00:21:21.438 }' 00:21:21.438 16:02:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:21.438 16:02:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:21.438 [2024-07-15 16:02:50.184540] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 
00:21:21.438 [2024-07-15 16:02:50.184540] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization...
00:21:21.438 [2024-07-15 16:02:50.184584] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3803555 ]
00:21:21.438 EAL: No free 2048 kB hugepages reported on node 1
00:21:21.438 [2024-07-15 16:02:50.238418] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:21:21.438 [2024-07-15 16:02:50.312674] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:21:21.697 [2024-07-15 16:02:50.464174] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental
00:21:22.262 16:02:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:21:22.262 16:02:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0
00:21:22.262 16:02:51 nvmf_tcp.nvmf_tls -- target/tls.sh@277 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:21:22.262 16:02:51 nvmf_tcp.nvmf_tls -- target/tls.sh@277 -- # jq -r '.[].name'
00:21:22.262 16:02:51 nvmf_tcp.nvmf_tls -- target/tls.sh@277 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:21:22.262 16:02:51 nvmf_tcp.nvmf_tls -- target/tls.sh@278 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:21:22.520 Running I/O for 1 seconds...
00:21:23.460
00:21:23.460 Latency(us)
00:21:23.460 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:21:23.460 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:21:23.460 Verification LBA range: start 0x0 length 0x2000
00:21:23.460 nvme0n1 : 1.01 5216.81 20.38 0.00 0.00 24351.37 5014.93 46957.97
00:21:23.460 ===================================================================================================================
00:21:23.460 Total : 5216.81 20.38 0.00 0.00 24351.37 5014.93 46957.97
00:21:23.460 0
00:21:23.460 16:02:52 nvmf_tcp.nvmf_tls -- target/tls.sh@280 -- # trap - SIGINT SIGTERM EXIT
00:21:23.460 16:02:52 nvmf_tcp.nvmf_tls -- target/tls.sh@281 -- # cleanup
00:21:23.460 16:02:52 nvmf_tcp.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0
00:21:23.460 16:02:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@806 -- # type=--id
00:21:23.460 16:02:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@807 -- # id=0
00:21:23.460 16:02:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@808 -- # '[' --id = --pid ']'
00:21:23.460 16:02:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n'
00:21:23.460 16:02:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0
00:21:23.460 16:02:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]]
00:21:23.460 16:02:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@818 -- # for n in $shm_files
00:21:23.460 16:02:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0
00:21:23.460 nvmf_trace.0
00:21:23.460 16:02:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@821 -- # return 0
00:21:23.460 16:02:52 nvmf_tcp.nvmf_tls -- target/tls.sh@16 -- # killprocess 3803555
00:21:23.460 16:02:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 3803555 ']'
00:21:23.460 16:02:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 3803555
00:21:23.460 16:02:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname
00:21:23.460 16:02:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:21:23.460 16:02:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3803555
00:21:23.719 16:02:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1
00:21:23.719 16:02:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
00:21:23.719 16:02:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3803555'
00:21:23.719 killing process with pid 3803555
00:21:23.719 16:02:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 3803555
00:21:23.719 Received shutdown signal, test time was about 1.000000 seconds
00:21:23.719
00:21:23.719 Latency(us)
00:21:23.719 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:21:23.719 ===================================================================================================================
00:21:23.719 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:21:23.719 16:02:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 3803555
00:21:23.719 16:02:52 nvmf_tcp.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini
00:21:23.719 16:02:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@488 -- # nvmfcleanup
00:21:23.719 16:02:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@117 -- # sync
00:21:23.719 16:02:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:21:23.719 16:02:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@120 -- # set +e
00:21:23.719 16:02:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@121 -- # for i in {1..20}
00:21:23.719 16:02:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:21:23.719 rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
00:21:23.977 16:02:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:21:23.977 16:02:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@124 -- # set -e
00:21:23.977 16:02:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@125 -- # return 0
00:21:23.977 16:02:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@489 -- # '[' -n 3803312 ']'
00:21:23.977 16:02:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@490 -- # killprocess 3803312
00:21:23.977 16:02:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 3803312 ']'
00:21:23.977 16:02:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 3803312
00:21:23.977 16:02:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname
00:21:23.977 16:02:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:21:23.977 16:02:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3803312
00:21:23.977 16:02:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:21:23.977 16:02:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:21:23.977 16:02:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3803312'
00:21:23.977 killing process with pid 3803312
00:21:23.977 16:02:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 3803312
00:21:23.977 16:02:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 3803312
00:21:23.977 16:02:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:21:23.977 16:02:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:21:23.977 16:02:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:21:23.977 16:02:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:21:23.977 16:02:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@278 -- # remove_spdk_ns
00:21:23.977 16:02:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:21:23.977 16:02:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:21:23.977 16:02:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:21:26.558 16:02:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:21:26.558 16:02:54 nvmf_tcp.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.hfmpPCsovm /tmp/tmp.Ab5JtgSzab /tmp/tmp.ksN3WaWpt5
00:21:26.558
00:21:26.558 real 1m24.283s
00:21:26.558 user 2m10.427s
00:21:26.558 sys 0m28.178s
00:21:26.558 16:02:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@1124 -- # xtrace_disable
00:21:26.558 16:02:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x
00:21:26.558 ************************************
00:21:26.558 END TEST nvmf_tls
00:21:26.558 ************************************
00:21:26.558 16:02:54 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0
00:21:26.558 16:02:54 nvmf_tcp -- nvmf/nvmf.sh@62 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp
00:21:26.558 16:02:54 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']'
00:21:26.558 16:02:54 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable
00:21:26.558 16:02:54 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:21:26.558 ************************************
00:21:26.558 START TEST nvmf_fips
00:21:26.558 ************************************
00:21:26.558 16:02:55 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp
00:21:26.558 * Looking for test storage...
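fips.sh, which starts here, gates on the host's OpenSSL before touching the network: it requires version 3.0.0 or newer, a FIPS provider loaded alongside the base provider, and it treats a failing MD5 as proof that FIPS restrictions are actually enforced (the "Error setting digest" lines further down are that negative check passing). A condensed paraphrase of the gate, using sort -V in place of the script's cmp_versions walk traced below; this is a sketch, not the script's literal code:

# Condensed paraphrase of the fips.sh preflight checks.
target=3.0.0
ver=$(openssl version | awk '{print $2}')   # e.g. "3.0.9" on this host
# Require ver >= target (sort -C succeeds only if the two lines are in order):
printf '%s\n%s\n' "$target" "$ver" | sort -V -C || {
    echo "OpenSSL $ver is older than $target"; exit 1; }
# A FIPS provider must be listed next to the base provider:
openssl list -providers | grep -i name | grep -qi fips || {
    echo "no FIPS provider available"; exit 1; }
# With FIPS enforced, a non-approved digest such as MD5 must be rejected:
if echo test | openssl md5 >/dev/null 2>&1; then
    echo "MD5 unexpectedly usable; FIPS mode is not enforced"; exit 1
fi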
00:21:26.558 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:21:26.558 16:02:55 nvmf_tcp.nvmf_fips -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:26.558 16:02:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:21:26.558 16:02:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:26.558 16:02:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:26.558 16:02:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:26.558 16:02:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:26.558 16:02:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:26.558 16:02:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:26.558 16:02:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:26.558 16:02:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:26.558 16:02:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:26.558 16:02:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:26.558 16:02:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:26.558 16:02:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:21:26.558 16:02:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:26.558 16:02:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:26.558 16:02:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:26.558 16:02:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:26.558 16:02:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:26.558 16:02:55 nvmf_tcp.nvmf_fips -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:26.558 16:02:55 nvmf_tcp.nvmf_fips -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:26.558 16:02:55 nvmf_tcp.nvmf_fips -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:26.558 16:02:55 nvmf_tcp.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:26.558 16:02:55 nvmf_tcp.nvmf_fips -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:26.558 16:02:55 
nvmf_tcp.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:26.558 16:02:55 nvmf_tcp.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:21:26.558 16:02:55 nvmf_tcp.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:26.558 16:02:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@47 -- # : 0 00:21:26.558 16:02:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:26.558 16:02:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:26.558 16:02:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:26.558 16:02:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:26.558 16:02:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:26.558 16:02:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:26.558 16:02:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:26.558 16:02:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:26.558 16:02:55 nvmf_tcp.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:21:26.558 16:02:55 nvmf_tcp.nvmf_fips -- fips/fips.sh@89 -- # check_openssl_version 00:21:26.558 16:02:55 nvmf_tcp.nvmf_fips -- fips/fips.sh@83 -- # local target=3.0.0 00:21:26.558 16:02:55 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # openssl version 00:21:26.558 16:02:55 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # awk '{print $2}' 00:21:26.558 16:02:55 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # ge 3.0.9 3.0.0 00:21:26.558 16:02:55 nvmf_tcp.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 3.0.9 '>=' 3.0.0 00:21:26.558 16:02:55 nvmf_tcp.nvmf_fips -- scripts/common.sh@330 -- # local ver1 ver1_l 00:21:26.558 16:02:55 nvmf_tcp.nvmf_fips -- scripts/common.sh@331 -- # local ver2 ver2_l 00:21:26.558 16:02:55 nvmf_tcp.nvmf_fips -- scripts/common.sh@333 -- # IFS=.-: 00:21:26.558 16:02:55 nvmf_tcp.nvmf_fips -- scripts/common.sh@333 -- # read -ra ver1 00:21:26.558 16:02:55 nvmf_tcp.nvmf_fips -- scripts/common.sh@334 -- # IFS=.-: 00:21:26.558 16:02:55 nvmf_tcp.nvmf_fips -- scripts/common.sh@334 -- # read -ra ver2 00:21:26.558 16:02:55 nvmf_tcp.nvmf_fips -- scripts/common.sh@335 -- # local 'op=>=' 00:21:26.558 16:02:55 nvmf_tcp.nvmf_fips -- scripts/common.sh@337 -- # ver1_l=3 00:21:26.558 16:02:55 nvmf_tcp.nvmf_fips -- scripts/common.sh@338 -- # ver2_l=3 00:21:26.558 16:02:55 nvmf_tcp.nvmf_fips -- scripts/common.sh@340 -- # local lt=0 gt=0 eq=0 
v 00:21:26.558 16:02:55 nvmf_tcp.nvmf_fips -- scripts/common.sh@341 -- # case "$op" in 00:21:26.558 16:02:55 nvmf_tcp.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:21:26.558 16:02:55 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v = 0 )) 00:21:26.558 16:02:55 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:21:26.558 16:02:55 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 3 00:21:26.558 16:02:55 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:21:26.558 16:02:55 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:21:26.558 16:02:55 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:21:26.558 16:02:55 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=3 00:21:26.558 16:02:55 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 3 00:21:26.558 16:02:55 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:21:26.558 16:02:55 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:21:26.558 16:02:55 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:21:26.558 16:02:55 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=3 00:21:26.558 16:02:55 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:21:26.558 16:02:55 nvmf_tcp.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:21:26.558 16:02:55 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:21:26.558 16:02:55 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:21:26.558 16:02:55 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 0 00:21:26.558 16:02:55 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:21:26.558 16:02:55 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:21:26.558 16:02:55 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:21:26.558 16:02:55 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=0 00:21:26.558 16:02:55 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:21:26.558 16:02:55 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:21:26.558 16:02:55 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:21:26.558 16:02:55 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:21:26.558 16:02:55 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:21:26.558 16:02:55 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:21:26.559 16:02:55 nvmf_tcp.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:21:26.559 16:02:55 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:21:26.559 16:02:55 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:26.559 16:02:55 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 9 00:21:26.559 16:02:55 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=9 00:21:26.559 16:02:55 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 9 =~ ^[0-9]+$ ]] 00:21:26.559 16:02:55 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 9 00:21:26.559 16:02:55 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=9 00:21:26.559 16:02:55 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:21:26.559 16:02:55 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:21:26.559 16:02:55 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:21:26.559 16:02:55 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:21:26.559 16:02:55 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:21:26.559 16:02:55 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:21:26.559 16:02:55 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # return 0 00:21:26.559 16:02:55 nvmf_tcp.nvmf_fips -- fips/fips.sh@95 -- # openssl info -modulesdir 00:21:26.559 16:02:55 nvmf_tcp.nvmf_fips -- fips/fips.sh@95 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:21:26.559 16:02:55 nvmf_tcp.nvmf_fips -- fips/fips.sh@100 -- # openssl fipsinstall -help 00:21:26.559 16:02:55 nvmf_tcp.nvmf_fips -- fips/fips.sh@100 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:21:26.559 16:02:55 nvmf_tcp.nvmf_fips -- fips/fips.sh@101 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:21:26.559 16:02:55 nvmf_tcp.nvmf_fips -- fips/fips.sh@104 -- # export callback=build_openssl_config 00:21:26.559 16:02:55 nvmf_tcp.nvmf_fips -- fips/fips.sh@104 -- # callback=build_openssl_config 00:21:26.559 16:02:55 nvmf_tcp.nvmf_fips -- fips/fips.sh@113 -- # build_openssl_config 00:21:26.559 16:02:55 nvmf_tcp.nvmf_fips -- fips/fips.sh@37 -- # cat 00:21:26.559 16:02:55 nvmf_tcp.nvmf_fips -- fips/fips.sh@57 -- # [[ ! 
-t 0 ]] 00:21:26.559 16:02:55 nvmf_tcp.nvmf_fips -- fips/fips.sh@58 -- # cat - 00:21:26.559 16:02:55 nvmf_tcp.nvmf_fips -- fips/fips.sh@114 -- # export OPENSSL_CONF=spdk_fips.conf 00:21:26.559 16:02:55 nvmf_tcp.nvmf_fips -- fips/fips.sh@114 -- # OPENSSL_CONF=spdk_fips.conf 00:21:26.559 16:02:55 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # mapfile -t providers 00:21:26.559 16:02:55 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # openssl list -providers 00:21:26.559 16:02:55 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # grep name 00:21:26.559 16:02:55 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # (( 2 != 2 )) 00:21:26.559 16:02:55 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # [[ name: openssl base provider != *base* ]] 00:21:26.559 16:02:55 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:21:26.559 16:02:55 nvmf_tcp.nvmf_fips -- fips/fips.sh@127 -- # NOT openssl md5 /dev/fd/62 00:21:26.559 16:02:55 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@648 -- # local es=0 00:21:26.559 16:02:55 nvmf_tcp.nvmf_fips -- fips/fips.sh@127 -- # : 00:21:26.559 16:02:55 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@650 -- # valid_exec_arg openssl md5 /dev/fd/62 00:21:26.559 16:02:55 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@636 -- # local arg=openssl 00:21:26.559 16:02:55 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:26.559 16:02:55 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # type -t openssl 00:21:26.559 16:02:55 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:26.559 16:02:55 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # type -P openssl 00:21:26.559 16:02:55 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:26.559 16:02:55 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # arg=/usr/bin/openssl 00:21:26.559 16:02:55 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # [[ -x /usr/bin/openssl ]] 00:21:26.559 16:02:55 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@651 -- # openssl md5 /dev/fd/62 00:21:26.559 Error setting digest 00:21:26.559 00F2F44FC87F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:373:Global default library context, Algorithm (MD5 : 97), Properties () 00:21:26.559 00F2F44FC87F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:254: 00:21:26.559 16:02:55 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@651 -- # es=1 00:21:26.559 16:02:55 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:21:26.559 16:02:55 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:21:26.559 16:02:55 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:21:26.559 16:02:55 nvmf_tcp.nvmf_fips -- fips/fips.sh@130 -- # nvmftestinit 00:21:26.559 16:02:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:26.559 16:02:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:26.559 16:02:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:26.559 16:02:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:26.559 16:02:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:26.559 16:02:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:26.559 16:02:55 nvmf_tcp.nvmf_fips -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:26.559 16:02:55 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:26.559 16:02:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:21:26.559 16:02:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:21:26.559 16:02:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@285 -- # xtrace_disable 00:21:26.559 16:02:55 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:21:31.828 16:03:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:31.828 16:03:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@291 -- # pci_devs=() 00:21:31.828 16:03:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:31.828 16:03:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:31.828 16:03:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:31.828 16:03:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:31.828 16:03:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:31.828 16:03:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@295 -- # net_devs=() 00:21:31.828 16:03:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:31.828 16:03:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@296 -- # e810=() 00:21:31.828 16:03:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@296 -- # local -ga e810 00:21:31.828 16:03:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@297 -- # x722=() 00:21:31.828 16:03:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@297 -- # local -ga x722 00:21:31.828 16:03:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@298 -- # mlx=() 00:21:31.828 16:03:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@298 -- # local -ga mlx 00:21:31.828 16:03:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:31.828 16:03:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:31.828 16:03:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:31.828 16:03:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:31.828 16:03:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:31.828 16:03:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:31.829 16:03:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:31.829 16:03:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:31.829 16:03:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:31.829 16:03:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:31.829 16:03:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:31.829 16:03:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:31.829 16:03:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:31.829 16:03:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:31.829 16:03:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:31.829 16:03:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:31.829 16:03:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:31.829 
16:03:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:31.829 16:03:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:21:31.829 Found 0000:86:00.0 (0x8086 - 0x159b) 00:21:31.829 16:03:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:31.829 16:03:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:31.829 16:03:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:31.829 16:03:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:31.829 16:03:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:31.829 16:03:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:31.829 16:03:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:21:31.829 Found 0000:86:00.1 (0x8086 - 0x159b) 00:21:31.829 16:03:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:31.829 16:03:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:31.829 16:03:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:31.829 16:03:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:31.829 16:03:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:31.829 16:03:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:31.829 16:03:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:31.829 16:03:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:31.829 16:03:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:31.829 16:03:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:31.829 16:03:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:31.829 16:03:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:31.829 16:03:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:31.829 16:03:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:31.829 16:03:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:31.829 16:03:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:21:31.829 Found net devices under 0000:86:00.0: cvl_0_0 00:21:31.829 16:03:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:31.829 16:03:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:31.829 16:03:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:31.829 16:03:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:31.829 16:03:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:31.829 16:03:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:31.829 16:03:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:31.829 16:03:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:31.829 16:03:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:21:31.829 Found net devices under 0000:86:00.1: cvl_0_1 00:21:31.829 16:03:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:21:31.829 16:03:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:31.829 16:03:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # is_hw=yes 00:21:31.829 16:03:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:21:31.829 16:03:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:21:31.829 16:03:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:21:31.829 16:03:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:31.829 16:03:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:31.829 16:03:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:31.829 16:03:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:31.829 16:03:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:31.829 16:03:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:31.829 16:03:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:31.829 16:03:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:31.829 16:03:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:31.829 16:03:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:31.829 16:03:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:31.829 16:03:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:31.829 16:03:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:31.829 16:03:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:31.829 16:03:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:31.829 16:03:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:31.829 16:03:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:31.829 16:03:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:31.829 16:03:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:31.829 16:03:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:31.829 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:31.829 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.178 ms 00:21:31.829 00:21:31.829 --- 10.0.0.2 ping statistics --- 00:21:31.829 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:31.829 rtt min/avg/max/mdev = 0.178/0.178/0.178/0.000 ms 00:21:31.829 16:03:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:31.829 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:31.829 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.163 ms 00:21:31.829 00:21:31.829 --- 10.0.0.1 ping statistics --- 00:21:31.829 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:31.829 rtt min/avg/max/mdev = 0.163/0.163/0.163/0.000 ms 00:21:31.829 16:03:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:31.829 16:03:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@422 -- # return 0 00:21:31.829 16:03:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:31.829 16:03:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:31.829 16:03:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:31.829 16:03:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:31.829 16:03:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:31.829 16:03:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:31.829 16:03:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:31.829 16:03:00 nvmf_tcp.nvmf_fips -- fips/fips.sh@131 -- # nvmfappstart -m 0x2 00:21:31.829 16:03:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:31.829 16:03:00 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:31.829 16:03:00 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:21:31.829 16:03:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@481 -- # nvmfpid=3807390 00:21:31.829 16:03:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:21:31.829 16:03:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@482 -- # waitforlisten 3807390 00:21:31.829 16:03:00 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@829 -- # '[' -z 3807390 ']' 00:21:31.829 16:03:00 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:31.829 16:03:00 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:31.829 16:03:00 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:31.829 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:31.829 16:03:00 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:31.829 16:03:00 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:21:31.829 [2024-07-15 16:03:00.611492] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 00:21:31.829 [2024-07-15 16:03:00.611541] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:31.829 EAL: No free 2048 kB hugepages reported on node 1 00:21:31.829 [2024-07-15 16:03:00.670997] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:31.829 [2024-07-15 16:03:00.748212] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:31.829 [2024-07-15 16:03:00.748253] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:21:31.829 [2024-07-15 16:03:00.748260] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:31.829 [2024-07-15 16:03:00.748267] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:31.830 [2024-07-15 16:03:00.748272] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:31.830 [2024-07-15 16:03:00.748291] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:32.765 16:03:01 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:32.765 16:03:01 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@862 -- # return 0 00:21:32.765 16:03:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:32.765 16:03:01 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:32.765 16:03:01 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:21:32.765 16:03:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:32.765 16:03:01 nvmf_tcp.nvmf_fips -- fips/fips.sh@133 -- # trap cleanup EXIT 00:21:32.765 16:03:01 nvmf_tcp.nvmf_fips -- fips/fips.sh@136 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:21:32.765 16:03:01 nvmf_tcp.nvmf_fips -- fips/fips.sh@137 -- # key_path=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:21:32.765 16:03:01 nvmf_tcp.nvmf_fips -- fips/fips.sh@138 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:21:32.765 16:03:01 nvmf_tcp.nvmf_fips -- fips/fips.sh@139 -- # chmod 0600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:21:32.765 16:03:01 nvmf_tcp.nvmf_fips -- fips/fips.sh@141 -- # setup_nvmf_tgt_conf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:21:32.765 16:03:01 nvmf_tcp.nvmf_fips -- fips/fips.sh@22 -- # local key=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:21:32.765 16:03:01 nvmf_tcp.nvmf_fips -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:21:32.765 [2024-07-15 16:03:01.599547] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:32.765 [2024-07-15 16:03:01.615556] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:32.765 [2024-07-15 16:03:01.615742] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:32.765 [2024-07-15 16:03:01.643696] tcp.c:3693:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:21:32.765 malloc0 00:21:32.765 16:03:01 nvmf_tcp.nvmf_fips -- fips/fips.sh@144 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:32.765 16:03:01 nvmf_tcp.nvmf_fips -- fips/fips.sh@147 -- # bdevperf_pid=3807702 00:21:32.765 16:03:01 nvmf_tcp.nvmf_fips -- fips/fips.sh@148 -- # waitforlisten 3807702 /var/tmp/bdevperf.sock 00:21:32.765 16:03:01 nvmf_tcp.nvmf_fips -- fips/fips.sh@145 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:32.765 16:03:01 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@829 -- # '[' -z 3807702 ']' 00:21:32.765 16:03:01 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:32.765 16:03:01 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@834 -- 
# local max_retries=100 00:21:32.765 16:03:01 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:32.765 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:32.765 16:03:01 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:32.765 16:03:01 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:21:33.023 [2024-07-15 16:03:01.731907] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 00:21:33.023 [2024-07-15 16:03:01.731955] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3807702 ] 00:21:33.023 EAL: No free 2048 kB hugepages reported on node 1 00:21:33.023 [2024-07-15 16:03:01.782671] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:33.024 [2024-07-15 16:03:01.860718] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:33.590 16:03:02 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:33.590 16:03:02 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@862 -- # return 0 00:21:33.590 16:03:02 nvmf_tcp.nvmf_fips -- fips/fips.sh@150 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:21:33.850 [2024-07-15 16:03:02.674358] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:33.850 [2024-07-15 16:03:02.674446] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:21:33.850 TLSTESTn1 00:21:33.850 16:03:02 nvmf_tcp.nvmf_fips -- fips/fips.sh@154 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:34.108 Running I/O for 10 seconds... 
00:21:44.074
00:21:44.074 Latency(us)
00:21:44.074 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:21:44.074 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:21:44.074 Verification LBA range: start 0x0 length 0x2000
00:21:44.074 TLSTESTn1 : 10.01 5460.79 21.33 0.00 0.00 23403.57 4957.94 53340.61
00:21:44.074 ===================================================================================================================
00:21:44.074 Total : 5460.79 21.33 0.00 0.00 23403.57 4957.94 53340.61
00:21:44.074 0
00:21:44.074 16:03:12 nvmf_tcp.nvmf_fips -- fips/fips.sh@1 -- # cleanup
00:21:44.074 16:03:12 nvmf_tcp.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0
00:21:44.074 16:03:12 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@806 -- # type=--id
00:21:44.074 16:03:12 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@807 -- # id=0
00:21:44.074 16:03:12 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@808 -- # '[' --id = --pid ']'
00:21:44.074 16:03:12 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n'
00:21:44.074 16:03:12 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0
00:21:44.074 16:03:12 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]]
00:21:44.074 16:03:12 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@818 -- # for n in $shm_files
00:21:44.074 16:03:12 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0
00:21:44.074 nvmf_trace.0
00:21:44.074 16:03:12 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@821 -- # return 0
00:21:44.074 16:03:12 nvmf_tcp.nvmf_fips -- fips/fips.sh@16 -- # killprocess 3807702
00:21:44.074 16:03:12 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@948 -- # '[' -z 3807702 ']'
00:21:44.074 16:03:12 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # kill -0 3807702
00:21:44.074 16:03:13 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # uname
00:21:44.074 16:03:13 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:21:44.074 16:03:13 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3807702
00:21:44.332 16:03:13 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # process_name=reactor_2
00:21:44.332 16:03:13 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']'
00:21:44.332 16:03:13 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3807702'
00:21:44.332 killing process with pid 3807702
00:21:44.332 16:03:13 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@967 -- # kill 3807702
00:21:44.332 Received shutdown signal, test time was about 10.000000 seconds
00:21:44.332
00:21:44.332 Latency(us)
00:21:44.332 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:21:44.332 ===================================================================================================================
00:21:44.332 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:21:44.332 [2024-07-15 16:03:13.046936] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times
00:21:44.332 16:03:13 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@972 -- # wait 3807702
00:21:44.332 16:03:13 nvmf_tcp.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini
00:21:44.332 16:03:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@488 -- # nvmfcleanup
00:21:44.332 16:03:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@117 -- # sync
00:21:44.332 16:03:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:21:44.332 16:03:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@120 -- # set +e
00:21:44.332 16:03:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@121 -- # for i in {1..20}
00:21:44.332 16:03:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:21:44.332 rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
00:21:44.590 16:03:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:21:44.590 16:03:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@124 -- # set -e
00:21:44.590 16:03:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@125 -- # return 0
00:21:44.590 16:03:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@489 -- # '[' -n 3807390 ']'
00:21:44.590 16:03:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@490 -- # killprocess 3807390
00:21:44.590 16:03:13 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@948 -- # '[' -z 3807390 ']'
00:21:44.590 16:03:13 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # kill -0 3807390
00:21:44.590 16:03:13 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # uname
00:21:44.590 16:03:13 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:21:44.590 16:03:13 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3807390
00:21:44.590 16:03:13 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # process_name=reactor_1
00:21:44.590 16:03:13 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
00:21:44.590 16:03:13 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3807390'
00:21:44.590 killing process with pid 3807390
00:21:44.590 16:03:13 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@967 -- # kill 3807390
00:21:44.590 [2024-07-15 16:03:13.324138] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times
00:21:44.590 16:03:13 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@972 -- # wait 3807390
00:21:44.590 16:03:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:21:44.590 16:03:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:21:44.590 16:03:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:21:44.590 16:03:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:21:44.590 16:03:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@278 -- # remove_spdk_ns
00:21:44.590 16:03:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:21:44.590 16:03:13 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:21:44.590 16:03:13 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:21:47.123 16:03:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:21:47.123 16:03:15 nvmf_tcp.nvmf_fips -- fips/fips.sh@18 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt
00:21:47.123
00:21:47.123 real 0m20.557s
00:21:47.123 user 0m22.778s
00:21:47.123 sys 0m8.588s
00:21:47.123 16:03:15 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@1124 -- # xtrace_disable
00:21:47.123 16:03:15 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x
00:21:47.123 ************************************
00:21:47.123 END TEST nvmf_fips
00:21:47.123 ************************************ 00:21:47.123 16:03:15 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:21:47.123 16:03:15 nvmf_tcp -- nvmf/nvmf.sh@65 -- # '[' 0 -eq 1 ']' 00:21:47.123 16:03:15 nvmf_tcp -- nvmf/nvmf.sh@71 -- # [[ phy == phy ]] 00:21:47.123 16:03:15 nvmf_tcp -- nvmf/nvmf.sh@72 -- # '[' tcp = tcp ']' 00:21:47.123 16:03:15 nvmf_tcp -- nvmf/nvmf.sh@73 -- # gather_supported_nvmf_pci_devs 00:21:47.123 16:03:15 nvmf_tcp -- nvmf/common.sh@285 -- # xtrace_disable 00:21:47.123 16:03:15 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:52.391 16:03:20 nvmf_tcp -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:52.391 16:03:20 nvmf_tcp -- nvmf/common.sh@291 -- # pci_devs=() 00:21:52.392 16:03:20 nvmf_tcp -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:52.392 16:03:20 nvmf_tcp -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:52.392 16:03:20 nvmf_tcp -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:52.392 16:03:20 nvmf_tcp -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:52.392 16:03:20 nvmf_tcp -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:52.392 16:03:20 nvmf_tcp -- nvmf/common.sh@295 -- # net_devs=() 00:21:52.392 16:03:20 nvmf_tcp -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:52.392 16:03:20 nvmf_tcp -- nvmf/common.sh@296 -- # e810=() 00:21:52.392 16:03:20 nvmf_tcp -- nvmf/common.sh@296 -- # local -ga e810 00:21:52.392 16:03:20 nvmf_tcp -- nvmf/common.sh@297 -- # x722=() 00:21:52.392 16:03:20 nvmf_tcp -- nvmf/common.sh@297 -- # local -ga x722 00:21:52.392 16:03:20 nvmf_tcp -- nvmf/common.sh@298 -- # mlx=() 00:21:52.392 16:03:20 nvmf_tcp -- nvmf/common.sh@298 -- # local -ga mlx 00:21:52.392 16:03:20 nvmf_tcp -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:52.392 16:03:20 nvmf_tcp -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:52.392 16:03:20 nvmf_tcp -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:52.392 16:03:20 nvmf_tcp -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:52.392 16:03:20 nvmf_tcp -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:52.392 16:03:20 nvmf_tcp -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:52.392 16:03:20 nvmf_tcp -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:52.392 16:03:20 nvmf_tcp -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:52.392 16:03:20 nvmf_tcp -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:52.392 16:03:20 nvmf_tcp -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:52.392 16:03:20 nvmf_tcp -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:52.392 16:03:20 nvmf_tcp -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:52.392 16:03:20 nvmf_tcp -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:52.392 16:03:20 nvmf_tcp -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:52.392 16:03:20 nvmf_tcp -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:52.392 16:03:20 nvmf_tcp -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:52.392 16:03:20 nvmf_tcp -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:52.392 16:03:20 nvmf_tcp -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:52.392 16:03:20 nvmf_tcp -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:21:52.392 Found 0000:86:00.0 (0x8086 - 0x159b) 00:21:52.392 16:03:20 nvmf_tcp -- 
nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:52.392 16:03:20 nvmf_tcp -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:52.392 16:03:20 nvmf_tcp -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:52.392 16:03:20 nvmf_tcp -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:52.392 16:03:20 nvmf_tcp -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:52.392 16:03:20 nvmf_tcp -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:52.392 16:03:20 nvmf_tcp -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:21:52.392 Found 0000:86:00.1 (0x8086 - 0x159b) 00:21:52.392 16:03:20 nvmf_tcp -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:52.392 16:03:20 nvmf_tcp -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:52.392 16:03:20 nvmf_tcp -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:52.392 16:03:20 nvmf_tcp -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:52.392 16:03:20 nvmf_tcp -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:52.392 16:03:20 nvmf_tcp -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:52.392 16:03:20 nvmf_tcp -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:52.392 16:03:20 nvmf_tcp -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:52.392 16:03:20 nvmf_tcp -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:52.392 16:03:20 nvmf_tcp -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:52.392 16:03:20 nvmf_tcp -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:52.392 16:03:20 nvmf_tcp -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:52.392 16:03:20 nvmf_tcp -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:52.392 16:03:20 nvmf_tcp -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:52.392 16:03:20 nvmf_tcp -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:52.392 16:03:20 nvmf_tcp -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:21:52.392 Found net devices under 0000:86:00.0: cvl_0_0 00:21:52.392 16:03:20 nvmf_tcp -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:52.392 16:03:20 nvmf_tcp -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:52.392 16:03:20 nvmf_tcp -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:52.392 16:03:20 nvmf_tcp -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:52.392 16:03:20 nvmf_tcp -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:52.392 16:03:20 nvmf_tcp -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:52.392 16:03:20 nvmf_tcp -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:52.392 16:03:20 nvmf_tcp -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:52.392 16:03:20 nvmf_tcp -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:21:52.392 Found net devices under 0000:86:00.1: cvl_0_1 00:21:52.392 16:03:20 nvmf_tcp -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:52.392 16:03:20 nvmf_tcp -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:52.392 16:03:20 nvmf_tcp -- nvmf/nvmf.sh@74 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:52.392 16:03:20 nvmf_tcp -- nvmf/nvmf.sh@75 -- # (( 2 > 0 )) 00:21:52.392 16:03:20 nvmf_tcp -- nvmf/nvmf.sh@76 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:21:52.392 16:03:20 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:21:52.392 16:03:20 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 
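gather_supported_nvmf_pci_devs, traced above for nvmf.sh and about to run again inside perf_adq, matches PCI functions against the known E810/X722/Mellanox ID tables and then collects the kernel interfaces sysfs exposes beneath each function; the "Found net devices under 0000:86:00.x" lines come from that walk. A rough standalone equivalent for the E810 (8086:0x159b) functions this host reports is sketched here; it is an illustration of the same sysfs lookup, not the harness function itself:

# Rough standalone equivalent of the vendor/device walk for an Intel E810 NIC.
for pci in /sys/bus/pci/devices/*; do
    # vendor/device are sysfs attributes holding hex IDs such as 0x8086/0x159b
    [[ $(<"$pci/vendor") == 0x8086 && $(<"$pci/device") == 0x159b ]] || continue
    for net in "$pci"/net/*; do
        [[ -e $net ]] && echo "Found net devices under ${pci##*/}: ${net##*/}"
    done
done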
00:21:52.392 16:03:20 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:52.392 ************************************ 00:21:52.392 START TEST nvmf_perf_adq 00:21:52.392 ************************************ 00:21:52.392 16:03:20 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:21:52.392 * Looking for test storage... 00:21:52.392 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:21:52.392 16:03:20 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:52.392 16:03:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@7 -- # uname -s 00:21:52.392 16:03:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:52.392 16:03:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:52.392 16:03:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:52.392 16:03:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:52.392 16:03:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:52.392 16:03:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:52.392 16:03:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:52.392 16:03:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:52.392 16:03:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:52.392 16:03:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:52.392 16:03:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:52.392 16:03:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:21:52.392 16:03:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:52.392 16:03:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:52.392 16:03:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:52.392 16:03:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:52.392 16:03:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:52.392 16:03:20 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:52.392 16:03:20 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:52.392 16:03:20 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:52.392 16:03:20 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:52.392 16:03:20 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:52.392 16:03:20 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:52.392 16:03:20 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@5 -- # export PATH 00:21:52.392 16:03:20 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:52.392 16:03:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@47 -- # : 0 00:21:52.392 16:03:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:52.392 16:03:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:52.392 16:03:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:52.392 16:03:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:52.392 16:03:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:52.392 16:03:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:52.392 16:03:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:52.392 16:03:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:52.392 16:03:20 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:21:52.392 16:03:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:21:52.392 16:03:20 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:56.614 16:03:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:56.614 16:03:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:21:56.614 16:03:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:56.614 16:03:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:56.614 16:03:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:56.614 16:03:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:56.614 16:03:25 nvmf_tcp.nvmf_perf_adq -- 
nvmf/common.sh@293 -- # local -A pci_drivers 00:21:56.614 16:03:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:21:56.614 16:03:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:56.614 16:03:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:21:56.614 16:03:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:21:56.614 16:03:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:21:56.614 16:03:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:21:56.614 16:03:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:21:56.614 16:03:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:21:56.614 16:03:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:56.614 16:03:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:56.614 16:03:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:56.614 16:03:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:56.614 16:03:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:56.614 16:03:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:56.614 16:03:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:56.614 16:03:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:56.614 16:03:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:56.614 16:03:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:56.614 16:03:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:56.614 16:03:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:56.614 16:03:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:56.614 16:03:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:56.614 16:03:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:56.614 16:03:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:56.614 16:03:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:56.614 16:03:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:56.614 16:03:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:21:56.614 Found 0000:86:00.0 (0x8086 - 0x159b) 00:21:56.614 16:03:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:56.614 16:03:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:56.614 16:03:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:56.614 16:03:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:56.614 16:03:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:56.614 16:03:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:56.614 16:03:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:21:56.614 Found 0000:86:00.1 (0x8086 - 0x159b) 
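The e810/x722/mlx arrays rebuilt here (the harness re-runs the discovery before every nvmftestinit) act as an allow-list of supported NICs keyed by PCI vendor:device ID. Only the raw IDs appear in the trace; the family names below are my gloss, not something the log states. Because NET_TYPE=phy and the transport is tcp, the script then keeps just the e810 list (pci_devs=("${e810[@]}")), which is why exactly the two 0x159b ports survive the filter:

    # Allow-list as it appears in the trace; the names are an assumption.
    declare -A nic_family=(
        [0x8086:0x1592]="Intel E810-C"
        [0x8086:0x159b]="Intel E810-XXV"    # the two ports in this run
        [0x8086:0x37d2]="Intel X722"
        [0x15b3:0x1013]="Mellanox ConnectX-4"
        [0x15b3:0x1015]="Mellanox ConnectX-4 Lx"
        [0x15b3:0x1017]="Mellanox ConnectX-5"
        [0x15b3:0x1019]="Mellanox ConnectX-5 Ex"
        [0x15b3:0x101d]="Mellanox ConnectX-6 Dx"
        [0x15b3:0x1021]="Mellanox ConnectX-7"
        [0x15b3:0xa2d6]="Mellanox BlueField-2"
        [0x15b3:0xa2dc]="Mellanox BlueField-3"
    )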
00:21:56.614 16:03:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:56.614 16:03:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:56.614 16:03:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:56.614 16:03:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:56.614 16:03:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:56.614 16:03:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:56.614 16:03:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:56.614 16:03:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:56.614 16:03:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:56.614 16:03:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:56.614 16:03:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:56.614 16:03:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:56.614 16:03:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:56.614 16:03:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:56.614 16:03:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:56.614 16:03:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:21:56.614 Found net devices under 0000:86:00.0: cvl_0_0 00:21:56.614 16:03:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:56.614 16:03:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:56.614 16:03:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:56.614 16:03:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:56.614 16:03:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:56.614 16:03:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:56.614 16:03:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:56.614 16:03:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:56.614 16:03:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:21:56.614 Found net devices under 0000:86:00.1: cvl_0_1 00:21:56.614 16:03:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:56.614 16:03:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:56.614 16:03:25 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:56.614 16:03:25 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:21:56.614 16:03:25 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:21:56.614 16:03:25 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@60 -- # adq_reload_driver 00:21:56.614 16:03:25 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@53 -- # rmmod ice 00:21:57.989 16:03:26 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@54 -- # modprobe ice 00:21:59.890 16:03:28 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@55 -- # sleep 5 00:22:05.166 16:03:33 
nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@68 -- # nvmftestinit 00:22:05.166 16:03:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:05.166 16:03:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:05.166 16:03:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:05.166 16:03:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:05.166 16:03:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:05.166 16:03:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:05.166 16:03:33 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:05.166 16:03:33 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:05.166 16:03:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:05.166 16:03:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:05.166 16:03:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:22:05.166 16:03:33 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:05.166 16:03:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:05.166 16:03:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:22:05.166 16:03:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:05.166 16:03:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:05.166 16:03:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:05.166 16:03:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:05.166 16:03:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:05.166 16:03:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:22:05.166 16:03:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:05.166 16:03:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:22:05.166 16:03:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:22:05.166 16:03:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:22:05.166 16:03:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:22:05.166 16:03:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:22:05.167 16:03:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:22:05.167 16:03:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:05.167 16:03:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:05.167 16:03:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:05.167 16:03:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:05.167 16:03:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:05.167 16:03:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:05.167 16:03:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:05.167 16:03:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:05.167 16:03:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:05.167 16:03:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:05.167 16:03:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:05.167 16:03:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:05.167 16:03:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:05.167 16:03:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:05.167 16:03:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:05.167 16:03:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:05.167 16:03:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:05.167 16:03:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:05.167 16:03:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:22:05.167 Found 0000:86:00.0 (0x8086 - 0x159b) 00:22:05.167 16:03:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:05.167 16:03:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:05.167 16:03:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:05.167 16:03:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:05.167 16:03:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:05.167 16:03:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:05.167 16:03:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:22:05.167 Found 0000:86:00.1 (0x8086 - 0x159b) 00:22:05.167 16:03:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:05.167 16:03:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:05.167 16:03:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:05.167 16:03:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:05.167 16:03:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:05.167 16:03:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:05.167 16:03:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:05.167 16:03:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:05.167 16:03:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:05.167 16:03:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:05.167 16:03:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:05.167 16:03:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:05.167 16:03:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:05.167 16:03:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:05.167 16:03:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:05.167 16:03:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:22:05.167 Found net devices under 0000:86:00.0: cvl_0_0 00:22:05.167 16:03:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:22:05.167 16:03:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:05.167 16:03:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:05.167 16:03:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:05.167 16:03:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:05.167 16:03:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:05.167 16:03:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:05.167 16:03:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:05.167 16:03:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:22:05.167 Found net devices under 0000:86:00.1: cvl_0_1 00:22:05.167 16:03:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:05.167 16:03:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:05.167 16:03:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # is_hw=yes 00:22:05.167 16:03:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:05.167 16:03:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:05.167 16:03:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:05.167 16:03:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:05.167 16:03:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:05.167 16:03:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:05.167 16:03:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:05.167 16:03:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:05.167 16:03:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:05.167 16:03:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:05.167 16:03:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:05.167 16:03:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:05.167 16:03:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:05.167 16:03:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:05.167 16:03:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:05.167 16:03:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:05.167 16:03:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:05.167 16:03:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:05.167 16:03:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:05.167 16:03:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:05.167 16:03:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:05.167 16:03:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:05.167 16:03:33 
nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:05.167 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:05.167 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.170 ms 00:22:05.167 00:22:05.167 --- 10.0.0.2 ping statistics --- 00:22:05.167 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:05.167 rtt min/avg/max/mdev = 0.170/0.170/0.170/0.000 ms 00:22:05.167 16:03:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:05.167 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:05.167 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.235 ms 00:22:05.167 00:22:05.167 --- 10.0.0.1 ping statistics --- 00:22:05.167 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:05.167 rtt min/avg/max/mdev = 0.235/0.235/0.235/0.000 ms 00:22:05.167 16:03:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:05.167 16:03:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@422 -- # return 0 00:22:05.167 16:03:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:05.167 16:03:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:05.167 16:03:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:05.167 16:03:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:05.167 16:03:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:05.167 16:03:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:05.167 16:03:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:05.167 16:03:33 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@69 -- # nvmfappstart -m 0xF --wait-for-rpc 00:22:05.167 16:03:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:05.167 16:03:33 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:05.167 16:03:33 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:05.167 16:03:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@481 -- # nvmfpid=3817810 00:22:05.167 16:03:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@482 -- # waitforlisten 3817810 00:22:05.167 16:03:33 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@829 -- # '[' -z 3817810 ']' 00:22:05.168 16:03:33 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:05.168 16:03:33 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:05.168 16:03:33 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:05.168 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:05.168 16:03:33 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:05.168 16:03:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:22:05.168 16:03:33 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:05.168 [2024-07-15 16:03:33.953928] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 
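nvmf_tcp_init above gives each side of the test its own network stack on a single host: port cvl_0_0 is moved into a fresh namespace and becomes the target interface (10.0.0.2), port cvl_0_1 stays in the root namespace as the initiator (10.0.0.1), an iptables rule opens the NVMe/TCP listen port, and the two pings prove reachability in both directions before anything NVMe-related starts. Condensed from the trace (run as root; names and addresses are the ones this run used):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk        # target-side port
    ip addr add 10.0.0.1/24 dev cvl_0_1              # initiator, root ns
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

The target app is then launched inside that namespace, which is what the ip netns exec prefix on the nvmf_tgt command line above is doing.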
00:22:05.168 [2024-07-15 16:03:33.953969] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:05.168 EAL: No free 2048 kB hugepages reported on node 1 00:22:05.168 [2024-07-15 16:03:34.009753] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:05.168 [2024-07-15 16:03:34.091246] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:05.168 [2024-07-15 16:03:34.091282] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:05.168 [2024-07-15 16:03:34.091289] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:05.168 [2024-07-15 16:03:34.091295] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:05.168 [2024-07-15 16:03:34.091300] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:05.168 [2024-07-15 16:03:34.091341] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:05.168 [2024-07-15 16:03:34.091440] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:05.168 [2024-07-15 16:03:34.091460] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:22:05.168 [2024-07-15 16:03:34.091462] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:06.103 16:03:34 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:06.103 16:03:34 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@862 -- # return 0 00:22:06.103 16:03:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:06.103 16:03:34 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:06.103 16:03:34 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:06.103 16:03:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:06.103 16:03:34 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@70 -- # adq_configure_nvmf_target 0 00:22:06.103 16:03:34 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:22:06.103 16:03:34 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:22:06.103 16:03:34 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:06.103 16:03:34 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:06.103 16:03:34 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:06.103 16:03:34 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:22:06.103 16:03:34 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:22:06.103 16:03:34 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:06.103 16:03:34 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:06.103 16:03:34 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:06.103 16:03:34 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:22:06.103 16:03:34 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:06.103 16:03:34 nvmf_tcp.nvmf_perf_adq -- 
common/autotest_common.sh@10 -- # set +x 00:22:06.103 16:03:34 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:06.103 16:03:34 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:22:06.103 16:03:34 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:06.103 16:03:34 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:06.103 [2024-07-15 16:03:34.945394] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:06.103 16:03:34 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:06.103 16:03:34 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:22:06.103 16:03:34 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:06.103 16:03:34 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:06.103 Malloc1 00:22:06.103 16:03:34 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:06.103 16:03:34 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:06.103 16:03:34 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:06.103 16:03:34 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:06.103 16:03:34 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:06.103 16:03:34 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:22:06.103 16:03:34 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:06.103 16:03:34 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:06.103 16:03:34 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:06.103 16:03:34 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:06.103 16:03:34 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:06.103 16:03:34 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:06.103 [2024-07-15 16:03:34.997193] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:06.103 16:03:35 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:06.103 16:03:35 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@74 -- # perfpid=3818058 00:22:06.103 16:03:35 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@75 -- # sleep 2 00:22:06.103 16:03:35 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:22:06.103 EAL: No free 2048 kB hugepages reported on node 1 00:22:08.631 16:03:37 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@77 -- # rpc_cmd nvmf_get_stats 00:22:08.631 16:03:37 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:08.631 16:03:37 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:08.631 16:03:37 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:08.631 16:03:37 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@77 -- # nvmf_stats='{ 00:22:08.631 
"tick_rate": 2300000000, 00:22:08.631 "poll_groups": [ 00:22:08.631 { 00:22:08.631 "name": "nvmf_tgt_poll_group_000", 00:22:08.631 "admin_qpairs": 1, 00:22:08.631 "io_qpairs": 1, 00:22:08.631 "current_admin_qpairs": 1, 00:22:08.631 "current_io_qpairs": 1, 00:22:08.631 "pending_bdev_io": 0, 00:22:08.631 "completed_nvme_io": 20007, 00:22:08.631 "transports": [ 00:22:08.631 { 00:22:08.631 "trtype": "TCP" 00:22:08.631 } 00:22:08.631 ] 00:22:08.631 }, 00:22:08.631 { 00:22:08.631 "name": "nvmf_tgt_poll_group_001", 00:22:08.631 "admin_qpairs": 0, 00:22:08.631 "io_qpairs": 1, 00:22:08.631 "current_admin_qpairs": 0, 00:22:08.631 "current_io_qpairs": 1, 00:22:08.631 "pending_bdev_io": 0, 00:22:08.631 "completed_nvme_io": 20259, 00:22:08.631 "transports": [ 00:22:08.631 { 00:22:08.631 "trtype": "TCP" 00:22:08.631 } 00:22:08.631 ] 00:22:08.631 }, 00:22:08.631 { 00:22:08.631 "name": "nvmf_tgt_poll_group_002", 00:22:08.631 "admin_qpairs": 0, 00:22:08.631 "io_qpairs": 1, 00:22:08.631 "current_admin_qpairs": 0, 00:22:08.631 "current_io_qpairs": 1, 00:22:08.631 "pending_bdev_io": 0, 00:22:08.631 "completed_nvme_io": 20107, 00:22:08.631 "transports": [ 00:22:08.631 { 00:22:08.631 "trtype": "TCP" 00:22:08.631 } 00:22:08.631 ] 00:22:08.631 }, 00:22:08.631 { 00:22:08.631 "name": "nvmf_tgt_poll_group_003", 00:22:08.631 "admin_qpairs": 0, 00:22:08.631 "io_qpairs": 1, 00:22:08.631 "current_admin_qpairs": 0, 00:22:08.631 "current_io_qpairs": 1, 00:22:08.631 "pending_bdev_io": 0, 00:22:08.631 "completed_nvme_io": 20157, 00:22:08.631 "transports": [ 00:22:08.631 { 00:22:08.631 "trtype": "TCP" 00:22:08.631 } 00:22:08.631 ] 00:22:08.631 } 00:22:08.631 ] 00:22:08.631 }' 00:22:08.631 16:03:37 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:22:08.631 16:03:37 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # wc -l 00:22:08.631 16:03:37 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # count=4 00:22:08.631 16:03:37 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@79 -- # [[ 4 -ne 4 ]] 00:22:08.631 16:03:37 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@83 -- # wait 3818058 00:22:16.739 Initializing NVMe Controllers 00:22:16.739 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:16.739 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:22:16.739 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:22:16.739 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:22:16.739 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:22:16.739 Initialization complete. Launching workers. 
00:22:16.739 ======================================================== 00:22:16.739 Latency(us) 00:22:16.739 Device Information : IOPS MiB/s Average min max 00:22:16.739 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 10595.90 41.39 6040.52 2031.87 9639.73 00:22:16.739 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 10701.30 41.80 5980.35 2377.31 9946.69 00:22:16.739 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 10606.00 41.43 6034.08 2188.22 9846.05 00:22:16.739 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 10605.30 41.43 6034.99 1719.48 9932.55 00:22:16.739 ======================================================== 00:22:16.739 Total : 42508.50 166.05 6022.38 1719.48 9946.69 00:22:16.739 00:22:16.739 16:03:45 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@84 -- # nvmftestfini 00:22:16.739 16:03:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:16.739 16:03:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@117 -- # sync 00:22:16.739 16:03:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:16.739 16:03:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@120 -- # set +e 00:22:16.739 16:03:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:16.739 16:03:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:16.739 rmmod nvme_tcp 00:22:16.739 rmmod nvme_fabrics 00:22:16.739 rmmod nvme_keyring 00:22:16.739 16:03:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:16.739 16:03:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@124 -- # set -e 00:22:16.739 16:03:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@125 -- # return 0 00:22:16.739 16:03:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@489 -- # '[' -n 3817810 ']' 00:22:16.739 16:03:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@490 -- # killprocess 3817810 00:22:16.739 16:03:45 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@948 -- # '[' -z 3817810 ']' 00:22:16.739 16:03:45 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@952 -- # kill -0 3817810 00:22:16.739 16:03:45 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@953 -- # uname 00:22:16.739 16:03:45 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:16.739 16:03:45 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3817810 00:22:16.739 16:03:45 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:22:16.739 16:03:45 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:22:16.739 16:03:45 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3817810' 00:22:16.739 killing process with pid 3817810 00:22:16.739 16:03:45 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@967 -- # kill 3817810 00:22:16.739 16:03:45 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@972 -- # wait 3817810 00:22:16.739 16:03:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:16.739 16:03:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:16.739 16:03:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:16.739 16:03:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:16.739 16:03:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:16.739 16:03:45 nvmf_tcp.nvmf_perf_adq -- 
nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:16.739 16:03:45 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:16.739 16:03:45 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:18.638 16:03:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:18.638 16:03:47 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@86 -- # adq_reload_driver 00:22:18.638 16:03:47 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@53 -- # rmmod ice 00:22:20.011 16:03:48 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@54 -- # modprobe ice 00:22:21.944 16:03:50 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@55 -- # sleep 5 00:22:27.208 16:03:55 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@89 -- # nvmftestinit 00:22:27.208 16:03:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:27.208 16:03:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:27.208 16:03:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:27.208 16:03:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:27.208 16:03:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:27.208 16:03:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:27.208 16:03:55 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:27.208 16:03:55 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:27.208 16:03:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:27.208 16:03:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:27.208 16:03:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:22:27.208 16:03:55 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:27.208 16:03:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:27.208 16:03:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:22:27.208 16:03:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:27.208 16:03:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:27.208 16:03:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:27.208 16:03:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:27.208 16:03:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:27.208 16:03:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:22:27.208 16:03:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:27.208 16:03:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:22:27.208 16:03:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:22:27.208 16:03:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:22:27.208 16:03:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:22:27.208 16:03:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:22:27.208 16:03:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:22:27.208 16:03:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:27.208 16:03:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:27.208 16:03:55 
nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:27.208 16:03:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:27.208 16:03:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:27.208 16:03:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:27.208 16:03:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:27.208 16:03:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:27.208 16:03:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:27.208 16:03:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:27.208 16:03:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:27.208 16:03:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:27.208 16:03:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:27.208 16:03:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:27.208 16:03:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:27.208 16:03:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:27.208 16:03:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:27.208 16:03:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:27.208 16:03:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:22:27.208 Found 0000:86:00.0 (0x8086 - 0x159b) 00:22:27.208 16:03:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:27.208 16:03:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:27.208 16:03:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:27.208 16:03:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:27.208 16:03:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:27.208 16:03:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:27.208 16:03:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:22:27.208 Found 0000:86:00.1 (0x8086 - 0x159b) 00:22:27.208 16:03:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:27.208 16:03:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:27.208 16:03:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:27.208 16:03:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:27.208 16:03:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:27.208 16:03:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:27.208 16:03:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:27.208 16:03:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:27.208 16:03:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:27.208 16:03:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
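Between the two halves of the test, nvmftestfini above unwinds everything in reverse: it kills and waits on the nvmf_tgt pid recorded at startup (3817810 in this run), unloads the initiator modules (the bare rmmod nvme_tcp / nvme_fabrics / nvme_keyring lines are modprobe -r's output), flushes the test address, and deletes the namespace (remove_spdk_ns runs with its tracing squelched, so only the wrapper shows); adq_reload_driver then bounces ice so the ADQ half starts from a clean driver state. Roughly, with this run's pid:

    kill 3817810 && wait 3817810           # killprocess $nvmfpid
    modprobe -v -r nvme-tcp                # also drops dependent modules
    modprobe -v -r nvme-fabrics
    ip -4 addr flush cvl_0_1
    rmmod ice && modprobe ice && sleep 5   # adq_reload_driver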
00:22:27.208 16:03:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:27.208 16:03:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:27.208 16:03:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:27.208 16:03:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:27.208 16:03:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:27.208 16:03:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:22:27.208 Found net devices under 0000:86:00.0: cvl_0_0 00:22:27.208 16:03:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:27.208 16:03:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:27.208 16:03:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:27.208 16:03:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:27.208 16:03:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:27.208 16:03:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:27.208 16:03:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:27.208 16:03:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:27.208 16:03:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:22:27.208 Found net devices under 0000:86:00.1: cvl_0_1 00:22:27.208 16:03:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:27.208 16:03:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:27.208 16:03:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # is_hw=yes 00:22:27.208 16:03:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:27.208 16:03:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:27.208 16:03:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:27.208 16:03:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:27.208 16:03:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:27.208 16:03:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:27.208 16:03:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:27.208 16:03:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:27.208 16:03:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:27.208 16:03:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:27.208 16:03:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:27.208 16:03:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:27.208 16:03:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:27.208 16:03:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:27.208 16:03:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:27.208 16:03:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:27.208 
16:03:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:27.208 16:03:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:27.208 16:03:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:27.208 16:03:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:27.208 16:03:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:27.208 16:03:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:27.208 16:03:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:27.208 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:27.208 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.311 ms 00:22:27.208 00:22:27.208 --- 10.0.0.2 ping statistics --- 00:22:27.208 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:27.208 rtt min/avg/max/mdev = 0.311/0.311/0.311/0.000 ms 00:22:27.208 16:03:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:27.208 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:27.208 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.112 ms 00:22:27.208 00:22:27.208 --- 10.0.0.1 ping statistics --- 00:22:27.208 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:27.208 rtt min/avg/max/mdev = 0.112/0.112/0.112/0.000 ms 00:22:27.208 16:03:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:27.208 16:03:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@422 -- # return 0 00:22:27.208 16:03:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:27.208 16:03:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:27.208 16:03:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:27.208 16:03:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:27.208 16:03:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:27.209 16:03:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:27.209 16:03:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:27.209 16:03:55 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@90 -- # adq_configure_driver 00:22:27.209 16:03:55 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:22:27.209 16:03:55 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:22:27.209 16:03:55 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:22:27.209 net.core.busy_poll = 1 00:22:27.209 16:03:55 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 00:22:27.209 net.core.busy_read = 1 00:22:27.209 16:03:55 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:22:27.209 16:03:55 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:22:27.209 16:03:56 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc 
add dev cvl_0_0 ingress 00:22:27.209 16:03:56 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:22:27.209 16:03:56 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:22:27.209 16:03:56 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@91 -- # nvmfappstart -m 0xF --wait-for-rpc 00:22:27.209 16:03:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:27.209 16:03:56 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:27.209 16:03:56 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:27.209 16:03:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:22:27.209 16:03:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@481 -- # nvmfpid=3821831 00:22:27.209 16:03:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@482 -- # waitforlisten 3821831 00:22:27.209 16:03:56 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@829 -- # '[' -z 3821831 ']' 00:22:27.209 16:03:56 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:27.209 16:03:56 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:27.209 16:03:56 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:27.209 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:27.209 16:03:56 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:27.209 16:03:56 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:27.466 [2024-07-15 16:03:56.168483] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 00:22:27.466 [2024-07-15 16:03:56.168531] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:27.466 EAL: No free 2048 kB hugepages reported on node 1 00:22:27.466 [2024-07-15 16:03:56.230698] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:27.466 [2024-07-15 16:03:56.312201] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:27.466 [2024-07-15 16:03:56.312239] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:27.466 [2024-07-15 16:03:56.312246] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:27.466 [2024-07-15 16:03:56.312253] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:27.466 [2024-07-15 16:03:56.312258] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
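This is the step the whole test exists for: adq_configure_driver above switches the target port over to application device queues. In order, it enables hardware TC offload, turns off the ice driver's channel-pkt-inspect-optimize private flag, enables kernel busy polling, splits the port's queues into two traffic classes with an mqprio qdisc in channel mode, and installs a hardware-only (skip_sw) flower filter that steers NVMe/TCP traffic for 10.0.0.2:4420 into TC 1; set_xps_rxqs then aligns transmit queues with the matching receive queues. Condensed from the trace, with every device-touching command on the namespaced target port:

    ns() { ip netns exec cvl_0_0_ns_spdk "$@"; }
    ns ethtool --offload cvl_0_0 hw-tc-offload on
    ns ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off
    sysctl -w net.core.busy_poll=1
    sysctl -w net.core.busy_read=1
    ns tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 \
        queues 2@0 2@2 hw 1 mode channel
    ns tc qdisc add dev cvl_0_0 ingress
    ns tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower \
        dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1

The SPDK-side pairing is --enable-placement-id 1 on the posix sock layer and --sock-priority 1 on the TCP transport (both visible below), so connections are grouped by hardware queue rather than round-robin; that is why the second stats dump shows IO qpairs doubled up on fewer poll groups.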
00:22:27.466 [2024-07-15 16:03:56.312298] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:27.466 [2024-07-15 16:03:56.312388] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:27.466 [2024-07-15 16:03:56.312702] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:22:27.466 [2024-07-15 16:03:56.312704] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:28.398 16:03:56 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:28.398 16:03:56 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@862 -- # return 0 00:22:28.398 16:03:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:28.398 16:03:56 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:28.398 16:03:56 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:28.398 16:03:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:28.398 16:03:57 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@92 -- # adq_configure_nvmf_target 1 00:22:28.398 16:03:57 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:22:28.398 16:03:57 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:22:28.398 16:03:57 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:28.398 16:03:57 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:28.398 16:03:57 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:28.398 16:03:57 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:22:28.398 16:03:57 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:22:28.398 16:03:57 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:28.398 16:03:57 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:28.398 16:03:57 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:28.398 16:03:57 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:22:28.398 16:03:57 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:28.398 16:03:57 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:28.398 16:03:57 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:28.398 16:03:57 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:22:28.398 16:03:57 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:28.398 16:03:57 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:28.398 [2024-07-15 16:03:57.180924] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:28.398 16:03:57 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:28.398 16:03:57 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:22:28.398 16:03:57 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:28.398 16:03:57 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:28.398 Malloc1 00:22:28.398 16:03:57 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:28.398 16:03:57 
nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:28.398 16:03:57 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:28.398 16:03:57 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:28.398 16:03:57 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:28.398 16:03:57 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:22:28.398 16:03:57 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:28.398 16:03:57 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:28.398 16:03:57 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:28.398 16:03:57 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:28.398 16:03:57 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:28.398 16:03:57 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:28.398 [2024-07-15 16:03:57.224371] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:28.398 16:03:57 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:28.398 16:03:57 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@96 -- # perfpid=3822090 00:22:28.398 16:03:57 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@97 -- # sleep 2 00:22:28.398 16:03:57 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:22:28.398 EAL: No free 2048 kB hugepages reported on node 1 00:22:30.927 16:03:59 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@99 -- # rpc_cmd nvmf_get_stats 00:22:30.927 16:03:59 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:30.927 16:03:59 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:30.927 16:03:59 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:30.927 16:03:59 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@99 -- # nvmf_stats='{ 00:22:30.927 "tick_rate": 2300000000, 00:22:30.927 "poll_groups": [ 00:22:30.927 { 00:22:30.927 "name": "nvmf_tgt_poll_group_000", 00:22:30.927 "admin_qpairs": 1, 00:22:30.927 "io_qpairs": 2, 00:22:30.927 "current_admin_qpairs": 1, 00:22:30.927 "current_io_qpairs": 2, 00:22:30.927 "pending_bdev_io": 0, 00:22:30.927 "completed_nvme_io": 28618, 00:22:30.927 "transports": [ 00:22:30.927 { 00:22:30.927 "trtype": "TCP" 00:22:30.927 } 00:22:30.927 ] 00:22:30.927 }, 00:22:30.927 { 00:22:30.927 "name": "nvmf_tgt_poll_group_001", 00:22:30.927 "admin_qpairs": 0, 00:22:30.927 "io_qpairs": 2, 00:22:30.927 "current_admin_qpairs": 0, 00:22:30.927 "current_io_qpairs": 2, 00:22:30.927 "pending_bdev_io": 0, 00:22:30.927 "completed_nvme_io": 27859, 00:22:30.927 "transports": [ 00:22:30.927 { 00:22:30.927 "trtype": "TCP" 00:22:30.927 } 00:22:30.927 ] 00:22:30.927 }, 00:22:30.927 { 00:22:30.927 "name": "nvmf_tgt_poll_group_002", 00:22:30.927 "admin_qpairs": 0, 00:22:30.927 "io_qpairs": 0, 00:22:30.927 "current_admin_qpairs": 0, 00:22:30.927 "current_io_qpairs": 0, 00:22:30.927 "pending_bdev_io": 0, 00:22:30.927 "completed_nvme_io": 0, 
00:22:30.927 "transports": [ 00:22:30.927 { 00:22:30.927 "trtype": "TCP" 00:22:30.927 } 00:22:30.927 ] 00:22:30.927 }, 00:22:30.927 { 00:22:30.927 "name": "nvmf_tgt_poll_group_003", 00:22:30.927 "admin_qpairs": 0, 00:22:30.927 "io_qpairs": 0, 00:22:30.927 "current_admin_qpairs": 0, 00:22:30.927 "current_io_qpairs": 0, 00:22:30.927 "pending_bdev_io": 0, 00:22:30.927 "completed_nvme_io": 0, 00:22:30.927 "transports": [ 00:22:30.927 { 00:22:30.927 "trtype": "TCP" 00:22:30.927 } 00:22:30.927 ] 00:22:30.927 } 00:22:30.927 ] 00:22:30.927 }' 00:22:30.927 16:03:59 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@100 -- # wc -l 00:22:30.927 16:03:59 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@100 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:22:30.927 16:03:59 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@100 -- # count=2 00:22:30.927 16:03:59 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@101 -- # [[ 2 -lt 2 ]] 00:22:30.927 16:03:59 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@106 -- # wait 3822090 00:22:39.034 Initializing NVMe Controllers 00:22:39.034 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:39.034 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:22:39.034 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:22:39.034 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:22:39.034 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:22:39.034 Initialization complete. Launching workers. 00:22:39.034 ======================================================== 00:22:39.034 Latency(us) 00:22:39.034 Device Information : IOPS MiB/s Average min max 00:22:39.034 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 8745.90 34.16 7340.78 1328.08 52864.33 00:22:39.034 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 8820.60 34.46 7257.25 1341.63 51484.99 00:22:39.034 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 6458.10 25.23 9939.18 1508.73 53032.29 00:22:39.034 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 6305.60 24.63 10152.24 1438.82 53238.97 00:22:39.034 ======================================================== 00:22:39.034 Total : 30330.20 118.48 8454.25 1328.08 53238.97 00:22:39.034 00:22:39.034 16:04:07 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@107 -- # nvmftestfini 00:22:39.034 16:04:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:39.034 16:04:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@117 -- # sync 00:22:39.034 16:04:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:39.034 16:04:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@120 -- # set +e 00:22:39.034 16:04:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:39.034 16:04:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:39.034 rmmod nvme_tcp 00:22:39.034 rmmod nvme_fabrics 00:22:39.034 rmmod nvme_keyring 00:22:39.034 16:04:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:39.034 16:04:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@124 -- # set -e 00:22:39.034 16:04:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@125 -- # return 0 00:22:39.034 16:04:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@489 -- # '[' -n 3821831 ']' 00:22:39.034 16:04:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@490 -- # 
killprocess 3821831 00:22:39.034 16:04:07 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@948 -- # '[' -z 3821831 ']' 00:22:39.034 16:04:07 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@952 -- # kill -0 3821831 00:22:39.034 16:04:07 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@953 -- # uname 00:22:39.034 16:04:07 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:39.034 16:04:07 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3821831 00:22:39.034 16:04:07 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:22:39.034 16:04:07 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:22:39.034 16:04:07 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3821831' 00:22:39.034 killing process with pid 3821831 00:22:39.034 16:04:07 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@967 -- # kill 3821831 00:22:39.034 16:04:07 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@972 -- # wait 3821831 00:22:39.034 16:04:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:39.034 16:04:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:39.034 16:04:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:39.034 16:04:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:39.034 16:04:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:39.034 16:04:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:39.034 16:04:07 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:39.034 16:04:07 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:40.935 16:04:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:40.935 16:04:09 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:22:40.935 00:22:40.935 real 0m49.367s 00:22:40.935 user 2m49.584s 00:22:40.935 sys 0m9.059s 00:22:40.935 16:04:09 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@1124 -- # xtrace_disable 00:22:40.935 16:04:09 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:40.935 ************************************ 00:22:40.935 END TEST nvmf_perf_adq 00:22:40.935 ************************************ 00:22:40.935 16:04:09 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:22:40.935 16:04:09 nvmf_tcp -- nvmf/nvmf.sh@83 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:22:40.935 16:04:09 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:22:40.935 16:04:09 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:40.935 16:04:09 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:40.935 ************************************ 00:22:40.935 START TEST nvmf_shutdown 00:22:40.935 ************************************ 00:22:40.935 16:04:09 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:22:41.194 * Looking for test storage... 
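
Before the shutdown suite output takes over, the pass gate of the nvmf_perf_adq run above is worth unpacking. The target-side setup is a short RPC sequence, and the gate simply counts poll groups that stayed idle while spdk_nvme_perf ran on cores 4-7 (-c 0xF0). A sketch of both, assuming rpc_cmd maps to scripts/rpc.py against the default /var/tmp/spdk.sock and that the nvmf_get_stats JSON shown above is saved as stats.json:

# (1) ADQ-flavoured target setup, as issued above via rpc_cmd.
rpc=scripts/rpc.py
$rpc sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix
$rpc framework_start_init
$rpc nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1
$rpc bdev_malloc_create 64 512 -b Malloc1
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

# (2) Pass gate: each idle poll group (current_io_qpairs == 0) emits one line
# via `length`, so wc -l counts idle groups. With ADQ steering the four I/O
# qpairs onto groups 000/001 only, groups 002/003 must stay idle.
idle=$(jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' stats.json | wc -l)
[[ $idle -lt 2 ]] && { echo "ADQ steering ineffective: $idle idle groups" >&2; exit 1; }
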
00:22:41.194 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:22:41.194 16:04:09 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:41.194 16:04:09 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 00:22:41.194 16:04:09 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:41.194 16:04:09 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:41.194 16:04:09 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:41.194 16:04:09 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:41.194 16:04:09 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:41.194 16:04:09 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:41.194 16:04:09 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:41.194 16:04:09 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:41.194 16:04:09 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:41.194 16:04:09 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:41.194 16:04:09 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:22:41.194 16:04:09 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:22:41.194 16:04:09 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:41.194 16:04:09 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:41.194 16:04:09 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:41.194 16:04:09 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:41.194 16:04:09 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:41.194 16:04:09 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:41.194 16:04:09 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:41.194 16:04:09 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:41.194 16:04:09 nvmf_tcp.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:41.194 16:04:09 nvmf_tcp.nvmf_shutdown -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:41.194 16:04:09 nvmf_tcp.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:41.194 16:04:09 nvmf_tcp.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:22:41.194 16:04:09 nvmf_tcp.nvmf_shutdown -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:41.194 16:04:09 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@47 -- # : 0 00:22:41.194 16:04:09 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:41.194 16:04:09 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:41.194 16:04:09 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:41.194 16:04:09 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:41.194 16:04:09 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:41.194 16:04:09 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:41.194 16:04:09 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:41.194 16:04:09 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:41.194 16:04:09 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@11 -- # MALLOC_BDEV_SIZE=64 00:22:41.194 16:04:09 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:22:41.194 16:04:09 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@147 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:22:41.194 16:04:09 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:22:41.194 16:04:09 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:41.194 16:04:09 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:22:41.194 ************************************ 00:22:41.194 START TEST nvmf_shutdown_tc1 00:22:41.194 ************************************ 00:22:41.194 16:04:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1123 -- # nvmf_shutdown_tc1 00:22:41.194 16:04:10 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@74 -- # starttarget 00:22:41.194 16:04:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@15 -- # nvmftestinit 00:22:41.194 16:04:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:41.194 16:04:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:41.194 16:04:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:41.194 16:04:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:41.194 16:04:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:41.194 16:04:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:41.194 16:04:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:41.194 16:04:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:41.194 16:04:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:41.194 16:04:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:41.194 16:04:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@285 -- # xtrace_disable 00:22:41.194 16:04:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:46.466 16:04:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:46.466 16:04:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # pci_devs=() 00:22:46.466 16:04:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:46.466 16:04:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:46.466 16:04:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:46.466 16:04:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:46.466 16:04:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:46.466 16:04:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@295 -- # net_devs=() 00:22:46.466 16:04:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:46.466 16:04:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # e810=() 00:22:46.466 16:04:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # local -ga e810 00:22:46.466 16:04:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # x722=() 00:22:46.466 16:04:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # local -ga x722 00:22:46.466 16:04:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # mlx=() 00:22:46.466 16:04:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # local -ga mlx 00:22:46.466 16:04:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:46.466 16:04:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:46.466 16:04:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@304 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:46.466 16:04:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:46.466 16:04:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:46.466 16:04:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:46.466 16:04:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:46.466 16:04:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:46.466 16:04:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:46.466 16:04:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:46.466 16:04:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:46.466 16:04:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:46.466 16:04:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:46.466 16:04:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:46.466 16:04:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:46.466 16:04:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:46.466 16:04:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:46.467 16:04:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:46.467 16:04:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:22:46.467 Found 0000:86:00.0 (0x8086 - 0x159b) 00:22:46.467 16:04:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:46.467 16:04:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:46.467 16:04:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:46.467 16:04:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:46.467 16:04:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:46.467 16:04:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:46.467 16:04:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:22:46.467 Found 0000:86:00.1 (0x8086 - 0x159b) 00:22:46.467 16:04:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:46.467 16:04:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:46.467 16:04:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:46.467 16:04:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:46.467 16:04:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:46.467 16:04:14 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:46.467 16:04:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:46.467 16:04:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:46.467 16:04:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:46.467 16:04:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:46.467 16:04:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:46.467 16:04:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:46.467 16:04:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:46.467 16:04:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:46.467 16:04:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:46.467 16:04:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:22:46.467 Found net devices under 0000:86:00.0: cvl_0_0 00:22:46.467 16:04:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:46.467 16:04:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:46.467 16:04:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:46.467 16:04:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:46.467 16:04:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:46.467 16:04:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:46.467 16:04:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:46.467 16:04:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:46.467 16:04:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:22:46.467 Found net devices under 0000:86:00.1: cvl_0_1 00:22:46.467 16:04:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:46.467 16:04:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:46.467 16:04:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # is_hw=yes 00:22:46.467 16:04:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:46.467 16:04:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:46.467 16:04:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:46.467 16:04:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:46.467 16:04:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:46.467 16:04:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:46.467 16:04:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 
-- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:46.467 16:04:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:46.467 16:04:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:46.467 16:04:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:46.467 16:04:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:46.467 16:04:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:46.467 16:04:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:46.467 16:04:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:46.467 16:04:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:46.467 16:04:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:46.467 16:04:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:46.467 16:04:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:46.467 16:04:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:46.467 16:04:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:46.467 16:04:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:46.467 16:04:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:46.467 16:04:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:46.467 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:46.467 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.214 ms 00:22:46.467 00:22:46.467 --- 10.0.0.2 ping statistics --- 00:22:46.467 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:46.467 rtt min/avg/max/mdev = 0.214/0.214/0.214/0.000 ms 00:22:46.467 16:04:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:46.467 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:46.467 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.253 ms 00:22:46.467 00:22:46.467 --- 10.0.0.1 ping statistics --- 00:22:46.467 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:46.467 rtt min/avg/max/mdev = 0.253/0.253/0.253/0.000 ms 00:22:46.467 16:04:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:46.467 16:04:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # return 0 00:22:46.467 16:04:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:46.467 16:04:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:46.467 16:04:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:46.467 16:04:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:46.467 16:04:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:46.467 16:04:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:46.467 16:04:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:46.467 16:04:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:22:46.467 16:04:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:46.467 16:04:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:46.467 16:04:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:46.467 16:04:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@481 -- # nvmfpid=3827077 00:22:46.467 16:04:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@482 -- # waitforlisten 3827077 00:22:46.467 16:04:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:22:46.467 16:04:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@829 -- # '[' -z 3827077 ']' 00:22:46.467 16:04:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:46.467 16:04:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:46.467 16:04:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:46.467 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:46.467 16:04:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:46.467 16:04:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:46.467 [2024-07-15 16:04:15.234026] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 
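
The nvmftestinit plumbing just traced is the same back-to-back rig used for the perf_adq run: the two physical ports of one NIC are cabled to each other, the target-side port is moved into its own network namespace, and reachability is proven with a ping in each direction. Condensed, with the device names and addresses from this run:

# Target port lives in a namespace; initiator port stays in the default netns.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target side
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT         # admit NVMe/TCP
ping -c 1 10.0.0.2                                                   # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                     # target -> initiator
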
00:22:46.467 [2024-07-15 16:04:15.234073] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:46.467 EAL: No free 2048 kB hugepages reported on node 1 00:22:46.467 [2024-07-15 16:04:15.292775] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:46.467 [2024-07-15 16:04:15.373563] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:46.467 [2024-07-15 16:04:15.373598] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:46.467 [2024-07-15 16:04:15.373605] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:46.467 [2024-07-15 16:04:15.373610] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:46.467 [2024-07-15 16:04:15.373616] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:46.467 [2024-07-15 16:04:15.373711] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:46.467 [2024-07-15 16:04:15.373735] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:22:46.467 [2024-07-15 16:04:15.373833] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:46.467 [2024-07-15 16:04:15.373835] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:22:47.400 16:04:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:47.400 16:04:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@862 -- # return 0 00:22:47.400 16:04:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:47.400 16:04:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:47.400 16:04:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:47.400 16:04:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:47.400 16:04:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:47.400 16:04:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:47.400 16:04:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:47.400 [2024-07-15 16:04:16.086287] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:47.400 16:04:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:47.400 16:04:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:22:47.400 16:04:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:22:47.400 16:04:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:47.400 16:04:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:47.400 16:04:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:47.400 16:04:16 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:47.400 16:04:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:22:47.400 16:04:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:47.400 16:04:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:22:47.400 16:04:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:47.400 16:04:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:22:47.400 16:04:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:47.400 16:04:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:22:47.400 16:04:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:47.400 16:04:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:22:47.400 16:04:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:47.400 16:04:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:22:47.400 16:04:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:47.400 16:04:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:22:47.400 16:04:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:47.401 16:04:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:22:47.401 16:04:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:47.401 16:04:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:22:47.401 16:04:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:47.401 16:04:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:22:47.401 16:04:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@35 -- # rpc_cmd 00:22:47.401 16:04:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:47.401 16:04:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:47.401 Malloc1 00:22:47.401 [2024-07-15 16:04:16.181986] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:47.401 Malloc2 00:22:47.401 Malloc3 00:22:47.401 Malloc4 00:22:47.401 Malloc5 00:22:47.658 Malloc6 00:22:47.658 Malloc7 00:22:47.658 Malloc8 00:22:47.658 Malloc9 00:22:47.658 Malloc10 00:22:47.658 16:04:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:47.658 16:04:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:22:47.658 16:04:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:47.658 16:04:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:47.918 16:04:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # perfpid=3827359 00:22:47.918 16:04:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # waitforlisten 3827359 
/var/tmp/bdevperf.sock 00:22:47.918 16:04:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@829 -- # '[' -z 3827359 ']' 00:22:47.918 16:04:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:47.918 16:04:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:22:47.918 16:04:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:47.918 16:04:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:22:47.918 16:04:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:47.918 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:47.918 16:04:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:22:47.918 16:04:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:47.918 16:04:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # local subsystem config 00:22:47.918 16:04:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:47.918 16:04:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:47.918 16:04:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:47.918 { 00:22:47.918 "params": { 00:22:47.918 "name": "Nvme$subsystem", 00:22:47.918 "trtype": "$TEST_TRANSPORT", 00:22:47.918 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:47.918 "adrfam": "ipv4", 00:22:47.918 "trsvcid": "$NVMF_PORT", 00:22:47.918 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:47.918 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:47.918 "hdgst": ${hdgst:-false}, 00:22:47.918 "ddgst": ${ddgst:-false} 00:22:47.918 }, 00:22:47.918 "method": "bdev_nvme_attach_controller" 00:22:47.918 } 00:22:47.918 EOF 00:22:47.918 )") 00:22:47.918 16:04:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:22:47.918 16:04:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:47.918 16:04:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:47.918 { 00:22:47.918 "params": { 00:22:47.918 "name": "Nvme$subsystem", 00:22:47.918 "trtype": "$TEST_TRANSPORT", 00:22:47.918 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:47.918 "adrfam": "ipv4", 00:22:47.918 "trsvcid": "$NVMF_PORT", 00:22:47.918 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:47.918 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:47.918 "hdgst": ${hdgst:-false}, 00:22:47.918 "ddgst": ${ddgst:-false} 00:22:47.918 }, 00:22:47.918 "method": "bdev_nvme_attach_controller" 00:22:47.918 } 00:22:47.918 EOF 00:22:47.918 )") 00:22:47.918 16:04:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:22:47.918 16:04:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:47.918 16:04:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:47.918 { 00:22:47.918 "params": { 00:22:47.918 
"name": "Nvme$subsystem", 00:22:47.918 "trtype": "$TEST_TRANSPORT", 00:22:47.918 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:47.918 "adrfam": "ipv4", 00:22:47.918 "trsvcid": "$NVMF_PORT", 00:22:47.918 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:47.918 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:47.918 "hdgst": ${hdgst:-false}, 00:22:47.918 "ddgst": ${ddgst:-false} 00:22:47.918 }, 00:22:47.918 "method": "bdev_nvme_attach_controller" 00:22:47.918 } 00:22:47.918 EOF 00:22:47.918 )") 00:22:47.918 16:04:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:22:47.918 16:04:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:47.918 16:04:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:47.918 { 00:22:47.918 "params": { 00:22:47.918 "name": "Nvme$subsystem", 00:22:47.918 "trtype": "$TEST_TRANSPORT", 00:22:47.918 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:47.918 "adrfam": "ipv4", 00:22:47.918 "trsvcid": "$NVMF_PORT", 00:22:47.918 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:47.918 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:47.918 "hdgst": ${hdgst:-false}, 00:22:47.918 "ddgst": ${ddgst:-false} 00:22:47.918 }, 00:22:47.918 "method": "bdev_nvme_attach_controller" 00:22:47.918 } 00:22:47.918 EOF 00:22:47.918 )") 00:22:47.918 16:04:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:22:47.918 16:04:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:47.918 16:04:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:47.918 { 00:22:47.918 "params": { 00:22:47.918 "name": "Nvme$subsystem", 00:22:47.918 "trtype": "$TEST_TRANSPORT", 00:22:47.918 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:47.918 "adrfam": "ipv4", 00:22:47.918 "trsvcid": "$NVMF_PORT", 00:22:47.919 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:47.919 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:47.919 "hdgst": ${hdgst:-false}, 00:22:47.919 "ddgst": ${ddgst:-false} 00:22:47.919 }, 00:22:47.919 "method": "bdev_nvme_attach_controller" 00:22:47.919 } 00:22:47.919 EOF 00:22:47.919 )") 00:22:47.919 16:04:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:22:47.919 16:04:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:47.919 16:04:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:47.919 { 00:22:47.919 "params": { 00:22:47.919 "name": "Nvme$subsystem", 00:22:47.919 "trtype": "$TEST_TRANSPORT", 00:22:47.919 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:47.919 "adrfam": "ipv4", 00:22:47.919 "trsvcid": "$NVMF_PORT", 00:22:47.919 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:47.919 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:47.919 "hdgst": ${hdgst:-false}, 00:22:47.919 "ddgst": ${ddgst:-false} 00:22:47.919 }, 00:22:47.919 "method": "bdev_nvme_attach_controller" 00:22:47.919 } 00:22:47.919 EOF 00:22:47.919 )") 00:22:47.919 16:04:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:22:47.919 [2024-07-15 16:04:16.645529] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 
00:22:47.919 [2024-07-15 16:04:16.645578] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:22:47.919 16:04:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:47.919 16:04:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:47.919 { 00:22:47.919 "params": { 00:22:47.919 "name": "Nvme$subsystem", 00:22:47.919 "trtype": "$TEST_TRANSPORT", 00:22:47.919 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:47.919 "adrfam": "ipv4", 00:22:47.919 "trsvcid": "$NVMF_PORT", 00:22:47.919 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:47.919 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:47.919 "hdgst": ${hdgst:-false}, 00:22:47.919 "ddgst": ${ddgst:-false} 00:22:47.919 }, 00:22:47.919 "method": "bdev_nvme_attach_controller" 00:22:47.919 } 00:22:47.919 EOF 00:22:47.919 )") 00:22:47.919 16:04:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:22:47.919 16:04:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:47.919 16:04:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:47.919 { 00:22:47.919 "params": { 00:22:47.919 "name": "Nvme$subsystem", 00:22:47.919 "trtype": "$TEST_TRANSPORT", 00:22:47.919 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:47.919 "adrfam": "ipv4", 00:22:47.919 "trsvcid": "$NVMF_PORT", 00:22:47.919 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:47.919 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:47.919 "hdgst": ${hdgst:-false}, 00:22:47.919 "ddgst": ${ddgst:-false} 00:22:47.919 }, 00:22:47.919 "method": "bdev_nvme_attach_controller" 00:22:47.919 } 00:22:47.919 EOF 00:22:47.919 )") 00:22:47.919 16:04:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:22:47.919 16:04:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:47.919 16:04:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:47.919 { 00:22:47.919 "params": { 00:22:47.919 "name": "Nvme$subsystem", 00:22:47.919 "trtype": "$TEST_TRANSPORT", 00:22:47.919 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:47.919 "adrfam": "ipv4", 00:22:47.919 "trsvcid": "$NVMF_PORT", 00:22:47.919 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:47.919 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:47.919 "hdgst": ${hdgst:-false}, 00:22:47.919 "ddgst": ${ddgst:-false} 00:22:47.919 }, 00:22:47.919 "method": "bdev_nvme_attach_controller" 00:22:47.919 } 00:22:47.919 EOF 00:22:47.919 )") 00:22:47.919 16:04:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:22:47.919 16:04:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:47.919 16:04:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:47.919 { 00:22:47.919 "params": { 00:22:47.919 "name": "Nvme$subsystem", 00:22:47.919 "trtype": "$TEST_TRANSPORT", 00:22:47.919 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:47.919 "adrfam": "ipv4", 00:22:47.919 "trsvcid": "$NVMF_PORT", 00:22:47.919 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:47.919 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:47.919 "hdgst": ${hdgst:-false}, 
00:22:47.919 "ddgst": ${ddgst:-false} 00:22:47.919 }, 00:22:47.919 "method": "bdev_nvme_attach_controller" 00:22:47.919 } 00:22:47.919 EOF 00:22:47.919 )") 00:22:47.919 16:04:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:22:47.919 EAL: No free 2048 kB hugepages reported on node 1 00:22:47.919 16:04:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # jq . 00:22:47.919 16:04:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:22:47.919 16:04:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:22:47.919 "params": { 00:22:47.919 "name": "Nvme1", 00:22:47.919 "trtype": "tcp", 00:22:47.919 "traddr": "10.0.0.2", 00:22:47.919 "adrfam": "ipv4", 00:22:47.919 "trsvcid": "4420", 00:22:47.919 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:47.919 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:47.919 "hdgst": false, 00:22:47.919 "ddgst": false 00:22:47.919 }, 00:22:47.919 "method": "bdev_nvme_attach_controller" 00:22:47.919 },{ 00:22:47.919 "params": { 00:22:47.919 "name": "Nvme2", 00:22:47.919 "trtype": "tcp", 00:22:47.919 "traddr": "10.0.0.2", 00:22:47.919 "adrfam": "ipv4", 00:22:47.919 "trsvcid": "4420", 00:22:47.919 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:22:47.919 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:22:47.919 "hdgst": false, 00:22:47.919 "ddgst": false 00:22:47.919 }, 00:22:47.919 "method": "bdev_nvme_attach_controller" 00:22:47.919 },{ 00:22:47.919 "params": { 00:22:47.919 "name": "Nvme3", 00:22:47.919 "trtype": "tcp", 00:22:47.919 "traddr": "10.0.0.2", 00:22:47.919 "adrfam": "ipv4", 00:22:47.919 "trsvcid": "4420", 00:22:47.919 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:22:47.919 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:22:47.919 "hdgst": false, 00:22:47.919 "ddgst": false 00:22:47.919 }, 00:22:47.919 "method": "bdev_nvme_attach_controller" 00:22:47.919 },{ 00:22:47.919 "params": { 00:22:47.919 "name": "Nvme4", 00:22:47.919 "trtype": "tcp", 00:22:47.919 "traddr": "10.0.0.2", 00:22:47.919 "adrfam": "ipv4", 00:22:47.919 "trsvcid": "4420", 00:22:47.919 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:22:47.919 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:22:47.919 "hdgst": false, 00:22:47.919 "ddgst": false 00:22:47.919 }, 00:22:47.919 "method": "bdev_nvme_attach_controller" 00:22:47.919 },{ 00:22:47.919 "params": { 00:22:47.919 "name": "Nvme5", 00:22:47.919 "trtype": "tcp", 00:22:47.919 "traddr": "10.0.0.2", 00:22:47.919 "adrfam": "ipv4", 00:22:47.919 "trsvcid": "4420", 00:22:47.919 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:22:47.919 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:22:47.919 "hdgst": false, 00:22:47.919 "ddgst": false 00:22:47.919 }, 00:22:47.919 "method": "bdev_nvme_attach_controller" 00:22:47.919 },{ 00:22:47.919 "params": { 00:22:47.919 "name": "Nvme6", 00:22:47.919 "trtype": "tcp", 00:22:47.919 "traddr": "10.0.0.2", 00:22:47.919 "adrfam": "ipv4", 00:22:47.919 "trsvcid": "4420", 00:22:47.919 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:22:47.919 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:22:47.919 "hdgst": false, 00:22:47.919 "ddgst": false 00:22:47.919 }, 00:22:47.919 "method": "bdev_nvme_attach_controller" 00:22:47.919 },{ 00:22:47.919 "params": { 00:22:47.919 "name": "Nvme7", 00:22:47.919 "trtype": "tcp", 00:22:47.919 "traddr": "10.0.0.2", 00:22:47.919 "adrfam": "ipv4", 00:22:47.919 "trsvcid": "4420", 00:22:47.919 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:22:47.919 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:22:47.919 "hdgst": false, 00:22:47.919 "ddgst": false 
00:22:47.919 }, 00:22:47.919 "method": "bdev_nvme_attach_controller" 00:22:47.919 },{ 00:22:47.919 "params": { 00:22:47.919 "name": "Nvme8", 00:22:47.919 "trtype": "tcp", 00:22:47.919 "traddr": "10.0.0.2", 00:22:47.919 "adrfam": "ipv4", 00:22:47.919 "trsvcid": "4420", 00:22:47.919 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:22:47.919 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:22:47.919 "hdgst": false, 00:22:47.919 "ddgst": false 00:22:47.919 }, 00:22:47.919 "method": "bdev_nvme_attach_controller" 00:22:47.919 },{ 00:22:47.919 "params": { 00:22:47.919 "name": "Nvme9", 00:22:47.919 "trtype": "tcp", 00:22:47.919 "traddr": "10.0.0.2", 00:22:47.919 "adrfam": "ipv4", 00:22:47.919 "trsvcid": "4420", 00:22:47.919 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:22:47.919 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:22:47.919 "hdgst": false, 00:22:47.919 "ddgst": false 00:22:47.919 }, 00:22:47.919 "method": "bdev_nvme_attach_controller" 00:22:47.919 },{ 00:22:47.919 "params": { 00:22:47.919 "name": "Nvme10", 00:22:47.919 "trtype": "tcp", 00:22:47.919 "traddr": "10.0.0.2", 00:22:47.919 "adrfam": "ipv4", 00:22:47.919 "trsvcid": "4420", 00:22:47.919 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:22:47.919 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:22:47.919 "hdgst": false, 00:22:47.919 "ddgst": false 00:22:47.919 }, 00:22:47.919 "method": "bdev_nvme_attach_controller" 00:22:47.919 }' 00:22:47.919 [2024-07-15 16:04:16.701842] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:47.919 [2024-07-15 16:04:16.775557] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:49.325 16:04:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:49.325 16:04:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@862 -- # return 0 00:22:49.325 16:04:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:22:49.325 16:04:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:49.325 16:04:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:49.325 16:04:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:49.325 16:04:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@83 -- # kill -9 3827359 00:22:49.325 16:04:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # rm -f /var/run/spdk_bdev1 00:22:49.325 16:04:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@87 -- # sleep 1 00:22:50.700 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 73: 3827359 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:22:50.700 16:04:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # kill -0 3827077 00:22:50.700 16:04:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:22:50.700 16:04:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@91 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:22:50.700 16:04:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:22:50.700 16:04:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # local subsystem config 
00:22:50.700 16:04:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:50.700 16:04:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:50.700 { 00:22:50.700 "params": { 00:22:50.700 "name": "Nvme$subsystem", 00:22:50.700 "trtype": "$TEST_TRANSPORT", 00:22:50.700 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:50.700 "adrfam": "ipv4", 00:22:50.700 "trsvcid": "$NVMF_PORT", 00:22:50.700 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:50.700 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:50.700 "hdgst": ${hdgst:-false}, 00:22:50.700 "ddgst": ${ddgst:-false} 00:22:50.700 }, 00:22:50.700 "method": "bdev_nvme_attach_controller" 00:22:50.700 } 00:22:50.700 EOF 00:22:50.700 )") 00:22:50.700 16:04:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:22:50.700 16:04:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:50.700 16:04:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:50.700 { 00:22:50.700 "params": { 00:22:50.700 "name": "Nvme$subsystem", 00:22:50.700 "trtype": "$TEST_TRANSPORT", 00:22:50.700 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:50.700 "adrfam": "ipv4", 00:22:50.700 "trsvcid": "$NVMF_PORT", 00:22:50.700 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:50.700 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:50.700 "hdgst": ${hdgst:-false}, 00:22:50.700 "ddgst": ${ddgst:-false} 00:22:50.700 }, 00:22:50.700 "method": "bdev_nvme_attach_controller" 00:22:50.700 } 00:22:50.700 EOF 00:22:50.700 )") 00:22:50.700 16:04:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:22:50.700 16:04:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:50.700 16:04:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:50.700 { 00:22:50.700 "params": { 00:22:50.700 "name": "Nvme$subsystem", 00:22:50.700 "trtype": "$TEST_TRANSPORT", 00:22:50.700 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:50.700 "adrfam": "ipv4", 00:22:50.700 "trsvcid": "$NVMF_PORT", 00:22:50.700 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:50.700 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:50.700 "hdgst": ${hdgst:-false}, 00:22:50.700 "ddgst": ${ddgst:-false} 00:22:50.700 }, 00:22:50.700 "method": "bdev_nvme_attach_controller" 00:22:50.700 } 00:22:50.700 EOF 00:22:50.700 )") 00:22:50.700 16:04:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:22:50.700 16:04:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:50.700 16:04:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:50.700 { 00:22:50.700 "params": { 00:22:50.700 "name": "Nvme$subsystem", 00:22:50.700 "trtype": "$TEST_TRANSPORT", 00:22:50.700 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:50.700 "adrfam": "ipv4", 00:22:50.700 "trsvcid": "$NVMF_PORT", 00:22:50.700 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:50.700 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:50.700 "hdgst": ${hdgst:-false}, 00:22:50.700 "ddgst": ${ddgst:-false} 00:22:50.700 }, 00:22:50.700 "method": "bdev_nvme_attach_controller" 00:22:50.700 } 00:22:50.700 EOF 00:22:50.700 )") 00:22:50.700 16:04:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:22:50.700 16:04:19 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:50.700 16:04:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:50.700 { 00:22:50.700 "params": { 00:22:50.700 "name": "Nvme$subsystem", 00:22:50.700 "trtype": "$TEST_TRANSPORT", 00:22:50.700 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:50.700 "adrfam": "ipv4", 00:22:50.700 "trsvcid": "$NVMF_PORT", 00:22:50.700 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:50.700 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:50.700 "hdgst": ${hdgst:-false}, 00:22:50.700 "ddgst": ${ddgst:-false} 00:22:50.700 }, 00:22:50.700 "method": "bdev_nvme_attach_controller" 00:22:50.700 } 00:22:50.700 EOF 00:22:50.700 )") 00:22:50.700 16:04:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:22:50.700 16:04:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:50.700 16:04:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:50.700 { 00:22:50.700 "params": { 00:22:50.700 "name": "Nvme$subsystem", 00:22:50.700 "trtype": "$TEST_TRANSPORT", 00:22:50.700 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:50.700 "adrfam": "ipv4", 00:22:50.700 "trsvcid": "$NVMF_PORT", 00:22:50.700 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:50.700 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:50.700 "hdgst": ${hdgst:-false}, 00:22:50.700 "ddgst": ${ddgst:-false} 00:22:50.700 }, 00:22:50.700 "method": "bdev_nvme_attach_controller" 00:22:50.700 } 00:22:50.700 EOF 00:22:50.700 )") 00:22:50.700 16:04:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:22:50.700 16:04:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:50.700 16:04:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:50.700 { 00:22:50.700 "params": { 00:22:50.700 "name": "Nvme$subsystem", 00:22:50.700 "trtype": "$TEST_TRANSPORT", 00:22:50.700 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:50.700 "adrfam": "ipv4", 00:22:50.700 "trsvcid": "$NVMF_PORT", 00:22:50.700 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:50.700 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:50.700 "hdgst": ${hdgst:-false}, 00:22:50.700 "ddgst": ${ddgst:-false} 00:22:50.700 }, 00:22:50.700 "method": "bdev_nvme_attach_controller" 00:22:50.700 } 00:22:50.700 EOF 00:22:50.700 )") 00:22:50.700 16:04:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:22:50.700 16:04:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:50.700 16:04:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:50.700 { 00:22:50.700 "params": { 00:22:50.700 "name": "Nvme$subsystem", 00:22:50.700 "trtype": "$TEST_TRANSPORT", 00:22:50.700 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:50.700 "adrfam": "ipv4", 00:22:50.700 "trsvcid": "$NVMF_PORT", 00:22:50.700 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:50.700 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:50.700 "hdgst": ${hdgst:-false}, 00:22:50.700 "ddgst": ${ddgst:-false} 00:22:50.700 }, 00:22:50.700 "method": "bdev_nvme_attach_controller" 00:22:50.700 } 00:22:50.700 EOF 00:22:50.700 )") 00:22:50.700 16:04:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:22:50.700 [2024-07-15 16:04:19.274257] Starting 
SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 00:22:50.700 [2024-07-15 16:04:19.274310] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3827841 ] 00:22:50.700 16:04:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:50.700 16:04:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:50.700 { 00:22:50.700 "params": { 00:22:50.700 "name": "Nvme$subsystem", 00:22:50.700 "trtype": "$TEST_TRANSPORT", 00:22:50.700 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:50.700 "adrfam": "ipv4", 00:22:50.700 "trsvcid": "$NVMF_PORT", 00:22:50.700 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:50.700 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:50.700 "hdgst": ${hdgst:-false}, 00:22:50.700 "ddgst": ${ddgst:-false} 00:22:50.700 }, 00:22:50.700 "method": "bdev_nvme_attach_controller" 00:22:50.700 } 00:22:50.700 EOF 00:22:50.700 )") 00:22:50.700 16:04:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:22:50.700 16:04:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:50.700 16:04:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:50.700 { 00:22:50.700 "params": { 00:22:50.700 "name": "Nvme$subsystem", 00:22:50.700 "trtype": "$TEST_TRANSPORT", 00:22:50.700 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:50.700 "adrfam": "ipv4", 00:22:50.700 "trsvcid": "$NVMF_PORT", 00:22:50.700 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:50.700 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:50.700 "hdgst": ${hdgst:-false}, 00:22:50.700 "ddgst": ${ddgst:-false} 00:22:50.700 }, 00:22:50.700 "method": "bdev_nvme_attach_controller" 00:22:50.700 } 00:22:50.700 EOF 00:22:50.700 )") 00:22:50.700 16:04:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:22:50.701 16:04:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # jq . 
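[Annotation] The identical heredoc blocks traced above are gen_nvmf_target_json (nvmf/common.sh) building one bdev_nvme_attach_controller fragment per subsystem; the jq / IFS=, / printf steps that follow emit the merged JSON that bdevperf consumes. A minimal bash sketch of the pattern, assuming the environment defaults visible in the trace; the real helper embeds the joined fragments in a larger bdev-subsystem document that this excerpt does not show:

#!/usr/bin/env bash
# Sketch of the config-assembly pattern in the trace (names mirror the log).
TEST_TRANSPORT=${TEST_TRANSPORT:-tcp}                    # "trtype" in the output
NVMF_FIRST_TARGET_IP=${NVMF_FIRST_TARGET_IP:-10.0.0.2}   # "traddr" in the output
NVMF_PORT=${NVMF_PORT:-4420}                             # "trsvcid" in the output

config=()
for subsystem in "${@:-1}"; do    # called with the subsystem ids, 1..10 here
config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "$TEST_TRANSPORT",
    "traddr": "$NVMF_FIRST_TARGET_IP",
    "adrfam": "ipv4",
    "trsvcid": "$NVMF_PORT",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": ${hdgst:-false},
    "ddgst": ${ddgst:-false}
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
)")
done

# IFS=, makes "${config[*]}" join the fragments with commas; wrapping them in
# [ ] lets jq validate and pretty-print the result, as the @556-558 trace does.
(IFS=,; printf '[%s]\n' "${config[*]}") | jq .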
00:22:50.701 16:04:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:22:50.701 16:04:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:22:50.701 "params": { 00:22:50.701 "name": "Nvme1", 00:22:50.701 "trtype": "tcp", 00:22:50.701 "traddr": "10.0.0.2", 00:22:50.701 "adrfam": "ipv4", 00:22:50.701 "trsvcid": "4420", 00:22:50.701 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:50.701 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:50.701 "hdgst": false, 00:22:50.701 "ddgst": false 00:22:50.701 }, 00:22:50.701 "method": "bdev_nvme_attach_controller" 00:22:50.701 },{ 00:22:50.701 "params": { 00:22:50.701 "name": "Nvme2", 00:22:50.701 "trtype": "tcp", 00:22:50.701 "traddr": "10.0.0.2", 00:22:50.701 "adrfam": "ipv4", 00:22:50.701 "trsvcid": "4420", 00:22:50.701 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:22:50.701 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:22:50.701 "hdgst": false, 00:22:50.701 "ddgst": false 00:22:50.701 }, 00:22:50.701 "method": "bdev_nvme_attach_controller" 00:22:50.701 },{ 00:22:50.701 "params": { 00:22:50.701 "name": "Nvme3", 00:22:50.701 "trtype": "tcp", 00:22:50.701 "traddr": "10.0.0.2", 00:22:50.701 "adrfam": "ipv4", 00:22:50.701 "trsvcid": "4420", 00:22:50.701 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:22:50.701 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:22:50.701 "hdgst": false, 00:22:50.701 "ddgst": false 00:22:50.701 }, 00:22:50.701 "method": "bdev_nvme_attach_controller" 00:22:50.701 },{ 00:22:50.701 "params": { 00:22:50.701 "name": "Nvme4", 00:22:50.701 "trtype": "tcp", 00:22:50.701 "traddr": "10.0.0.2", 00:22:50.701 "adrfam": "ipv4", 00:22:50.701 "trsvcid": "4420", 00:22:50.701 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:22:50.701 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:22:50.701 "hdgst": false, 00:22:50.701 "ddgst": false 00:22:50.701 }, 00:22:50.701 "method": "bdev_nvme_attach_controller" 00:22:50.701 },{ 00:22:50.701 "params": { 00:22:50.701 "name": "Nvme5", 00:22:50.701 "trtype": "tcp", 00:22:50.701 "traddr": "10.0.0.2", 00:22:50.701 "adrfam": "ipv4", 00:22:50.701 "trsvcid": "4420", 00:22:50.701 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:22:50.701 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:22:50.701 "hdgst": false, 00:22:50.701 "ddgst": false 00:22:50.701 }, 00:22:50.701 "method": "bdev_nvme_attach_controller" 00:22:50.701 },{ 00:22:50.701 "params": { 00:22:50.701 "name": "Nvme6", 00:22:50.701 "trtype": "tcp", 00:22:50.701 "traddr": "10.0.0.2", 00:22:50.701 "adrfam": "ipv4", 00:22:50.701 "trsvcid": "4420", 00:22:50.701 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:22:50.701 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:22:50.701 "hdgst": false, 00:22:50.701 "ddgst": false 00:22:50.701 }, 00:22:50.701 "method": "bdev_nvme_attach_controller" 00:22:50.701 },{ 00:22:50.701 "params": { 00:22:50.701 "name": "Nvme7", 00:22:50.701 "trtype": "tcp", 00:22:50.701 "traddr": "10.0.0.2", 00:22:50.701 "adrfam": "ipv4", 00:22:50.701 "trsvcid": "4420", 00:22:50.701 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:22:50.701 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:22:50.701 "hdgst": false, 00:22:50.701 "ddgst": false 00:22:50.701 }, 00:22:50.701 "method": "bdev_nvme_attach_controller" 00:22:50.701 },{ 00:22:50.701 "params": { 00:22:50.701 "name": "Nvme8", 00:22:50.701 "trtype": "tcp", 00:22:50.701 "traddr": "10.0.0.2", 00:22:50.701 "adrfam": "ipv4", 00:22:50.701 "trsvcid": "4420", 00:22:50.701 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:22:50.701 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:22:50.701 "hdgst": false, 
00:22:50.701 "ddgst": false 00:22:50.701 }, 00:22:50.701 "method": "bdev_nvme_attach_controller" 00:22:50.701 },{ 00:22:50.701 "params": { 00:22:50.701 "name": "Nvme9", 00:22:50.701 "trtype": "tcp", 00:22:50.701 "traddr": "10.0.0.2", 00:22:50.701 "adrfam": "ipv4", 00:22:50.701 "trsvcid": "4420", 00:22:50.701 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:22:50.701 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:22:50.701 "hdgst": false, 00:22:50.701 "ddgst": false 00:22:50.701 }, 00:22:50.701 "method": "bdev_nvme_attach_controller" 00:22:50.701 },{ 00:22:50.701 "params": { 00:22:50.701 "name": "Nvme10", 00:22:50.701 "trtype": "tcp", 00:22:50.701 "traddr": "10.0.0.2", 00:22:50.701 "adrfam": "ipv4", 00:22:50.701 "trsvcid": "4420", 00:22:50.701 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:22:50.701 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:22:50.701 "hdgst": false, 00:22:50.701 "ddgst": false 00:22:50.701 }, 00:22:50.701 "method": "bdev_nvme_attach_controller" 00:22:50.701 }' 00:22:50.701 EAL: No free 2048 kB hugepages reported on node 1 00:22:50.701 [2024-07-15 16:04:19.331881] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:50.701 [2024-07-15 16:04:19.406653] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:52.075 Running I/O for 1 seconds... 00:22:53.451 00:22:53.451 Latency(us) 00:22:53.451 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:53.451 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:53.451 Verification LBA range: start 0x0 length 0x400 00:22:53.451 Nvme1n1 : 1.14 281.84 17.61 0.00 0.00 223617.56 16412.49 215186.03 00:22:53.451 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:53.451 Verification LBA range: start 0x0 length 0x400 00:22:53.451 Nvme2n1 : 1.14 281.14 17.57 0.00 0.00 222417.61 17096.35 211538.81 00:22:53.451 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:53.451 Verification LBA range: start 0x0 length 0x400 00:22:53.451 Nvme3n1 : 1.13 284.33 17.77 0.00 0.00 216751.24 18692.01 214274.23 00:22:53.451 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:53.451 Verification LBA range: start 0x0 length 0x400 00:22:53.451 Nvme4n1 : 1.12 285.18 17.82 0.00 0.00 212778.47 14930.81 206979.78 00:22:53.451 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:53.451 Verification LBA range: start 0x0 length 0x400 00:22:53.451 Nvme5n1 : 1.07 243.42 15.21 0.00 0.00 243833.28 5356.86 218833.25 00:22:53.451 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:53.451 Verification LBA range: start 0x0 length 0x400 00:22:53.451 Nvme6n1 : 1.15 278.76 17.42 0.00 0.00 211748.95 17894.18 228863.11 00:22:53.451 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:53.451 Verification LBA range: start 0x0 length 0x400 00:22:53.451 Nvme7n1 : 1.14 280.52 17.53 0.00 0.00 207089.31 17894.18 218833.25 00:22:53.451 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:53.451 Verification LBA range: start 0x0 length 0x400 00:22:53.451 Nvme8n1 : 1.14 283.14 17.70 0.00 0.00 202178.56 13905.03 217009.64 00:22:53.451 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:53.451 Verification LBA range: start 0x0 length 0x400 00:22:53.451 Nvme9n1 : 1.15 277.23 17.33 0.00 0.00 203540.39 15728.64 224304.08 00:22:53.451 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 
00:22:53.451 Verification LBA range: start 0x0 length 0x400 00:22:53.451 Nvme10n1 : 1.15 277.96 17.37 0.00 0.00 199784.72 16184.54 238892.97 00:22:53.451 =================================================================================================================== 00:22:53.451 Total : 2773.50 173.34 0.00 0.00 213805.84 5356.86 238892.97 00:22:53.451 16:04:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@94 -- # stoptarget 00:22:53.451 16:04:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:22:53.451 16:04:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:22:53.451 16:04:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:53.451 16:04:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@45 -- # nvmftestfini 00:22:53.451 16:04:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:53.451 16:04:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # sync 00:22:53.451 16:04:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:53.451 16:04:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@120 -- # set +e 00:22:53.451 16:04:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:53.451 16:04:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:53.451 rmmod nvme_tcp 00:22:53.451 rmmod nvme_fabrics 00:22:53.451 rmmod nvme_keyring 00:22:53.451 16:04:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:53.451 16:04:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set -e 00:22:53.451 16:04:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # return 0 00:22:53.451 16:04:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@489 -- # '[' -n 3827077 ']' 00:22:53.451 16:04:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@490 -- # killprocess 3827077 00:22:53.451 16:04:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@948 -- # '[' -z 3827077 ']' 00:22:53.451 16:04:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@952 -- # kill -0 3827077 00:22:53.451 16:04:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@953 -- # uname 00:22:53.451 16:04:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:53.451 16:04:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3827077 00:22:53.451 16:04:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:22:53.451 16:04:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:22:53.451 16:04:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3827077' 00:22:53.451 killing process with pid 3827077 00:22:53.451 16:04:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@967 -- # kill 3827077 00:22:53.451 16:04:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@972 -- # wait 3827077 
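[Annotation] The stoptarget/nvmftestfini sequence just traced removes the bdevperf state files, unloads nvme_tcp, nvme_fabrics and nvme_keyring, and reaps the tc1 target through killprocess 3827077; the remaining fini steps continue below. An approximate bash reconstruction of killprocess as exercised here (Linux path only, message matching the log): a sketch, not the verbatim helper.

killprocess() {
    local pid=$1 process_name=
    [[ -n $pid ]] || return 1              # trace: '[' -z 3827077 ']'
    kill -0 "$pid" || return 1             # signal 0 only probes liveness
    if [[ $(uname) == Linux ]]; then
        # comm of the pid; reactor_1 in the trace, i.e. an SPDK reactor thread
        process_name=$(ps --no-headers -o comm= "$pid")
    fi
    [[ $process_name != sudo ]] || return 1  # refuse to kill a sudo wrapper
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid"                            # reap the child so ports and pidfiles free up
}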
00:22:54.019 16:04:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:54.019 16:04:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:54.019 16:04:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:54.019 16:04:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:54.019 16:04:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:54.019 16:04:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:54.019 16:04:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:54.019 16:04:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:55.919 16:04:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:55.919 00:22:55.919 real 0m14.768s 00:22:55.919 user 0m34.859s 00:22:55.919 sys 0m5.148s 00:22:55.919 16:04:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:22:55.919 16:04:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:55.919 ************************************ 00:22:55.919 END TEST nvmf_shutdown_tc1 00:22:55.919 ************************************ 00:22:55.919 16:04:24 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1142 -- # return 0 00:22:55.919 16:04:24 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@148 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:22:55.919 16:04:24 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:22:55.919 16:04:24 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:55.919 16:04:24 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:22:56.177 ************************************ 00:22:56.177 START TEST nvmf_shutdown_tc2 00:22:56.177 ************************************ 00:22:56.177 16:04:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1123 -- # nvmf_shutdown_tc2 00:22:56.177 16:04:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@99 -- # starttarget 00:22:56.177 16:04:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@15 -- # nvmftestinit 00:22:56.177 16:04:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:56.177 16:04:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:56.177 16:04:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:56.177 16:04:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:56.177 16:04:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:56.177 16:04:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:56.177 16:04:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:56.177 16:04:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:56.177 16:04:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:56.177 
16:04:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:56.177 16:04:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@285 -- # xtrace_disable 00:22:56.177 16:04:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:56.177 16:04:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:56.177 16:04:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # pci_devs=() 00:22:56.177 16:04:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:56.177 16:04:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:56.177 16:04:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:56.177 16:04:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:56.178 16:04:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:56.178 16:04:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # net_devs=() 00:22:56.178 16:04:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:56.178 16:04:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # e810=() 00:22:56.178 16:04:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # local -ga e810 00:22:56.178 16:04:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # x722=() 00:22:56.178 16:04:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # local -ga x722 00:22:56.178 16:04:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # mlx=() 00:22:56.178 16:04:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # local -ga mlx 00:22:56.178 16:04:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:56.178 16:04:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:56.178 16:04:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:56.178 16:04:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:56.178 16:04:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:56.178 16:04:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:56.178 16:04:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:56.178 16:04:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:56.178 16:04:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:56.178 16:04:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:56.178 16:04:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:56.178 16:04:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:56.178 16:04:24 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:56.178 16:04:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:56.178 16:04:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:56.178 16:04:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:56.178 16:04:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:56.178 16:04:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:56.178 16:04:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:22:56.178 Found 0000:86:00.0 (0x8086 - 0x159b) 00:22:56.178 16:04:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:56.178 16:04:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:56.178 16:04:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:56.178 16:04:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:56.178 16:04:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:56.178 16:04:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:56.178 16:04:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:22:56.178 Found 0000:86:00.1 (0x8086 - 0x159b) 00:22:56.178 16:04:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:56.178 16:04:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:56.178 16:04:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:56.178 16:04:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:56.178 16:04:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:56.178 16:04:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:56.178 16:04:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:56.178 16:04:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:56.178 16:04:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:56.178 16:04:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:56.178 16:04:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:56.178 16:04:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:56.178 16:04:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:56.178 16:04:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:56.178 16:04:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:56.178 16:04:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 
00:22:56.178 Found net devices under 0000:86:00.0: cvl_0_0 00:22:56.178 16:04:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:56.178 16:04:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:56.178 16:04:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:56.178 16:04:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:56.178 16:04:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:56.178 16:04:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:56.178 16:04:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:56.178 16:04:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:56.178 16:04:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:22:56.178 Found net devices under 0000:86:00.1: cvl_0_1 00:22:56.178 16:04:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:56.178 16:04:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:56.178 16:04:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # is_hw=yes 00:22:56.178 16:04:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:56.178 16:04:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:56.178 16:04:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:56.178 16:04:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:56.178 16:04:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:56.178 16:04:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:56.178 16:04:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:56.178 16:04:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:56.178 16:04:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:56.178 16:04:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:56.178 16:04:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:56.178 16:04:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:56.178 16:04:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:56.178 16:04:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:56.178 16:04:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:56.178 16:04:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:56.178 16:04:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 
dev cvl_0_1
00:22:56.178 16:04:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:22:56.178 16:04:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up
00:22:56.178 16:04:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:22:56.436 16:04:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:22:56.436 16:04:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:22:56.436 16:04:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:22:56.436 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:22:56.436 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.166 ms
00:22:56.436
00:22:56.436 --- 10.0.0.2 ping statistics ---
00:22:56.436 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:22:56.436 rtt min/avg/max/mdev = 0.166/0.166/0.166/0.000 ms
00:22:56.436 16:04:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:22:56.436 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:22:56.436 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.190 ms
00:22:56.436
00:22:56.436 --- 10.0.0.1 ping statistics ---
00:22:56.436 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:22:56.436 rtt min/avg/max/mdev = 0.190/0.190/0.190/0.000 ms
00:22:56.436 16:04:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:22:56.436 16:04:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # return 0
00:22:56.436 16:04:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:22:56.436 16:04:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:22:56.436 16:04:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:22:56.436 16:04:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:22:56.436 16:04:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:22:56.436 16:04:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:22:56.436 16:04:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:22:56.436 16:04:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E
00:22:56.436 16:04:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:22:56.436 16:04:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable
00:22:56.436 16:04:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x
00:22:56.436 16:04:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@481 -- # nvmfpid=3828917
00:22:56.436 16:04:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@482 -- # waitforlisten 3828917
00:22:56.436 16:04:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m
0x1E 00:22:56.436 16:04:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@829 -- # '[' -z 3828917 ']' 00:22:56.436 16:04:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:56.436 16:04:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:56.436 16:04:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:56.436 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:56.436 16:04:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:56.436 16:04:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:56.436 [2024-07-15 16:04:25.235428] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 00:22:56.436 [2024-07-15 16:04:25.235470] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:56.436 EAL: No free 2048 kB hugepages reported on node 1 00:22:56.436 [2024-07-15 16:04:25.292863] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:56.694 [2024-07-15 16:04:25.374260] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:56.694 [2024-07-15 16:04:25.374298] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:56.694 [2024-07-15 16:04:25.374305] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:56.694 [2024-07-15 16:04:25.374311] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:56.694 [2024-07-15 16:04:25.374316] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
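[Annotation] The nvmf_tcp_init steps traced before this app start took the two E810 ports discovered by the PCI scan (cvl_0_0 and cvl_0_1) and built a two-namespace loopback topology: the target port moves into a private namespace at 10.0.0.2 while the initiator port stays in the default namespace at 10.0.0.1, which is why the nvmf_tgt command line above is prefixed with ip netns exec cvl_0_0_ns_spdk. Condensed from the exact commands in the trace (root required; the interface names are whatever the scan found on this host):

NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
ip netns add "$NVMF_TARGET_NAMESPACE"
ip link set cvl_0_0 netns "$NVMF_TARGET_NAMESPACE"   # target port leaves the default ns
ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side
ip netns exec "$NVMF_TARGET_NAMESPACE" ip addr add 10.0.0.2/24 dev cvl_0_0   # target side
ip link set cvl_0_1 up
ip netns exec "$NVMF_TARGET_NAMESPACE" ip link set cvl_0_0 up
ip netns exec "$NVMF_TARGET_NAMESPACE" ip link set lo up
# admit NVMe/TCP traffic on the initiator port, then prove reachability both ways
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2
ip netns exec "$NVMF_TARGET_NAMESPACE" ping -c 1 10.0.0.1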
00:22:56.694 [2024-07-15 16:04:25.374357] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:56.694 [2024-07-15 16:04:25.374383] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:22:56.694 [2024-07-15 16:04:25.374490] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:56.694 [2024-07-15 16:04:25.374491] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:22:57.258 16:04:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:57.258 16:04:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@862 -- # return 0 00:22:57.258 16:04:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:57.258 16:04:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:57.258 16:04:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:57.258 16:04:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:57.258 16:04:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:57.258 16:04:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:57.258 16:04:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:57.258 [2024-07-15 16:04:26.076233] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:57.258 16:04:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:57.258 16:04:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:22:57.258 16:04:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:22:57.258 16:04:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:57.258 16:04:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:57.258 16:04:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:57.258 16:04:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:57.258 16:04:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:22:57.258 16:04:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:57.258 16:04:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:22:57.258 16:04:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:57.258 16:04:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:22:57.258 16:04:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:57.258 16:04:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:22:57.258 16:04:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:57.258 16:04:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:22:57.258 16:04:26 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:57.258 16:04:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:22:57.258 16:04:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:57.258 16:04:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:22:57.258 16:04:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:57.258 16:04:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:22:57.258 16:04:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:57.258 16:04:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:22:57.258 16:04:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:57.258 16:04:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:22:57.258 16:04:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@35 -- # rpc_cmd 00:22:57.258 16:04:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:57.258 16:04:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:57.258 Malloc1 00:22:57.258 [2024-07-15 16:04:26.171931] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:57.515 Malloc2 00:22:57.515 Malloc3 00:22:57.515 Malloc4 00:22:57.515 Malloc5 00:22:57.515 Malloc6 00:22:57.515 Malloc7 00:22:57.774 Malloc8 00:22:57.774 Malloc9 00:22:57.774 Malloc10 00:22:57.774 16:04:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:57.774 16:04:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:22:57.774 16:04:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:57.774 16:04:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:57.774 16:04:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # perfpid=3829218 00:22:57.774 16:04:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # waitforlisten 3829218 /var/tmp/bdevperf.sock 00:22:57.774 16:04:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@829 -- # '[' -z 3829218 ']' 00:22:57.774 16:04:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:57.774 16:04:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:22:57.774 16:04:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:57.774 16:04:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:22:57.775 16:04:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:57.775 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
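[Annotation] Each for/cat pair traced above (target/shutdown.sh@27-28) appends one batch of RPCs to rpcs.txt, and the single rpc_cmd at @35 replays the file against the target, which is what creates Malloc1 through Malloc10 and the TCP listener noticed above. A plausible reconstruction of that step: the RPC names are standard SPDK RPCs consistent with the output, but the Malloc sizes, serial numbers, and the $testdir and rpc_cmd helpers are harness definitions this excerpt does not show.

MALLOC_BDEV_SIZE=${MALLOC_BDEV_SIZE:-64}      # MiB, assumed harness default
MALLOC_BLOCK_SIZE=${MALLOC_BLOCK_SIZE:-512}   # bytes, assumed harness default
num_subsystems=({1..10})
rm -rf "$testdir/rpcs.txt"
for i in "${num_subsystems[@]}"; do
cat >> "$testdir/rpcs.txt" <<EOL
bdev_malloc_create $MALLOC_BDEV_SIZE $MALLOC_BLOCK_SIZE -b Malloc$i
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t $TEST_TRANSPORT -a $NVMF_FIRST_TARGET_IP -s $NVMF_PORT
EOL
done
rpc_cmd < "$testdir/rpcs.txt"   # one JSON-RPC session; yields Malloc1..Malloc10 and the listeners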
00:22:57.775 16:04:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # config=() 00:22:57.775 16:04:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:57.775 16:04:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # local subsystem config 00:22:57.775 16:04:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:57.775 16:04:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:57.775 16:04:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:57.775 { 00:22:57.775 "params": { 00:22:57.775 "name": "Nvme$subsystem", 00:22:57.775 "trtype": "$TEST_TRANSPORT", 00:22:57.775 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:57.775 "adrfam": "ipv4", 00:22:57.775 "trsvcid": "$NVMF_PORT", 00:22:57.775 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:57.775 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:57.775 "hdgst": ${hdgst:-false}, 00:22:57.775 "ddgst": ${ddgst:-false} 00:22:57.775 }, 00:22:57.775 "method": "bdev_nvme_attach_controller" 00:22:57.775 } 00:22:57.775 EOF 00:22:57.775 )") 00:22:57.775 16:04:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:22:57.775 16:04:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:57.775 16:04:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:57.775 { 00:22:57.775 "params": { 00:22:57.775 "name": "Nvme$subsystem", 00:22:57.775 "trtype": "$TEST_TRANSPORT", 00:22:57.775 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:57.775 "adrfam": "ipv4", 00:22:57.775 "trsvcid": "$NVMF_PORT", 00:22:57.775 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:57.775 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:57.775 "hdgst": ${hdgst:-false}, 00:22:57.775 "ddgst": ${ddgst:-false} 00:22:57.775 }, 00:22:57.775 "method": "bdev_nvme_attach_controller" 00:22:57.775 } 00:22:57.775 EOF 00:22:57.775 )") 00:22:57.775 16:04:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:22:57.775 16:04:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:57.775 16:04:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:57.775 { 00:22:57.775 "params": { 00:22:57.775 "name": "Nvme$subsystem", 00:22:57.775 "trtype": "$TEST_TRANSPORT", 00:22:57.775 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:57.775 "adrfam": "ipv4", 00:22:57.775 "trsvcid": "$NVMF_PORT", 00:22:57.775 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:57.775 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:57.775 "hdgst": ${hdgst:-false}, 00:22:57.775 "ddgst": ${ddgst:-false} 00:22:57.775 }, 00:22:57.775 "method": "bdev_nvme_attach_controller" 00:22:57.775 } 00:22:57.775 EOF 00:22:57.775 )") 00:22:57.775 16:04:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:22:57.775 16:04:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:57.775 16:04:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:57.775 { 00:22:57.775 "params": { 00:22:57.775 "name": "Nvme$subsystem", 00:22:57.775 "trtype": "$TEST_TRANSPORT", 00:22:57.775 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:57.775 "adrfam": "ipv4", 00:22:57.775 "trsvcid": "$NVMF_PORT", 
00:22:57.775 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:57.775 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:57.775 "hdgst": ${hdgst:-false}, 00:22:57.775 "ddgst": ${ddgst:-false} 00:22:57.775 }, 00:22:57.775 "method": "bdev_nvme_attach_controller" 00:22:57.775 } 00:22:57.775 EOF 00:22:57.775 )") 00:22:57.775 16:04:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:22:57.775 16:04:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:57.775 16:04:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:57.775 { 00:22:57.775 "params": { 00:22:57.775 "name": "Nvme$subsystem", 00:22:57.775 "trtype": "$TEST_TRANSPORT", 00:22:57.775 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:57.775 "adrfam": "ipv4", 00:22:57.775 "trsvcid": "$NVMF_PORT", 00:22:57.775 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:57.775 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:57.775 "hdgst": ${hdgst:-false}, 00:22:57.775 "ddgst": ${ddgst:-false} 00:22:57.775 }, 00:22:57.775 "method": "bdev_nvme_attach_controller" 00:22:57.775 } 00:22:57.775 EOF 00:22:57.775 )") 00:22:57.775 16:04:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:22:57.775 16:04:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:57.775 16:04:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:57.775 { 00:22:57.775 "params": { 00:22:57.775 "name": "Nvme$subsystem", 00:22:57.775 "trtype": "$TEST_TRANSPORT", 00:22:57.775 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:57.775 "adrfam": "ipv4", 00:22:57.775 "trsvcid": "$NVMF_PORT", 00:22:57.775 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:57.775 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:57.775 "hdgst": ${hdgst:-false}, 00:22:57.775 "ddgst": ${ddgst:-false} 00:22:57.775 }, 00:22:57.775 "method": "bdev_nvme_attach_controller" 00:22:57.775 } 00:22:57.775 EOF 00:22:57.775 )") 00:22:57.775 16:04:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:22:57.775 16:04:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:57.775 16:04:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:57.775 { 00:22:57.775 "params": { 00:22:57.775 "name": "Nvme$subsystem", 00:22:57.775 "trtype": "$TEST_TRANSPORT", 00:22:57.775 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:57.775 "adrfam": "ipv4", 00:22:57.775 "trsvcid": "$NVMF_PORT", 00:22:57.775 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:57.775 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:57.775 "hdgst": ${hdgst:-false}, 00:22:57.775 "ddgst": ${ddgst:-false} 00:22:57.775 }, 00:22:57.775 "method": "bdev_nvme_attach_controller" 00:22:57.775 } 00:22:57.775 EOF 00:22:57.775 )") 00:22:57.775 16:04:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:22:57.775 [2024-07-15 16:04:26.645394] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 
00:22:57.775 [2024-07-15 16:04:26.645447] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3829218 ] 00:22:57.775 16:04:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:57.775 16:04:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:57.775 { 00:22:57.775 "params": { 00:22:57.775 "name": "Nvme$subsystem", 00:22:57.775 "trtype": "$TEST_TRANSPORT", 00:22:57.775 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:57.775 "adrfam": "ipv4", 00:22:57.775 "trsvcid": "$NVMF_PORT", 00:22:57.775 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:57.775 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:57.775 "hdgst": ${hdgst:-false}, 00:22:57.775 "ddgst": ${ddgst:-false} 00:22:57.775 }, 00:22:57.775 "method": "bdev_nvme_attach_controller" 00:22:57.775 } 00:22:57.775 EOF 00:22:57.775 )") 00:22:57.775 16:04:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:22:57.775 16:04:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:57.775 16:04:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:57.775 { 00:22:57.775 "params": { 00:22:57.775 "name": "Nvme$subsystem", 00:22:57.775 "trtype": "$TEST_TRANSPORT", 00:22:57.775 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:57.775 "adrfam": "ipv4", 00:22:57.775 "trsvcid": "$NVMF_PORT", 00:22:57.775 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:57.775 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:57.775 "hdgst": ${hdgst:-false}, 00:22:57.775 "ddgst": ${ddgst:-false} 00:22:57.775 }, 00:22:57.775 "method": "bdev_nvme_attach_controller" 00:22:57.775 } 00:22:57.775 EOF 00:22:57.775 )") 00:22:57.775 16:04:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:22:57.775 16:04:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:57.775 16:04:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:57.775 { 00:22:57.775 "params": { 00:22:57.775 "name": "Nvme$subsystem", 00:22:57.775 "trtype": "$TEST_TRANSPORT", 00:22:57.775 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:57.775 "adrfam": "ipv4", 00:22:57.775 "trsvcid": "$NVMF_PORT", 00:22:57.775 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:57.775 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:57.775 "hdgst": ${hdgst:-false}, 00:22:57.775 "ddgst": ${ddgst:-false} 00:22:57.775 }, 00:22:57.775 "method": "bdev_nvme_attach_controller" 00:22:57.775 } 00:22:57.775 EOF 00:22:57.775 )") 00:22:57.775 16:04:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:22:57.775 16:04:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@556 -- # jq . 
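The repeated config+=("$(cat <<-EOF ... EOF)") trace above is nvmf/common.sh building one bdev_nvme_attach_controller JSON fragment per requested subsystem; the jq . step validates and pretty-prints the assembled document before it is fed to bdevperf. A minimal runnable sketch of the idiom, with the fixture values from this run (tcp, 10.0.0.2:4420) substituted for the $TEST_TRANSPORT / $NVMF_FIRST_TARGET_IP / $NVMF_PORT variables the harness exports, and a simplified top-level wrapper that is assumed here, not copied from the canonical helper:

gen_target_json_sketch() {
    local subsystem config=()
    for subsystem in "${@:-1}"; do
        # One attach-controller entry per subsystem, accumulated in an array.
        config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
        )")
    done
    # Join the fragments with commas (the IFS=, and printf steps below) and
    # pretty-print through jq, which also catches malformed JSON early.
    local IFS=,
    printf '{"subsystems":[{"subsystem":"bdev","config":[%s]}]}' "${config[*]}" | jq .
}

gen_target_json_sketch 1 2 3

Nothing is written to disk: bdevperf consumes the result over process substitution, visible as --json /dev/fd/63 in the tc3 invocation later in this log.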
00:22:57.775 16:04:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@557 -- # IFS=, 00:22:57.775 EAL: No free 2048 kB hugepages reported on node 1 00:22:57.775 16:04:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:22:57.775 "params": { 00:22:57.775 "name": "Nvme1", 00:22:57.775 "trtype": "tcp", 00:22:57.775 "traddr": "10.0.0.2", 00:22:57.775 "adrfam": "ipv4", 00:22:57.775 "trsvcid": "4420", 00:22:57.775 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:57.775 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:57.776 "hdgst": false, 00:22:57.776 "ddgst": false 00:22:57.776 }, 00:22:57.776 "method": "bdev_nvme_attach_controller" 00:22:57.776 },{ 00:22:57.776 "params": { 00:22:57.776 "name": "Nvme2", 00:22:57.776 "trtype": "tcp", 00:22:57.776 "traddr": "10.0.0.2", 00:22:57.776 "adrfam": "ipv4", 00:22:57.776 "trsvcid": "4420", 00:22:57.776 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:22:57.776 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:22:57.776 "hdgst": false, 00:22:57.776 "ddgst": false 00:22:57.776 }, 00:22:57.776 "method": "bdev_nvme_attach_controller" 00:22:57.776 },{ 00:22:57.776 "params": { 00:22:57.776 "name": "Nvme3", 00:22:57.776 "trtype": "tcp", 00:22:57.776 "traddr": "10.0.0.2", 00:22:57.776 "adrfam": "ipv4", 00:22:57.776 "trsvcid": "4420", 00:22:57.776 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:22:57.776 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:22:57.776 "hdgst": false, 00:22:57.776 "ddgst": false 00:22:57.776 }, 00:22:57.776 "method": "bdev_nvme_attach_controller" 00:22:57.776 },{ 00:22:57.776 "params": { 00:22:57.776 "name": "Nvme4", 00:22:57.776 "trtype": "tcp", 00:22:57.776 "traddr": "10.0.0.2", 00:22:57.776 "adrfam": "ipv4", 00:22:57.776 "trsvcid": "4420", 00:22:57.776 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:22:57.776 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:22:57.776 "hdgst": false, 00:22:57.776 "ddgst": false 00:22:57.776 }, 00:22:57.776 "method": "bdev_nvme_attach_controller" 00:22:57.776 },{ 00:22:57.776 "params": { 00:22:57.776 "name": "Nvme5", 00:22:57.776 "trtype": "tcp", 00:22:57.776 "traddr": "10.0.0.2", 00:22:57.776 "adrfam": "ipv4", 00:22:57.776 "trsvcid": "4420", 00:22:57.776 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:22:57.776 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:22:57.776 "hdgst": false, 00:22:57.776 "ddgst": false 00:22:57.776 }, 00:22:57.776 "method": "bdev_nvme_attach_controller" 00:22:57.776 },{ 00:22:57.776 "params": { 00:22:57.776 "name": "Nvme6", 00:22:57.776 "trtype": "tcp", 00:22:57.776 "traddr": "10.0.0.2", 00:22:57.776 "adrfam": "ipv4", 00:22:57.776 "trsvcid": "4420", 00:22:57.776 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:22:57.776 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:22:57.776 "hdgst": false, 00:22:57.776 "ddgst": false 00:22:57.776 }, 00:22:57.776 "method": "bdev_nvme_attach_controller" 00:22:57.776 },{ 00:22:57.776 "params": { 00:22:57.776 "name": "Nvme7", 00:22:57.776 "trtype": "tcp", 00:22:57.776 "traddr": "10.0.0.2", 00:22:57.776 "adrfam": "ipv4", 00:22:57.776 "trsvcid": "4420", 00:22:57.776 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:22:57.776 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:22:57.776 "hdgst": false, 00:22:57.776 "ddgst": false 00:22:57.776 }, 00:22:57.776 "method": "bdev_nvme_attach_controller" 00:22:57.776 },{ 00:22:57.776 "params": { 00:22:57.776 "name": "Nvme8", 00:22:57.776 "trtype": "tcp", 00:22:57.776 "traddr": "10.0.0.2", 00:22:57.776 "adrfam": "ipv4", 00:22:57.776 "trsvcid": "4420", 00:22:57.776 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:22:57.776 "hostnqn": 
"nqn.2016-06.io.spdk:host8", 00:22:57.776 "hdgst": false, 00:22:57.776 "ddgst": false 00:22:57.776 }, 00:22:57.776 "method": "bdev_nvme_attach_controller" 00:22:57.776 },{ 00:22:57.776 "params": { 00:22:57.776 "name": "Nvme9", 00:22:57.776 "trtype": "tcp", 00:22:57.776 "traddr": "10.0.0.2", 00:22:57.776 "adrfam": "ipv4", 00:22:57.776 "trsvcid": "4420", 00:22:57.776 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:22:57.776 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:22:57.776 "hdgst": false, 00:22:57.776 "ddgst": false 00:22:57.776 }, 00:22:57.776 "method": "bdev_nvme_attach_controller" 00:22:57.776 },{ 00:22:57.776 "params": { 00:22:57.776 "name": "Nvme10", 00:22:57.776 "trtype": "tcp", 00:22:57.776 "traddr": "10.0.0.2", 00:22:57.776 "adrfam": "ipv4", 00:22:57.776 "trsvcid": "4420", 00:22:57.776 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:22:57.776 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:22:57.776 "hdgst": false, 00:22:57.776 "ddgst": false 00:22:57.776 }, 00:22:57.776 "method": "bdev_nvme_attach_controller" 00:22:57.776 }' 00:22:57.776 [2024-07-15 16:04:26.701066] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:58.034 [2024-07-15 16:04:26.776251] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:59.408 Running I/O for 10 seconds... 00:22:59.408 16:04:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:59.408 16:04:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@862 -- # return 0 00:22:59.408 16:04:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:22:59.408 16:04:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:59.408 16:04:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:59.667 16:04:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:59.667 16:04:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@107 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:22:59.667 16:04:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:22:59.667 16:04:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:22:59.667 16:04:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@57 -- # local ret=1 00:22:59.667 16:04:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local i 00:22:59.667 16:04:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:22:59.667 16:04:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:22:59.667 16:04:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:22:59.667 16:04:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:59.667 16:04:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:59.667 16:04:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:22:59.667 16:04:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:59.667 16:04:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=3 00:22:59.667 16:04:28 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 3 -ge 100 ']' 00:22:59.667 16:04:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@67 -- # sleep 0.25 00:22:59.926 16:04:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i-- )) 00:22:59.926 16:04:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:22:59.926 16:04:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:22:59.926 16:04:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:22:59.926 16:04:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:59.926 16:04:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:59.926 16:04:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:59.926 16:04:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=67 00:22:59.926 16:04:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 67 -ge 100 ']' 00:22:59.926 16:04:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@67 -- # sleep 0.25 00:23:00.185 16:04:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i-- )) 00:23:00.185 16:04:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:23:00.185 16:04:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:23:00.185 16:04:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:23:00.185 16:04:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:00.185 16:04:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:00.185 16:04:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:00.185 16:04:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=131 00:23:00.185 16:04:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 131 -ge 100 ']' 00:23:00.185 16:04:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # ret=0 00:23:00.185 16:04:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # break 00:23:00.185 16:04:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@69 -- # return 0 00:23:00.185 16:04:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@110 -- # killprocess 3829218 00:23:00.185 16:04:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@948 -- # '[' -z 3829218 ']' 00:23:00.185 16:04:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # kill -0 3829218 00:23:00.185 16:04:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # uname 00:23:00.185 16:04:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:00.185 16:04:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3829218 00:23:00.185 16:04:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:23:00.185 16:04:29 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:23:00.185 16:04:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3829218' killing process with pid 3829218 00:23:00.185 16:04:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@967 -- # kill 3829218 00:23:00.185 16:04:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # wait 3829218
00:23:00.443 Received shutdown signal, test time was about 0.902329 seconds
00:23:00.443
00:23:00.443                                                      Latency(us)
00:23:00.443 Device Information     : runtime(s)     IOPS    MiB/s   Fail/s    TO/s     Average        min        max
00:23:00.443 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:00.443 Verification LBA range: start 0x0 length 0x400
00:23:00.443 Nvme1n1                :       0.90   284.99    17.81     0.00    0.00   222207.11   16070.57  218833.25
00:23:00.443 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:00.443 Verification LBA range: start 0x0 length 0x400
00:23:00.443 Nvme2n1                :       0.89   289.11    18.07     0.00    0.00   214993.03   28038.01  215186.03
00:23:00.443 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:00.443 Verification LBA range: start 0x0 length 0x400
00:23:00.443 Nvme3n1                :       0.87   302.26    18.89     0.00    0.00   200717.76    4929.45  191479.10
00:23:00.443 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:00.443 Verification LBA range: start 0x0 length 0x400
00:23:00.443 Nvme4n1                :       0.88   290.76    18.17     0.00    0.00   205744.42   13677.08  217009.64
00:23:00.443 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:00.443 Verification LBA range: start 0x0 length 0x400
00:23:00.443 Nvme5n1                :       0.90   283.92    17.75     0.00    0.00   207230.66   16640.45  219745.06
00:23:00.443 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:00.443 Verification LBA range: start 0x0 length 0x400
00:23:00.443 Nvme6n1                :       0.89   286.20    17.89     0.00    0.00   201437.27   16982.37  209715.20
00:23:00.443 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:00.444 Verification LBA range: start 0x0 length 0x400
00:23:00.444 Nvme7n1                :       0.89   287.10    17.94     0.00    0.00   196868.90   15956.59  213362.42
00:23:00.444 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:00.444 Verification LBA range: start 0x0 length 0x400
00:23:00.444 Nvme8n1                :       0.89   288.03    18.00     0.00    0.00   192051.87   26100.42  189655.49
00:23:00.444 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:00.444 Verification LBA range: start 0x0 length 0x400
00:23:00.444 Nvme9n1                :       0.87   219.97    13.75     0.00    0.00   245486.64   19033.93  227951.30
00:23:00.444 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:00.444 Verification LBA range: start 0x0 length 0x400
00:23:00.444 Nvme10n1               :       0.87   219.53    13.72     0.00    0.00   240919.00   18919.96  251658.24
00:23:00.444 ===================================================================================================================
00:23:00.444 Total                  :              2751.86   171.99     0.00    0.00   211138.00    4929.45  251658.24
00:23:00.444 16:04:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@113 -- # sleep 1 00:23:01.819 16:04:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # kill -0 3828917 00:23:01.819 16:04:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@116 -- # stoptarget 00:23:01.819 16:04:30
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:23:01.819 16:04:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:23:01.819 16:04:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:23:01.819 16:04:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@45 -- # nvmftestfini 00:23:01.819 16:04:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:01.819 16:04:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # sync 00:23:01.819 16:04:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:01.819 16:04:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@120 -- # set +e 00:23:01.819 16:04:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:01.819 16:04:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:01.819 rmmod nvme_tcp 00:23:01.819 rmmod nvme_fabrics 00:23:01.819 rmmod nvme_keyring 00:23:01.819 16:04:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:01.819 16:04:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set -e 00:23:01.819 16:04:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # return 0 00:23:01.819 16:04:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@489 -- # '[' -n 3828917 ']' 00:23:01.819 16:04:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@490 -- # killprocess 3828917 00:23:01.819 16:04:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@948 -- # '[' -z 3828917 ']' 00:23:01.820 16:04:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # kill -0 3828917 00:23:01.820 16:04:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # uname 00:23:01.820 16:04:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:01.820 16:04:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3828917 00:23:01.820 16:04:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:23:01.820 16:04:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:23:01.820 16:04:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3828917' 00:23:01.820 killing process with pid 3828917 00:23:01.820 16:04:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@967 -- # kill 3828917 00:23:01.820 16:04:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # wait 3828917 00:23:02.079 16:04:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:02.079 16:04:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:02.079 16:04:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:02.079 16:04:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 
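The @948-@972 sequence above is autotest_common.sh's killprocess helper taking down the nvmf target once the module cleanup is done: check that the pid is set and still alive, look up its command name, announce the kill, then signal and reap it. A condensed sketch inferred from this trace; the canonical helper has more branches (the reactor_0/reactor_1 = sudo tests above probe one of them), so function and variable names here are illustrative:

killprocess_sketch() {
    local pid=$1 process_name
    [ -n "$pid" ] || return 1               # @948: no pid, nothing to do
    kill -0 "$pid" 2>/dev/null || return 1  # @952: probe that it is still running
    process_name=$(ps --no-headers -o comm= "$pid")   # @954: what are we killing?
    echo "killing process with pid $pid"
    if [ "$process_name" = sudo ]; then
        # Assumption: a sudo wrapper needs a harder signal to take its child down.
        kill -9 "$pid"
    else
        kill "$pid"           # @967: plain SIGTERM for a reactor process
        wait "$pid" || true   # @972: reap it and tolerate a non-zero exit status
    fi
}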
00:23:02.079 16:04:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:02.079 16:04:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:02.079 16:04:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:02.079 16:04:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:04.634 16:04:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:04.634 00:23:04.634 real 0m8.101s 00:23:04.634 user 0m24.641s 00:23:04.634 sys 0m1.392s 00:23:04.634 16:04:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:23:04.634 16:04:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:04.634 ************************************ 00:23:04.634 END TEST nvmf_shutdown_tc2 00:23:04.634 ************************************ 00:23:04.634 16:04:32 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1142 -- # return 0 00:23:04.634 16:04:32 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@149 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:23:04.634 16:04:32 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:23:04.634 16:04:32 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:04.634 16:04:32 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:23:04.634 ************************************ 00:23:04.634 START TEST nvmf_shutdown_tc3 00:23:04.634 ************************************ 00:23:04.634 16:04:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1123 -- # nvmf_shutdown_tc3 00:23:04.634 16:04:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@121 -- # starttarget 00:23:04.634 16:04:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@15 -- # nvmftestinit 00:23:04.634 16:04:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:04.634 16:04:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:04.634 16:04:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:04.634 16:04:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:04.634 16:04:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:04.634 16:04:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:04.634 16:04:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:04.634 16:04:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:04.634 16:04:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:04.634 16:04:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:04.634 16:04:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@285 -- # xtrace_disable 00:23:04.634 16:04:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:04.634 16:04:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 
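nvmftestinit, starting here for tc3, first maps which supported NICs the box has: gather_supported_nvmf_pci_devs (traced below) fills the e810/x722/mlx device-ID arrays, keeps the PCI functions whose vendor/device pair matches, and records the net interfaces behind them (cvl_0_0 and cvl_0_1 on this rig). A condensed sketch of that walk, reading sysfs directly rather than the harness's cached PCI bus map, and limited to the two E810 IDs actually seen in this run:

intel=0x8086
e810=(0x1592 0x159b)
pci_devs=() net_devs=()
for pci in /sys/bus/pci/devices/*; do
    [ "$(<"$pci/vendor")" = "$intel" ] || continue
    for id in "${e810[@]}"; do
        [ "$(<"$pci/device")" = "$id" ] || continue
        pci_devs+=("$pci")
    done
done
for pci in "${pci_devs[@]}"; do
    # Each matching function exposes its netdevs under <pci>/net/.
    for net_dev in "$pci"/net/*; do
        [ -e "$net_dev" ] || continue
        net_devs+=("${net_dev##*/}")
        echo "Found net devices under ${pci##*/}: ${net_dev##*/}"
    done
done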
00:23:04.634 16:04:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # pci_devs=() 00:23:04.634 16:04:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:04.634 16:04:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:04.634 16:04:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:04.634 16:04:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:04.634 16:04:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:04.634 16:04:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # net_devs=() 00:23:04.634 16:04:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:04.634 16:04:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # e810=() 00:23:04.634 16:04:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # local -ga e810 00:23:04.634 16:04:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # x722=() 00:23:04.634 16:04:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # local -ga x722 00:23:04.634 16:04:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # mlx=() 00:23:04.634 16:04:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # local -ga mlx 00:23:04.634 16:04:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:04.634 16:04:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:04.634 16:04:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:04.634 16:04:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:04.634 16:04:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:04.634 16:04:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:04.634 16:04:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:04.634 16:04:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:04.634 16:04:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:04.635 16:04:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:04.635 16:04:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:04.635 16:04:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:04.635 16:04:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:04.635 16:04:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:04.635 16:04:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:04.635 16:04:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:04.635 16:04:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:04.635 16:04:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:04.635 16:04:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:23:04.635 Found 0000:86:00.0 (0x8086 - 0x159b) 00:23:04.635 16:04:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:04.635 16:04:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:04.635 16:04:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:04.635 16:04:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:04.635 16:04:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:04.635 16:04:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:04.635 16:04:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:23:04.635 Found 0000:86:00.1 (0x8086 - 0x159b) 00:23:04.635 16:04:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:04.635 16:04:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:04.635 16:04:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:04.635 16:04:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:04.635 16:04:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:04.635 16:04:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:04.635 16:04:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:04.635 16:04:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:04.635 16:04:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:04.635 16:04:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:04.635 16:04:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:04.635 16:04:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:04.635 16:04:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:04.635 16:04:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:04.635 16:04:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:04.635 16:04:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:23:04.635 Found net devices under 0000:86:00.0: cvl_0_0 00:23:04.635 16:04:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:04.635 16:04:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:04.635 16:04:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:04.635 16:04:33 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:04.635 16:04:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:04.635 16:04:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:04.635 16:04:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:04.635 16:04:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:04.635 16:04:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:23:04.635 Found net devices under 0000:86:00.1: cvl_0_1 00:23:04.635 16:04:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:04.635 16:04:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:04.635 16:04:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # is_hw=yes 00:23:04.635 16:04:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:04.635 16:04:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:04.635 16:04:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:04.635 16:04:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:04.635 16:04:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:04.635 16:04:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:04.635 16:04:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:04.635 16:04:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:04.635 16:04:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:04.635 16:04:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:04.635 16:04:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:04.635 16:04:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:04.635 16:04:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:04.635 16:04:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:04.635 16:04:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:04.635 16:04:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:04.635 16:04:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:04.635 16:04:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:04.635 16:04:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:04.635 16:04:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:04.635 16:04:33 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:04.635 16:04:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:04.635 16:04:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:04.635 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:04.635 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.297 ms 00:23:04.635 00:23:04.635 --- 10.0.0.2 ping statistics --- 00:23:04.635 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:04.635 rtt min/avg/max/mdev = 0.297/0.297/0.297/0.000 ms 00:23:04.635 16:04:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:04.635 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:04.635 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.245 ms 00:23:04.635 00:23:04.635 --- 10.0.0.1 ping statistics --- 00:23:04.635 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:04.635 rtt min/avg/max/mdev = 0.245/0.245/0.245/0.000 ms 00:23:04.635 16:04:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:04.635 16:04:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # return 0 00:23:04.635 16:04:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:04.635 16:04:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:04.635 16:04:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:04.635 16:04:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:04.635 16:04:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:04.635 16:04:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:04.635 16:04:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:04.635 16:04:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:23:04.635 16:04:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:04.635 16:04:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:04.635 16:04:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:04.635 16:04:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:23:04.635 16:04:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@481 -- # nvmfpid=3830417 00:23:04.635 16:04:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@482 -- # waitforlisten 3830417 00:23:04.635 16:04:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@829 -- # '[' -z 3830417 ']' 00:23:04.635 16:04:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:04.635 16:04:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:04.635 16:04:33 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:04.635 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:04.635 16:04:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:04.635 16:04:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:04.635 [2024-07-15 16:04:33.373018] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 00:23:04.635 [2024-07-15 16:04:33.373060] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:04.635 EAL: No free 2048 kB hugepages reported on node 1 00:23:04.635 [2024-07-15 16:04:33.432735] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:04.635 [2024-07-15 16:04:33.513839] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:04.635 [2024-07-15 16:04:33.513871] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:04.635 [2024-07-15 16:04:33.513878] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:04.635 [2024-07-15 16:04:33.513884] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:04.635 [2024-07-15 16:04:33.513890] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:04.635 [2024-07-15 16:04:33.513928] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:23:04.635 [2024-07-15 16:04:33.513953] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:23:04.635 [2024-07-15 16:04:33.514054] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:04.635 [2024-07-15 16:04:33.514056] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:23:05.576 16:04:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:05.576 16:04:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@862 -- # return 0 00:23:05.576 16:04:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:05.576 16:04:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:05.576 16:04:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:05.576 16:04:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:05.576 16:04:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:05.576 16:04:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:05.576 16:04:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:05.576 [2024-07-15 16:04:34.221239] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:05.576 16:04:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:05.576 16:04:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- 
target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:23:05.576 16:04:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:23:05.576 16:04:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:05.576 16:04:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:05.576 16:04:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:23:05.576 16:04:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:05.576 16:04:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:23:05.576 16:04:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:05.576 16:04:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:23:05.576 16:04:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:05.576 16:04:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:23:05.576 16:04:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:05.576 16:04:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:23:05.576 16:04:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:05.576 16:04:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:23:05.576 16:04:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:05.576 16:04:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:23:05.576 16:04:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:05.576 16:04:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:23:05.576 16:04:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:05.576 16:04:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:23:05.576 16:04:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:05.576 16:04:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:23:05.576 16:04:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:05.576 16:04:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:23:05.576 16:04:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@35 -- # rpc_cmd 00:23:05.576 16:04:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:05.576 16:04:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:05.576 Malloc1 00:23:05.576 [2024-07-15 16:04:34.317201] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:05.576 Malloc2 00:23:05.576 Malloc3 00:23:05.576 Malloc4 00:23:05.576 Malloc5 00:23:05.576 Malloc6 00:23:05.874 Malloc7 00:23:05.874 Malloc8 00:23:05.874 Malloc9 00:23:05.874 Malloc10 00:23:05.874 16:04:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:05.874 16:04:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:23:05.874 16:04:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:05.874 16:04:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:05.874 16:04:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # perfpid=3830693 00:23:05.874 16:04:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # waitforlisten 3830693 /var/tmp/bdevperf.sock 00:23:05.874 16:04:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@829 -- # '[' -z 3830693 ']' 00:23:05.874 16:04:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:05.874 16:04:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:23:05.874 16:04:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:05.874 16:04:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:23:05.874 16:04:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:05.874 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:05.874 16:04:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:05.874 16:04:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # config=() 00:23:05.874 16:04:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:05.874 16:04:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # local subsystem config 00:23:05.874 16:04:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:05.874 16:04:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:05.874 { 00:23:05.874 "params": { 00:23:05.874 "name": "Nvme$subsystem", 00:23:05.874 "trtype": "$TEST_TRANSPORT", 00:23:05.874 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:05.874 "adrfam": "ipv4", 00:23:05.874 "trsvcid": "$NVMF_PORT", 00:23:05.874 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:05.874 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:05.874 "hdgst": ${hdgst:-false}, 00:23:05.874 "ddgst": ${ddgst:-false} 00:23:05.874 }, 00:23:05.874 "method": "bdev_nvme_attach_controller" 00:23:05.874 } 00:23:05.874 EOF 00:23:05.874 )") 00:23:05.874 16:04:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:23:05.874 16:04:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:05.874 16:04:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:05.874 { 00:23:05.874 "params": { 00:23:05.874 "name": "Nvme$subsystem", 00:23:05.874 "trtype": "$TEST_TRANSPORT", 00:23:05.874 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:05.874 "adrfam": "ipv4", 00:23:05.874 "trsvcid": "$NVMF_PORT", 00:23:05.874 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 
00:23:05.874 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:05.874 "hdgst": ${hdgst:-false}, 00:23:05.874 "ddgst": ${ddgst:-false} 00:23:05.874 }, 00:23:05.874 "method": "bdev_nvme_attach_controller" 00:23:05.874 } 00:23:05.874 EOF 00:23:05.874 )") 00:23:05.874 16:04:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:23:05.874 16:04:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:05.874 16:04:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:05.874 { 00:23:05.874 "params": { 00:23:05.874 "name": "Nvme$subsystem", 00:23:05.874 "trtype": "$TEST_TRANSPORT", 00:23:05.874 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:05.874 "adrfam": "ipv4", 00:23:05.874 "trsvcid": "$NVMF_PORT", 00:23:05.874 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:05.874 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:05.874 "hdgst": ${hdgst:-false}, 00:23:05.874 "ddgst": ${ddgst:-false} 00:23:05.874 }, 00:23:05.874 "method": "bdev_nvme_attach_controller" 00:23:05.874 } 00:23:05.874 EOF 00:23:05.874 )") 00:23:05.874 16:04:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:23:05.874 16:04:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:05.874 16:04:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:05.874 { 00:23:05.874 "params": { 00:23:05.874 "name": "Nvme$subsystem", 00:23:05.874 "trtype": "$TEST_TRANSPORT", 00:23:05.874 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:05.874 "adrfam": "ipv4", 00:23:05.874 "trsvcid": "$NVMF_PORT", 00:23:05.874 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:05.874 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:05.874 "hdgst": ${hdgst:-false}, 00:23:05.874 "ddgst": ${ddgst:-false} 00:23:05.874 }, 00:23:05.874 "method": "bdev_nvme_attach_controller" 00:23:05.874 } 00:23:05.874 EOF 00:23:05.874 )") 00:23:05.874 16:04:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:23:05.874 16:04:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:05.874 16:04:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:05.874 { 00:23:05.874 "params": { 00:23:05.874 "name": "Nvme$subsystem", 00:23:05.874 "trtype": "$TEST_TRANSPORT", 00:23:05.874 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:05.874 "adrfam": "ipv4", 00:23:05.874 "trsvcid": "$NVMF_PORT", 00:23:05.874 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:05.874 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:05.874 "hdgst": ${hdgst:-false}, 00:23:05.874 "ddgst": ${ddgst:-false} 00:23:05.874 }, 00:23:05.875 "method": "bdev_nvme_attach_controller" 00:23:05.875 } 00:23:05.875 EOF 00:23:05.875 )") 00:23:05.875 16:04:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:23:05.875 16:04:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:05.875 16:04:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:05.875 { 00:23:05.875 "params": { 00:23:05.875 "name": "Nvme$subsystem", 00:23:05.875 "trtype": "$TEST_TRANSPORT", 00:23:05.875 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:05.875 "adrfam": "ipv4", 00:23:05.875 "trsvcid": "$NVMF_PORT", 00:23:05.875 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:05.875 "hostnqn": 
"nqn.2016-06.io.spdk:host$subsystem", 00:23:05.875 "hdgst": ${hdgst:-false}, 00:23:05.875 "ddgst": ${ddgst:-false} 00:23:05.875 }, 00:23:05.875 "method": "bdev_nvme_attach_controller" 00:23:05.875 } 00:23:05.875 EOF 00:23:05.875 )") 00:23:05.875 16:04:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:23:05.875 16:04:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:05.875 16:04:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:05.875 { 00:23:05.875 "params": { 00:23:05.875 "name": "Nvme$subsystem", 00:23:05.875 "trtype": "$TEST_TRANSPORT", 00:23:05.875 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:05.875 "adrfam": "ipv4", 00:23:05.875 "trsvcid": "$NVMF_PORT", 00:23:05.875 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:05.875 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:05.875 "hdgst": ${hdgst:-false}, 00:23:05.875 "ddgst": ${ddgst:-false} 00:23:05.875 }, 00:23:05.875 "method": "bdev_nvme_attach_controller" 00:23:05.875 } 00:23:05.875 EOF 00:23:05.875 )") 00:23:05.875 16:04:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:23:05.875 [2024-07-15 16:04:34.790755] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 00:23:05.875 [2024-07-15 16:04:34.790807] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3830693 ] 00:23:05.875 16:04:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:05.875 16:04:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:05.875 { 00:23:05.875 "params": { 00:23:05.875 "name": "Nvme$subsystem", 00:23:05.875 "trtype": "$TEST_TRANSPORT", 00:23:05.875 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:05.875 "adrfam": "ipv4", 00:23:05.875 "trsvcid": "$NVMF_PORT", 00:23:05.875 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:05.875 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:05.875 "hdgst": ${hdgst:-false}, 00:23:05.875 "ddgst": ${ddgst:-false} 00:23:05.875 }, 00:23:05.875 "method": "bdev_nvme_attach_controller" 00:23:05.875 } 00:23:05.875 EOF 00:23:05.875 )") 00:23:05.875 16:04:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:23:05.875 16:04:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:05.875 16:04:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:05.875 { 00:23:05.875 "params": { 00:23:05.875 "name": "Nvme$subsystem", 00:23:05.875 "trtype": "$TEST_TRANSPORT", 00:23:05.875 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:05.875 "adrfam": "ipv4", 00:23:05.875 "trsvcid": "$NVMF_PORT", 00:23:05.875 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:05.875 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:05.875 "hdgst": ${hdgst:-false}, 00:23:05.875 "ddgst": ${ddgst:-false} 00:23:05.875 }, 00:23:05.875 "method": "bdev_nvme_attach_controller" 00:23:05.875 } 00:23:05.875 EOF 00:23:05.875 )") 00:23:05.875 16:04:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:23:05.875 16:04:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:05.875 16:04:34 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:05.875 { 00:23:05.875 "params": { 00:23:05.875 "name": "Nvme$subsystem", 00:23:05.875 "trtype": "$TEST_TRANSPORT", 00:23:05.875 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:05.875 "adrfam": "ipv4", 00:23:05.875 "trsvcid": "$NVMF_PORT", 00:23:05.875 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:05.875 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:05.875 "hdgst": ${hdgst:-false}, 00:23:05.875 "ddgst": ${ddgst:-false} 00:23:05.875 }, 00:23:05.875 "method": "bdev_nvme_attach_controller" 00:23:05.875 } 00:23:05.875 EOF 00:23:05.875 )") 00:23:06.133 16:04:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:23:06.133 16:04:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@556 -- # jq . 00:23:06.133 EAL: No free 2048 kB hugepages reported on node 1 00:23:06.133 16:04:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@557 -- # IFS=, 00:23:06.133 16:04:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:23:06.133 "params": { 00:23:06.133 "name": "Nvme1", 00:23:06.133 "trtype": "tcp", 00:23:06.133 "traddr": "10.0.0.2", 00:23:06.133 "adrfam": "ipv4", 00:23:06.133 "trsvcid": "4420", 00:23:06.133 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:06.133 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:06.133 "hdgst": false, 00:23:06.133 "ddgst": false 00:23:06.133 }, 00:23:06.133 "method": "bdev_nvme_attach_controller" 00:23:06.133 },{ 00:23:06.133 "params": { 00:23:06.133 "name": "Nvme2", 00:23:06.133 "trtype": "tcp", 00:23:06.133 "traddr": "10.0.0.2", 00:23:06.133 "adrfam": "ipv4", 00:23:06.133 "trsvcid": "4420", 00:23:06.133 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:23:06.133 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:23:06.133 "hdgst": false, 00:23:06.133 "ddgst": false 00:23:06.133 }, 00:23:06.133 "method": "bdev_nvme_attach_controller" 00:23:06.133 },{ 00:23:06.133 "params": { 00:23:06.133 "name": "Nvme3", 00:23:06.133 "trtype": "tcp", 00:23:06.133 "traddr": "10.0.0.2", 00:23:06.133 "adrfam": "ipv4", 00:23:06.133 "trsvcid": "4420", 00:23:06.133 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:23:06.133 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:23:06.133 "hdgst": false, 00:23:06.133 "ddgst": false 00:23:06.134 }, 00:23:06.134 "method": "bdev_nvme_attach_controller" 00:23:06.134 },{ 00:23:06.134 "params": { 00:23:06.134 "name": "Nvme4", 00:23:06.134 "trtype": "tcp", 00:23:06.134 "traddr": "10.0.0.2", 00:23:06.134 "adrfam": "ipv4", 00:23:06.134 "trsvcid": "4420", 00:23:06.134 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:23:06.134 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:23:06.134 "hdgst": false, 00:23:06.134 "ddgst": false 00:23:06.134 }, 00:23:06.134 "method": "bdev_nvme_attach_controller" 00:23:06.134 },{ 00:23:06.134 "params": { 00:23:06.134 "name": "Nvme5", 00:23:06.134 "trtype": "tcp", 00:23:06.134 "traddr": "10.0.0.2", 00:23:06.134 "adrfam": "ipv4", 00:23:06.134 "trsvcid": "4420", 00:23:06.134 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:23:06.134 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:23:06.134 "hdgst": false, 00:23:06.134 "ddgst": false 00:23:06.134 }, 00:23:06.134 "method": "bdev_nvme_attach_controller" 00:23:06.134 },{ 00:23:06.134 "params": { 00:23:06.134 "name": "Nvme6", 00:23:06.134 "trtype": "tcp", 00:23:06.134 "traddr": "10.0.0.2", 00:23:06.134 "adrfam": "ipv4", 00:23:06.134 "trsvcid": "4420", 00:23:06.134 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:23:06.134 "hostnqn": "nqn.2016-06.io.spdk:host6", 
00:23:06.134 "hdgst": false, 00:23:06.134 "ddgst": false 00:23:06.134 }, 00:23:06.134 "method": "bdev_nvme_attach_controller" 00:23:06.134 },{ 00:23:06.134 "params": { 00:23:06.134 "name": "Nvme7", 00:23:06.134 "trtype": "tcp", 00:23:06.134 "traddr": "10.0.0.2", 00:23:06.134 "adrfam": "ipv4", 00:23:06.134 "trsvcid": "4420", 00:23:06.134 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:23:06.134 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:23:06.134 "hdgst": false, 00:23:06.134 "ddgst": false 00:23:06.134 }, 00:23:06.134 "method": "bdev_nvme_attach_controller" 00:23:06.134 },{ 00:23:06.134 "params": { 00:23:06.134 "name": "Nvme8", 00:23:06.134 "trtype": "tcp", 00:23:06.134 "traddr": "10.0.0.2", 00:23:06.134 "adrfam": "ipv4", 00:23:06.134 "trsvcid": "4420", 00:23:06.134 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:23:06.134 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:23:06.134 "hdgst": false, 00:23:06.134 "ddgst": false 00:23:06.134 }, 00:23:06.134 "method": "bdev_nvme_attach_controller" 00:23:06.134 },{ 00:23:06.134 "params": { 00:23:06.134 "name": "Nvme9", 00:23:06.134 "trtype": "tcp", 00:23:06.134 "traddr": "10.0.0.2", 00:23:06.134 "adrfam": "ipv4", 00:23:06.134 "trsvcid": "4420", 00:23:06.134 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:23:06.134 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:23:06.134 "hdgst": false, 00:23:06.134 "ddgst": false 00:23:06.134 }, 00:23:06.134 "method": "bdev_nvme_attach_controller" 00:23:06.134 },{ 00:23:06.134 "params": { 00:23:06.134 "name": "Nvme10", 00:23:06.134 "trtype": "tcp", 00:23:06.134 "traddr": "10.0.0.2", 00:23:06.134 "adrfam": "ipv4", 00:23:06.134 "trsvcid": "4420", 00:23:06.134 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:23:06.134 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:23:06.134 "hdgst": false, 00:23:06.134 "ddgst": false 00:23:06.134 }, 00:23:06.134 "method": "bdev_nvme_attach_controller" 00:23:06.134 }' 00:23:06.134 [2024-07-15 16:04:34.846264] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:06.134 [2024-07-15 16:04:34.920028] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:08.035 Running I/O for 10 seconds... 
00:23:08.619 16:04:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:23:08.619 16:04:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@862 -- # return 0
00:23:08.619 16:04:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init
00:23:08.619 16:04:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable
00:23:08.619 16:04:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x
00:23:08.619 16:04:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:23:08.619 16:04:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@130 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:23:08.619 16:04:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@132 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1
00:23:08.619 16:04:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']'
00:23:08.619 16:04:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']'
00:23:08.619 16:04:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@57 -- # local ret=1
00:23:08.619 16:04:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local i
00:23:08.619 16:04:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i = 10 ))
00:23:08.619 16:04:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 ))
00:23:08.619 16:04:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1
00:23:08.619 16:04:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops'
00:23:08.619 16:04:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable
00:23:08.619 16:04:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x
00:23:08.619 16:04:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:23:08.619 16:04:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=131
00:23:08.619 16:04:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 131 -ge 100 ']'
00:23:08.619 16:04:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # ret=0
00:23:08.619 16:04:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # break
00:23:08.619 16:04:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@69 -- # return 0
00:23:08.619 16:04:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@135 -- # killprocess 3830417
00:23:08.619 16:04:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@948 -- # '[' -z 3830417 ']'
00:23:08.619 16:04:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@952 -- # kill -0 3830417
00:23:08.619 16:04:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@953 -- # uname
00:23:08.619 16:04:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:23:08.619 16:04:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3830417
00:23:08.619 16:04:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # process_name=reactor_1
00:23:08.619 16:04:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
00:23:08.619 16:04:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3830417'
00:23:08.619 killing process with pid 3830417
00:23:08.619 16:04:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@967 -- # kill 3830417
00:23:08.619 16:04:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@972 -- # wait 3830417
00:23:08.619 [2024-07-15 16:04:37.443717] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d85ad0 is same with the state(5) to be set
[the tcp.c:1621 nvmf_tcp_qpair_set_recv_state *ERROR* line above repeats continuously between 16:04:37.443717 and 16:04:37.454307, cycling through tqpair=0x1d85ad0, 0x1f69a00, 0x1d86d70, 0x1d870e0, 0x1d87580 and 0x1d87a40 while the bdevperf qpairs are torn down; the duplicate lines are elided here, and the excerpt is truncated mid-run]
16:04:37.454313] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d87a40 is same with the state(5) to be set 00:23:08.622 [2024-07-15 16:04:37.454319] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d87a40 is same with the state(5) to be set 00:23:08.622 [2024-07-15 16:04:37.454325] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d87a40 is same with the state(5) to be set 00:23:08.622 [2024-07-15 16:04:37.454331] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d87a40 is same with the state(5) to be set 00:23:08.622 [2024-07-15 16:04:37.454337] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d87a40 is same with the state(5) to be set 00:23:08.622 [2024-07-15 16:04:37.454343] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d87a40 is same with the state(5) to be set 00:23:08.622 [2024-07-15 16:04:37.454349] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d87a40 is same with the state(5) to be set 00:23:08.622 [2024-07-15 16:04:37.454355] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d87a40 is same with the state(5) to be set 00:23:08.622 [2024-07-15 16:04:37.454361] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d87a40 is same with the state(5) to be set 00:23:08.622 [2024-07-15 16:04:37.454367] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d87a40 is same with the state(5) to be set 00:23:08.622 [2024-07-15 16:04:37.454373] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d87a40 is same with the state(5) to be set 00:23:08.622 [2024-07-15 16:04:37.454947] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d87ee0 is same with the state(5) to be set 00:23:08.622 [2024-07-15 16:04:37.454960] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d87ee0 is same with the state(5) to be set 00:23:08.622 [2024-07-15 16:04:37.454967] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d87ee0 is same with the state(5) to be set 00:23:08.622 [2024-07-15 16:04:37.454972] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d87ee0 is same with the state(5) to be set 00:23:08.622 [2024-07-15 16:04:37.454979] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d87ee0 is same with the state(5) to be set 00:23:08.622 [2024-07-15 16:04:37.454985] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d87ee0 is same with the state(5) to be set 00:23:08.622 [2024-07-15 16:04:37.454991] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d87ee0 is same with the state(5) to be set 00:23:08.622 [2024-07-15 16:04:37.454997] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d87ee0 is same with the state(5) to be set 00:23:08.622 [2024-07-15 16:04:37.455003] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d87ee0 is same with the state(5) to be set 00:23:08.622 [2024-07-15 16:04:37.455009] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d87ee0 is same with the state(5) to be set 00:23:08.622 [2024-07-15 16:04:37.455015] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d87ee0 is same 
with the state(5) to be set 00:23:08.622 [2024-07-15 16:04:37.455020] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d87ee0 is same with the state(5) to be set 00:23:08.622 [2024-07-15 16:04:37.455028] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d87ee0 is same with the state(5) to be set 00:23:08.622 [2024-07-15 16:04:37.455035] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d87ee0 is same with the state(5) to be set 00:23:08.622 [2024-07-15 16:04:37.455041] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d87ee0 is same with the state(5) to be set 00:23:08.622 [2024-07-15 16:04:37.455047] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d87ee0 is same with the state(5) to be set 00:23:08.622 [2024-07-15 16:04:37.455053] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d87ee0 is same with the state(5) to be set 00:23:08.622 [2024-07-15 16:04:37.455059] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d87ee0 is same with the state(5) to be set 00:23:08.622 [2024-07-15 16:04:37.455065] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d87ee0 is same with the state(5) to be set 00:23:08.622 [2024-07-15 16:04:37.455071] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d87ee0 is same with the state(5) to be set 00:23:08.622 [2024-07-15 16:04:37.455077] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d87ee0 is same with the state(5) to be set 00:23:08.622 [2024-07-15 16:04:37.455084] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d87ee0 is same with the state(5) to be set 00:23:08.622 [2024-07-15 16:04:37.455090] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d87ee0 is same with the state(5) to be set 00:23:08.622 [2024-07-15 16:04:37.455096] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d87ee0 is same with the state(5) to be set 00:23:08.622 [2024-07-15 16:04:37.455101] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d87ee0 is same with the state(5) to be set 00:23:08.622 [2024-07-15 16:04:37.455107] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d87ee0 is same with the state(5) to be set 00:23:08.622 [2024-07-15 16:04:37.455113] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d87ee0 is same with the state(5) to be set 00:23:08.622 [2024-07-15 16:04:37.455119] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d87ee0 is same with the state(5) to be set 00:23:08.622 [2024-07-15 16:04:37.455124] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d87ee0 is same with the state(5) to be set 00:23:08.622 [2024-07-15 16:04:37.455131] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d87ee0 is same with the state(5) to be set 00:23:08.622 [2024-07-15 16:04:37.455136] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d87ee0 is same with the state(5) to be set 00:23:08.622 [2024-07-15 16:04:37.455142] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d87ee0 is same with the state(5) to be set 00:23:08.622 [2024-07-15 16:04:37.455148] 
tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d87ee0 is same with the state(5) to be set 00:23:08.622 [2024-07-15 16:04:37.455155] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d87ee0 is same with the state(5) to be set 00:23:08.622 [2024-07-15 16:04:37.455161] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d87ee0 is same with the state(5) to be set 00:23:08.622 [2024-07-15 16:04:37.455167] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d87ee0 is same with the state(5) to be set 00:23:08.622 [2024-07-15 16:04:37.455173] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d87ee0 is same with the state(5) to be set 00:23:08.622 [2024-07-15 16:04:37.455179] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d87ee0 is same with the state(5) to be set 00:23:08.622 [2024-07-15 16:04:37.455185] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d87ee0 is same with the state(5) to be set 00:23:08.622 [2024-07-15 16:04:37.455193] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d87ee0 is same with the state(5) to be set 00:23:08.622 [2024-07-15 16:04:37.455199] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d87ee0 is same with the state(5) to be set 00:23:08.622 [2024-07-15 16:04:37.455205] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d87ee0 is same with the state(5) to be set 00:23:08.622 [2024-07-15 16:04:37.455211] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d87ee0 is same with the state(5) to be set 00:23:08.622 [2024-07-15 16:04:37.455217] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d87ee0 is same with the state(5) to be set 00:23:08.622 [2024-07-15 16:04:37.455222] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d87ee0 is same with the state(5) to be set 00:23:08.622 [2024-07-15 16:04:37.455233] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d87ee0 is same with the state(5) to be set 00:23:08.622 [2024-07-15 16:04:37.455239] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d87ee0 is same with the state(5) to be set 00:23:08.622 [2024-07-15 16:04:37.455245] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d87ee0 is same with the state(5) to be set 00:23:08.622 [2024-07-15 16:04:37.455253] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d87ee0 is same with the state(5) to be set 00:23:08.622 [2024-07-15 16:04:37.455259] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d87ee0 is same with the state(5) to be set 00:23:08.622 [2024-07-15 16:04:37.455265] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d87ee0 is same with the state(5) to be set 00:23:08.622 [2024-07-15 16:04:37.455271] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d87ee0 is same with the state(5) to be set 00:23:08.622 [2024-07-15 16:04:37.455277] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d87ee0 is same with the state(5) to be set 00:23:08.622 [2024-07-15 16:04:37.455284] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d87ee0 is same with the 
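The flood above comes from a single guard in SPDK's NVMe-oF TCP transport: nvmf_tcp_qpair_set_recv_state() refuses to re-set a state the qpair is already in, and it logs the request at ERROR level, so a teardown path that asks for the same state on every poll emits one line per call. Below is a minimal sketch of that guard, simplified from lib/nvmf/tcp.c; the type definitions are abbreviated stand-ins rather than the real headers, and the enum layout (which would make state(5) the error state here) varies by SPDK revision.

#include "spdk/log.h"  /* SPDK_ERRLOG */

/* Abbreviated stand-ins for SPDK-internal types, not the real headers. */
enum nvme_tcp_pdu_recv_state {
	NVME_TCP_PDU_RECV_STATE_AWAIT_PDU_READY = 0,
	NVME_TCP_PDU_RECV_STATE_AWAIT_PDU_CH,
	NVME_TCP_PDU_RECV_STATE_AWAIT_PDU_PSH,
	NVME_TCP_PDU_RECV_STATE_AWAIT_PDU_PAYLOAD,
	NVME_TCP_PDU_RECV_STATE_QUIESCING,
	NVME_TCP_PDU_RECV_STATE_ERROR, /* state(5) in this layout */
};

struct spdk_nvmf_tcp_qpair {
	enum nvme_tcp_pdu_recv_state recv_state;
	/* remaining fields elided */
};

static void
nvmf_tcp_qpair_set_recv_state(struct spdk_nvmf_tcp_qpair *tqpair,
			      enum nvme_tcp_pdu_recv_state state)
{
	if (tqpair->recv_state == state) {
		/* Re-setting the current state is a no-op, but it is logged
		 * at ERROR level, hence the repeated line above. */
		SPDK_ERRLOG("The recv state of tqpair=%p is same with the state(%d) to be set\n",
			    tqpair, (int)state);
		return;
	}
	tqpair->recv_state = state;
	/* per-state bookkeeping elided */
}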
00:23:08.623 [2024-07-15 16:04:37.456149 .. 16:04:37.457120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36..63 nsid:1 lba:29184..32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, then READ sqid:1 cid:0..35 nsid:1 lba:24576..29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 (lba stepping by 128 per cid); each command followed by nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:08.623 [2024-07-15 16:04:37.457147] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:23:08.623 [2024-07-15 16:04:37.457201] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x18e6920 was disconnected and freed. reset controller.
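The dump above is SPDK draining a dying I/O qpair: every outstanding WRITE and READ is printed together with its forced completion status "ABORTED - SQ DELETION (00/08)", where the pair reads (status code type / status code): SCT 0x0 is the generic command status set and SC 0x08 means the command was aborted because its submission queue was deleted. The CQ transport error -6 (ENXIO) then drives the disconnect callback, and bdev_nvme frees the qpair and resets the controller. A hedged sketch of classifying such a completion with the public SPDK API (the helper name is illustrative, not an SPDK function):

#include <stdbool.h>
#include "spdk/nvme.h" /* struct spdk_nvme_cpl and status code enums */

/* Returns true when a completion carries the (00/08) status printed
 * above: generic status code type, aborted due to SQ deletion. */
static bool
io_aborted_by_sq_deletion(const struct spdk_nvme_cpl *cpl)
{
	return cpl->status.sct == SPDK_NVME_SCT_GENERIC &&
	       cpl->status.sc == SPDK_NVME_SC_ABORTED_SQ_DELETION;
}

Completions like these indicate a path failure rather than a media or data error, which is consistent with what follows in the log: the qpair is freed and the controller reset instead of the I/O being failed up the stack.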
00:23:08.623 [2024-07-15 16:04:37.457408 .. 16:04:37.458303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12..63 nsid:1 lba:26112..32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, then READ sqid:1 cid:0..7 nsid:1 lba:24576..25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 (lba stepping by 128 per cid); each command followed by nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:08.624 [2024-07-15 16:04:37.458311] nvme_qpair.c:
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:08.624 [2024-07-15 16:04:37.458317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:08.624 [2024-07-15 16:04:37.458325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:08.624 [2024-07-15 16:04:37.458333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:08.624 [2024-07-15 16:04:37.458341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:08.624 [2024-07-15 16:04:37.458347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:08.624 [2024-07-15 16:04:37.458355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:08.624 [2024-07-15 16:04:37.458363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:08.624 [2024-07-15 16:04:37.458384] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:23:08.624 [2024-07-15 16:04:37.458433] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1817b70 was disconnected and freed. reset controller.
00:23:08.624 [2024-07-15 16:04:37.458765] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:23:08.624 [2024-07-15 16:04:37.458783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:08.624 [2024-07-15 16:04:37.458790] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:23:08.624 [2024-07-15 16:04:37.458797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:08.624 [2024-07-15 16:04:37.458805] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:23:08.624 [2024-07-15 16:04:37.458811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:08.624 [2024-07-15 16:04:37.458819] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:23:08.624 [2024-07-15 16:04:37.458825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:08.624 [2024-07-15 16:04:37.458832] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19d28b0 is same with the state(5) to be set
00:23:08.624 [2024-07-15 16:04:37.458858] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:23:08.624 [2024-07-15 16:04:37.458867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.624 [2024-07-15 16:04:37.458874] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:08.624 [2024-07-15 16:04:37.458880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.624 [2024-07-15 16:04:37.458887] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:08.624 [2024-07-15 16:04:37.458894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.624 [2024-07-15 16:04:37.458901] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:08.624 [2024-07-15 16:04:37.458908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.624 [2024-07-15 16:04:37.458914] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x181dc70 is same with the state(5) to be set 00:23:08.624 [2024-07-15 16:04:37.458935] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:08.624 [2024-07-15 16:04:37.458945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.624 [2024-07-15 16:04:37.458952] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:08.624 [2024-07-15 16:04:37.458959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.624 [2024-07-15 16:04:37.458966] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:08.624 [2024-07-15 16:04:37.458973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.624 [2024-07-15 16:04:37.458980] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:08.624 [2024-07-15 16:04:37.458986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.624 [2024-07-15 16:04:37.458992] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x136c340 is same with the state(5) to be set 00:23:08.624 [2024-07-15 16:04:37.459015] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:08.624 [2024-07-15 16:04:37.459023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.624 [2024-07-15 16:04:37.459031] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:08.624 [2024-07-15 16:04:37.459037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.624 [2024-07-15 16:04:37.459043] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:08.624 [2024-07-15 16:04:37.459050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.624 [2024-07-15 16:04:37.459057] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:08.624 [2024-07-15 16:04:37.459063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.624 [2024-07-15 16:04:37.459069] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e90d0 is same with the state(5) to be set 00:23:08.624 [2024-07-15 16:04:37.459095] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:08.624 [2024-07-15 16:04:37.459103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.624 [2024-07-15 16:04:37.459110] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:08.624 [2024-07-15 16:04:37.459116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.624 [2024-07-15 16:04:37.459123] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:08.624 [2024-07-15 16:04:37.459129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.624 [2024-07-15 16:04:37.459135] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:08.624 [2024-07-15 16:04:37.459142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.624 [2024-07-15 16:04:37.459154] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1840190 is same with the state(5) to be set 00:23:08.624 [2024-07-15 16:04:37.459178] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:08.624 [2024-07-15 16:04:37.459186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.625 [2024-07-15 16:04:37.459193] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:08.625 [2024-07-15 16:04:37.459200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.625 [2024-07-15 16:04:37.459207] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:08.625 [2024-07-15 16:04:37.459213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.625 [2024-07-15 16:04:37.459220] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 
cdw10:00000000 cdw11:00000000 00:23:08.625 [2024-07-15 16:04:37.459233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.625 [2024-07-15 16:04:37.459239] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1861b30 is same with the state(5) to be set 00:23:08.625 [2024-07-15 16:04:37.459262] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:08.625 [2024-07-15 16:04:37.459270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.625 [2024-07-15 16:04:37.459277] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:08.625 [2024-07-15 16:04:37.459283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.625 [2024-07-15 16:04:37.459290] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:08.625 [2024-07-15 16:04:37.459297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.625 [2024-07-15 16:04:37.459303] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:08.625 [2024-07-15 16:04:37.459310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.625 [2024-07-15 16:04:37.459316] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x185a1d0 is same with the state(5) to be set 00:23:08.625 [2024-07-15 16:04:37.459334] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:08.625 [2024-07-15 16:04:37.459342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.625 [2024-07-15 16:04:37.459348] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:08.625 [2024-07-15 16:04:37.459355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.625 [2024-07-15 16:04:37.459362] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:08.625 [2024-07-15 16:04:37.459369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.625 [2024-07-15 16:04:37.459375] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:08.625 [2024-07-15 16:04:37.459383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.625 [2024-07-15 16:04:37.459389] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19f2050 is same with the state(5) to be set 00:23:08.625 [2024-07-15 16:04:37.459407] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:08.625 [2024-07-15 16:04:37.459415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.625 [2024-07-15 16:04:37.459422] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:08.625 [2024-07-15 16:04:37.459429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.625 [2024-07-15 16:04:37.459435] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:08.625 [2024-07-15 16:04:37.459442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.625 [2024-07-15 16:04:37.459449] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:08.625 [2024-07-15 16:04:37.459456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.625 [2024-07-15 16:04:37.459462] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1864bf0 is same with the state(5) to be set 00:23:08.625 [2024-07-15 16:04:37.459485] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:08.625 [2024-07-15 16:04:37.459493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.625 [2024-07-15 16:04:37.459500] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:08.625 [2024-07-15 16:04:37.459506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.625 [2024-07-15 16:04:37.459513] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:08.625 [2024-07-15 16:04:37.459521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.625 [2024-07-15 16:04:37.459528] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:08.625 [2024-07-15 16:04:37.459535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.625 [2024-07-15 16:04:37.459541] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e98d0 is same with the state(5) to be set 00:23:08.625 [2024-07-15 16:04:37.459626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.625 [2024-07-15 16:04:37.459637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.625 [2024-07-15 16:04:37.459648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.625 [2024-07-15 16:04:37.459655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.625 [2024-07-15 16:04:37.459663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.625 [2024-07-15 16:04:37.459672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.625 [2024-07-15 16:04:37.459680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.625 [2024-07-15 16:04:37.459686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.625 [2024-07-15 16:04:37.459695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.625 [2024-07-15 16:04:37.459701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.625 [2024-07-15 16:04:37.459709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.625 [2024-07-15 16:04:37.459716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.625 [2024-07-15 16:04:37.459724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.625 [2024-07-15 16:04:37.459730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.625 [2024-07-15 16:04:37.459738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.625 [2024-07-15 16:04:37.459744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.625 [2024-07-15 16:04:37.459752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.625 [2024-07-15 16:04:37.459758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.625 [2024-07-15 16:04:37.459766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.625 [2024-07-15 16:04:37.459773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.625 [2024-07-15 16:04:37.459780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.625 [2024-07-15 16:04:37.459787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.625 [2024-07-15 16:04:37.459795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:23:08.625 [2024-07-15 16:04:37.459801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.625 [2024-07-15 16:04:37.459809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.625 [2024-07-15 16:04:37.459815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.625 [2024-07-15 16:04:37.459825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.625 [2024-07-15 16:04:37.459832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.625 [2024-07-15 16:04:37.459840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.625 [2024-07-15 16:04:37.459846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.625 [2024-07-15 16:04:37.459856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.625 [2024-07-15 16:04:37.459862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.625 [2024-07-15 16:04:37.459870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.625 [2024-07-15 16:04:37.459877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.625 [2024-07-15 16:04:37.459885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.625 [2024-07-15 16:04:37.459891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.625 [2024-07-15 16:04:37.459899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.625 [2024-07-15 16:04:37.459906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.625 [2024-07-15 16:04:37.459914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.625 [2024-07-15 16:04:37.459920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.625 [2024-07-15 16:04:37.459928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.625 [2024-07-15 16:04:37.459934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.625 [2024-07-15 16:04:37.459942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:23:08.625 [2024-07-15 16:04:37.459948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.625 [2024-07-15 16:04:37.459956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.625 [2024-07-15 16:04:37.459963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.625 [2024-07-15 16:04:37.459970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.625 [2024-07-15 16:04:37.459976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.625 [2024-07-15 16:04:37.459984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.625 [2024-07-15 16:04:37.459990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.625 [2024-07-15 16:04:37.459998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.625 [2024-07-15 16:04:37.460005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.625 [2024-07-15 16:04:37.460013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.625 [2024-07-15 16:04:37.460019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.625 [2024-07-15 16:04:37.460027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.625 [2024-07-15 16:04:37.460034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.625 [2024-07-15 16:04:37.464595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.625 [2024-07-15 16:04:37.464605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.625 [2024-07-15 16:04:37.464616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.625 [2024-07-15 16:04:37.464623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.625 [2024-07-15 16:04:37.464632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.625 [2024-07-15 16:04:37.464638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.625 [2024-07-15 16:04:37.464646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:23:08.625 [2024-07-15 16:04:37.464653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.625 [2024-07-15 16:04:37.464661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.625 [2024-07-15 16:04:37.464667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.625 [2024-07-15 16:04:37.464676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.625 [2024-07-15 16:04:37.464683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.625 [2024-07-15 16:04:37.464691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.625 [2024-07-15 16:04:37.464697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.625 [2024-07-15 16:04:37.464705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.625 [2024-07-15 16:04:37.464712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.625 [2024-07-15 16:04:37.464720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.625 [2024-07-15 16:04:37.464727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.625 [2024-07-15 16:04:37.464735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.625 [2024-07-15 16:04:37.464741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.625 [2024-07-15 16:04:37.464750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.625 [2024-07-15 16:04:37.464756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.625 [2024-07-15 16:04:37.464764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.625 [2024-07-15 16:04:37.464771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.625 [2024-07-15 16:04:37.464781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.625 [2024-07-15 16:04:37.464787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.625 [2024-07-15 16:04:37.464796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.625 
[2024-07-15 16:04:37.464802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.625 [2024-07-15 16:04:37.464810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.625 [2024-07-15 16:04:37.464817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.625 [2024-07-15 16:04:37.464824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.625 [2024-07-15 16:04:37.464831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.625 [2024-07-15 16:04:37.464839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.625 [2024-07-15 16:04:37.464846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.625 [2024-07-15 16:04:37.464854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.625 [2024-07-15 16:04:37.464860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.625 [2024-07-15 16:04:37.464869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.625 [2024-07-15 16:04:37.464875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.625 [2024-07-15 16:04:37.464884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.625 [2024-07-15 16:04:37.464890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.625 [2024-07-15 16:04:37.464898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.625 [2024-07-15 16:04:37.464904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.625 [2024-07-15 16:04:37.464913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.625 [2024-07-15 16:04:37.464919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.625 [2024-07-15 16:04:37.464928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.625 [2024-07-15 16:04:37.464934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.625 [2024-07-15 16:04:37.464942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.625 [2024-07-15 
16:04:37.464949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.625 [2024-07-15 16:04:37.464957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.625 [2024-07-15 16:04:37.464965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.625 [2024-07-15 16:04:37.464973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.625 [2024-07-15 16:04:37.464979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.625 [2024-07-15 16:04:37.464988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.625 [2024-07-15 16:04:37.464994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.625 [2024-07-15 16:04:37.465002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.625 [2024-07-15 16:04:37.465009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.625 [2024-07-15 16:04:37.465017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.625 [2024-07-15 16:04:37.465023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.625 [2024-07-15 16:04:37.465031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.626 [2024-07-15 16:04:37.465038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.626 [2024-07-15 16:04:37.465048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.626 [2024-07-15 16:04:37.465054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.626 [2024-07-15 16:04:37.465062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.626 [2024-07-15 16:04:37.465069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.626 [2024-07-15 16:04:37.465077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.626 [2024-07-15 16:04:37.465084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.626 [2024-07-15 16:04:37.465091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.626 [2024-07-15 
16:04:37.465098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:08.626 [2024-07-15 16:04:37.465106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:08.626 [2024-07-15 16:04:37.465112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:08.626 [2024-07-15 16:04:37.465120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:08.626 [2024-07-15 16:04:37.465127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:08.626 [2024-07-15 16:04:37.465193] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x18e5490 was disconnected and freed. reset controller.
00:23:08.626 [2024-07-15 16:04:37.467363] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller
00:23:08.626 [2024-07-15 16:04:37.467391] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller
00:23:08.626 [2024-07-15 16:04:37.467407] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x185a1d0 (9): Bad file descriptor
00:23:08.626 [2024-07-15 16:04:37.467418] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19f2050 (9): Bad file descriptor
00:23:08.626 [2024-07-15 16:04:37.468776] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller
00:23:08.626 [2024-07-15 16:04:37.468804] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e98d0 (9): Bad file descriptor
00:23:08.626 [2024-07-15 16:04:37.468828] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19d28b0 (9): Bad file descriptor
00:23:08.626 [2024-07-15 16:04:37.468845] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x181dc70 (9): Bad file descriptor
00:23:08.626 [2024-07-15 16:04:37.468860] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x136c340 (9): Bad file descriptor
00:23:08.626 [2024-07-15 16:04:37.468874] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e90d0 (9): Bad file descriptor
00:23:08.626 [2024-07-15 16:04:37.468890] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1840190 (9): Bad file descriptor
00:23:08.626 [2024-07-15 16:04:37.468906] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1861b30 (9): Bad file descriptor
00:23:08.626 [2024-07-15 16:04:37.468923] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1864bf0 (9): Bad file descriptor
00:23:08.626 [2024-07-15 16:04:37.469888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:08.626 [2024-07-15 16:04:37.469910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19f2050 with addr=10.0.0.2, port=4420
00:23:08.626 [2024-07-15 16:04:37.469919] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19f2050 is same with the state(5) to be set
00:23:08.626 [2024-07-15 16:04:37.470096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:08.626 [2024-07-15 16:04:37.470107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185a1d0 with addr=10.0.0.2, port=4420
00:23:08.626 [2024-07-15 16:04:37.470115] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x185a1d0 is same with the state(5) to be set
00:23:08.626 [2024-07-15 16:04:37.470461] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:23:08.626 [2024-07-15 16:04:37.470511] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:23:08.626 [2024-07-15 16:04:37.470556] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:23:08.626 [2024-07-15 16:04:37.470597] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:23:08.626 [2024-07-15 16:04:37.470638] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:23:08.626 [2024-07-15 16:04:37.470681] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:23:08.626 [2024-07-15 16:04:37.470819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:08.626 [2024-07-15 16:04:37.470832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e98d0 with addr=10.0.0.2, port=4420
00:23:08.626 [2024-07-15 16:04:37.470840] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e98d0 is same with the state(5) to be set
00:23:08.626 [2024-07-15 16:04:37.470850] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19f2050 (9): Bad file descriptor
00:23:08.626 [2024-07-15 16:04:37.470859] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x185a1d0 (9): Bad file descriptor
00:23:08.626 [2024-07-15 16:04:37.470900] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:23:08.626 [2024-07-15 16:04:37.470986] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e98d0 (9): Bad file descriptor
00:23:08.626 [2024-07-15 16:04:37.471000] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state
00:23:08.626 [2024-07-15 16:04:37.471007] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed
00:23:08.626 [2024-07-15 16:04:37.471014] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state.
00:23:08.626 [2024-07-15 16:04:37.471026] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state
00:23:08.626 [2024-07-15 16:04:37.471033] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed
00:23:08.626 [2024-07-15 16:04:37.471039] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state.
00:23:08.626 [2024-07-15 16:04:37.471059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.626 [2024-07-15 16:04:37.471068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.626 [2024-07-15 16:04:37.471080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.626 [2024-07-15 16:04:37.471087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.626 [2024-07-15 16:04:37.471096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.626 [2024-07-15 16:04:37.471103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.626 [2024-07-15 16:04:37.471111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.626 [2024-07-15 16:04:37.471118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.626 [2024-07-15 16:04:37.471126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.626 [2024-07-15 16:04:37.471133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.626 [2024-07-15 16:04:37.471141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.626 [2024-07-15 16:04:37.471148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.626 [2024-07-15 16:04:37.471156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.626 [2024-07-15 16:04:37.471162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.626 [2024-07-15 16:04:37.471171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.626 [2024-07-15 16:04:37.471177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.626 [2024-07-15 16:04:37.471186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.626 [2024-07-15 16:04:37.471193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.626 [2024-07-15 16:04:37.471202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.626 [2024-07-15 16:04:37.471208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.626 [2024-07-15 
16:04:37.471219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:08.626 [2024-07-15 16:04:37.471233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... identical READ / ABORTED - SQ DELETION (00/08) pairs repeat for cid:11-63, nsid:1, lba:25984-32640, len:128 ...]
00:23:08.627 [2024-07-15 16:04:37.472028] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x195eea0 is same with the state(5) to be set
00:23:08.627 [2024-07-15 16:04:37.472096] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x195eea0 was disconnected and freed. reset controller.
00:23:08.627 [2024-07-15 16:04:37.472140] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:08.627 [2024-07-15 16:04:37.472148] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:08.627 [2024-07-15 16:04:37.472156] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state
00:23:08.627 [2024-07-15 16:04:37.472162] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed
00:23:08.627 [2024-07-15 16:04:37.472168] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state.
00:23:08.627 [2024-07-15 16:04:37.473159] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:08.627 [2024-07-15 16:04:37.473170] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:08.627 [2024-07-15 16:04:37.473482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:08.627 [2024-07-15 16:04:37.473495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x181dc70 with addr=10.0.0.2, port=4420
00:23:08.627 [2024-07-15 16:04:37.473503] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x181dc70 is same with the state(5) to be set
00:23:08.627 [2024-07-15 16:04:37.473758] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x181dc70 (9): Bad file descriptor
00:23:08.627 [2024-07-15 16:04:37.473797] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:08.627 [2024-07-15 16:04:37.473804] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:08.627 [2024-07-15 16:04:37.473811] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:08.627 [2024-07-15 16:04:37.473850] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:08.627 [2024-07-15 16:04:37.478900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:08.627 [2024-07-15 16:04:37.478913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... identical READ / ABORTED - SQ DELETION (00/08) pairs repeat for cid:1-63, nsid:1, lba:16512-24448, len:128 ...]
00:23:08.628 [2024-07-15 16:04:37.479834] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1819040 is same with the state(5) to be set
00:23:08.628 [2024-07-15 16:04:37.480841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:08.628 [2024-07-15 16:04:37.480853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... identical READ / ABORTED - SQ DELETION (00/08) pairs repeat for cid:1-63, nsid:1, lba:16512-24448, len:128 ...]
00:23:08.628 [2024-07-15 16:04:37.481786] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x195f910 is same with the state(5) to be set
00:23:08.628 [2024-07-15 16:04:37.482804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:08.628 [2024-07-15 16:04:37.482818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... identical READ / ABORTED - SQ DELETION (00/08) pairs repeat for cid:1-17, nsid:1, lba:16512-18560, len:128 ...]
00:23:08.629 [2024-07-15 16:04:37.483076] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.629 [2024-07-15 16:04:37.483083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.629 [2024-07-15 16:04:37.483090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.629 [2024-07-15 16:04:37.483097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.629 [2024-07-15 16:04:37.483105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.629 [2024-07-15 16:04:37.483111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.629 [2024-07-15 16:04:37.483119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.629 [2024-07-15 16:04:37.483125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.629 [2024-07-15 16:04:37.483133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.629 [2024-07-15 16:04:37.483140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.629 [2024-07-15 16:04:37.483148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.629 [2024-07-15 16:04:37.483156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.629 [2024-07-15 16:04:37.483163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.629 [2024-07-15 16:04:37.483170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.629 [2024-07-15 16:04:37.483178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.629 [2024-07-15 16:04:37.483184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.629 [2024-07-15 16:04:37.483192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.629 [2024-07-15 16:04:37.483198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.629 [2024-07-15 16:04:37.483206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.629 [2024-07-15 16:04:37.483212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.629 [2024-07-15 16:04:37.483221] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.629 [2024-07-15 16:04:37.483230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.629 [2024-07-15 16:04:37.483239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.629 [2024-07-15 16:04:37.483245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.629 [2024-07-15 16:04:37.483253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.629 [2024-07-15 16:04:37.483260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.629 [2024-07-15 16:04:37.483267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.629 [2024-07-15 16:04:37.483274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.629 [2024-07-15 16:04:37.483282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.629 [2024-07-15 16:04:37.483288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.629 [2024-07-15 16:04:37.483296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.629 [2024-07-15 16:04:37.483303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.629 [2024-07-15 16:04:37.483311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.629 [2024-07-15 16:04:37.483317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.629 [2024-07-15 16:04:37.483325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.629 [2024-07-15 16:04:37.483331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.629 [2024-07-15 16:04:37.483341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.629 [2024-07-15 16:04:37.483347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.629 [2024-07-15 16:04:37.483356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.629 [2024-07-15 16:04:37.483362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.629 [2024-07-15 16:04:37.483370] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.629 [2024-07-15 16:04:37.483377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.629 [2024-07-15 16:04:37.483385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.629 [2024-07-15 16:04:37.483391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.629 [2024-07-15 16:04:37.483399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.629 [2024-07-15 16:04:37.483405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.629 [2024-07-15 16:04:37.483413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.629 [2024-07-15 16:04:37.483420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.629 [2024-07-15 16:04:37.483428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.629 [2024-07-15 16:04:37.483434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.629 [2024-07-15 16:04:37.483442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.629 [2024-07-15 16:04:37.483448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.629 [2024-07-15 16:04:37.483455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.629 [2024-07-15 16:04:37.483462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.629 [2024-07-15 16:04:37.483469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.629 [2024-07-15 16:04:37.483475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.629 [2024-07-15 16:04:37.483483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.629 [2024-07-15 16:04:37.483489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.629 [2024-07-15 16:04:37.483497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.629 [2024-07-15 16:04:37.483504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.629 [2024-07-15 16:04:37.483512] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.629 [2024-07-15 16:04:37.483519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.629 [2024-07-15 16:04:37.483527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.629 [2024-07-15 16:04:37.483533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.629 [2024-07-15 16:04:37.483541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.629 [2024-07-15 16:04:37.483547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.629 [2024-07-15 16:04:37.483555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.629 [2024-07-15 16:04:37.483562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.629 [2024-07-15 16:04:37.483570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.629 [2024-07-15 16:04:37.483576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.629 [2024-07-15 16:04:37.483584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.629 [2024-07-15 16:04:37.483591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.629 [2024-07-15 16:04:37.483598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.629 [2024-07-15 16:04:37.483605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.629 [2024-07-15 16:04:37.483613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.629 [2024-07-15 16:04:37.483619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.629 [2024-07-15 16:04:37.483627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.629 [2024-07-15 16:04:37.483633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.629 [2024-07-15 16:04:37.483642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.629 [2024-07-15 16:04:37.483649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.629 [2024-07-15 16:04:37.483657] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.629 [2024-07-15 16:04:37.483664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.629 [2024-07-15 16:04:37.483671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.629 [2024-07-15 16:04:37.483678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.629 [2024-07-15 16:04:37.483686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.629 [2024-07-15 16:04:37.483692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.629 [2024-07-15 16:04:37.483702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.629 [2024-07-15 16:04:37.483708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.629 [2024-07-15 16:04:37.483716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.629 [2024-07-15 16:04:37.483723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.629 [2024-07-15 16:04:37.483731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.629 [2024-07-15 16:04:37.483738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.629 [2024-07-15 16:04:37.483745] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1960de0 is same with the state(5) to be set 00:23:08.629 [2024-07-15 16:04:37.484748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.629 [2024-07-15 16:04:37.484762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.629 [2024-07-15 16:04:37.484773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.629 [2024-07-15 16:04:37.484780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.629 [2024-07-15 16:04:37.484788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.629 [2024-07-15 16:04:37.484795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.629 [2024-07-15 16:04:37.484803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.629 [2024-07-15 16:04:37.484810] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.629 [2024-07-15 16:04:37.484818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.629 [2024-07-15 16:04:37.484825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.629 [2024-07-15 16:04:37.484833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.629 [2024-07-15 16:04:37.484840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.629 [2024-07-15 16:04:37.484848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.629 [2024-07-15 16:04:37.484855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.629 [2024-07-15 16:04:37.484863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.629 [2024-07-15 16:04:37.484870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.629 [2024-07-15 16:04:37.484878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.629 [2024-07-15 16:04:37.484885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.629 [2024-07-15 16:04:37.484898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.629 [2024-07-15 16:04:37.484905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.629 [2024-07-15 16:04:37.484913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.629 [2024-07-15 16:04:37.484919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.629 [2024-07-15 16:04:37.484927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.629 [2024-07-15 16:04:37.484933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.629 [2024-07-15 16:04:37.484941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.629 [2024-07-15 16:04:37.484947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.629 [2024-07-15 16:04:37.484955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.629 [2024-07-15 16:04:37.484962] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.629 [2024-07-15 16:04:37.484971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.629 [2024-07-15 16:04:37.484978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.629 [2024-07-15 16:04:37.484986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.629 [2024-07-15 16:04:37.484993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.629 [2024-07-15 16:04:37.485002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:33152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.629 [2024-07-15 16:04:37.485008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.629 [2024-07-15 16:04:37.485017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:33280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.629 [2024-07-15 16:04:37.485023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.629 [2024-07-15 16:04:37.485031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.629 [2024-07-15 16:04:37.485037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.629 [2024-07-15 16:04:37.485045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.629 [2024-07-15 16:04:37.485052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.629 [2024-07-15 16:04:37.485060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.629 [2024-07-15 16:04:37.485067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.629 [2024-07-15 16:04:37.485075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.629 [2024-07-15 16:04:37.485085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.630 [2024-07-15 16:04:37.485094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.630 [2024-07-15 16:04:37.485100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.630 [2024-07-15 16:04:37.485108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.630 [2024-07-15 16:04:37.485115] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.630 [2024-07-15 16:04:37.485123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.630 [2024-07-15 16:04:37.485130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.630 [2024-07-15 16:04:37.485138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.630 [2024-07-15 16:04:37.485145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.630 [2024-07-15 16:04:37.485154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.630 [2024-07-15 16:04:37.485160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.630 [2024-07-15 16:04:37.485169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.630 [2024-07-15 16:04:37.485176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.630 [2024-07-15 16:04:37.485184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.630 [2024-07-15 16:04:37.485190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.630 [2024-07-15 16:04:37.485199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.630 [2024-07-15 16:04:37.485205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.630 [2024-07-15 16:04:37.485214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.630 [2024-07-15 16:04:37.485220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.630 [2024-07-15 16:04:37.485232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.630 [2024-07-15 16:04:37.485239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.630 [2024-07-15 16:04:37.485247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.630 [2024-07-15 16:04:37.485253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.630 [2024-07-15 16:04:37.485262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.630 [2024-07-15 16:04:37.485269] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.630 [2024-07-15 16:04:37.485279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.630 [2024-07-15 16:04:37.485286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.630 [2024-07-15 16:04:37.485294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.630 [2024-07-15 16:04:37.485300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.630 [2024-07-15 16:04:37.485309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.630 [2024-07-15 16:04:37.485315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.630 [2024-07-15 16:04:37.485323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.630 [2024-07-15 16:04:37.485331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.630 [2024-07-15 16:04:37.485340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.630 [2024-07-15 16:04:37.485347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.630 [2024-07-15 16:04:37.485355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.630 [2024-07-15 16:04:37.485361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.630 [2024-07-15 16:04:37.485369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.630 [2024-07-15 16:04:37.485376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.630 [2024-07-15 16:04:37.485384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.630 [2024-07-15 16:04:37.485391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.630 [2024-07-15 16:04:37.485399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.630 [2024-07-15 16:04:37.485405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.630 [2024-07-15 16:04:37.485414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.630 [2024-07-15 16:04:37.485420] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.630 [2024-07-15 16:04:37.485428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.630 [2024-07-15 16:04:37.485435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.630 [2024-07-15 16:04:37.485443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.630 [2024-07-15 16:04:37.485449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.630 [2024-07-15 16:04:37.485458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.630 [2024-07-15 16:04:37.485467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.630 [2024-07-15 16:04:37.485475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.630 [2024-07-15 16:04:37.485482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.630 [2024-07-15 16:04:37.485490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.630 [2024-07-15 16:04:37.485496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.630 [2024-07-15 16:04:37.485505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.630 [2024-07-15 16:04:37.485511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.630 [2024-07-15 16:04:37.485519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.630 [2024-07-15 16:04:37.485526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.630 [2024-07-15 16:04:37.485534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.630 [2024-07-15 16:04:37.485540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.630 [2024-07-15 16:04:37.485548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.630 [2024-07-15 16:04:37.485554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.630 [2024-07-15 16:04:37.485562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.630 [2024-07-15 16:04:37.485569] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.630 [2024-07-15 16:04:37.485577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.630 [2024-07-15 16:04:37.485584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.630 [2024-07-15 16:04:37.485592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.630 [2024-07-15 16:04:37.485598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.630 [2024-07-15 16:04:37.485606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.630 [2024-07-15 16:04:37.485612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.630 [2024-07-15 16:04:37.485620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.630 [2024-07-15 16:04:37.485627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.630 [2024-07-15 16:04:37.485635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.630 [2024-07-15 16:04:37.485641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.630 [2024-07-15 16:04:37.485650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.630 [2024-07-15 16:04:37.485657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.630 [2024-07-15 16:04:37.485664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.630 [2024-07-15 16:04:37.485671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.630 [2024-07-15 16:04:37.485679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.630 [2024-07-15 16:04:37.485685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.630 [2024-07-15 16:04:37.485693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.630 [2024-07-15 16:04:37.485701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.630 [2024-07-15 16:04:37.485708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.630 [2024-07-15 16:04:37.485715] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.630 [2024-07-15 16:04:37.485722] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19622b0 is same with the state(5) to be set 00:23:08.630 [2024-07-15 16:04:37.486723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.630 [2024-07-15 16:04:37.486737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.630 [2024-07-15 16:04:37.486749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.630 [2024-07-15 16:04:37.486756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.630 [2024-07-15 16:04:37.486765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.630 [2024-07-15 16:04:37.486771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.630 [2024-07-15 16:04:37.486779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.630 [2024-07-15 16:04:37.486785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.630 [2024-07-15 16:04:37.486793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.630 [2024-07-15 16:04:37.486800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.630 [2024-07-15 16:04:37.486808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.630 [2024-07-15 16:04:37.486815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.630 [2024-07-15 16:04:37.486823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.630 [2024-07-15 16:04:37.486830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.630 [2024-07-15 16:04:37.486840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.630 [2024-07-15 16:04:37.486847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.630 [2024-07-15 16:04:37.486855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.630 [2024-07-15 16:04:37.486861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.630 [2024-07-15 16:04:37.486869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.630 [2024-07-15 16:04:37.486876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.630 [2024-07-15 16:04:37.486883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.630 [2024-07-15 16:04:37.486890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.630 [2024-07-15 16:04:37.486898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.630 [2024-07-15 16:04:37.486904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.630 [2024-07-15 16:04:37.486913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.630 [2024-07-15 16:04:37.486919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.630 [2024-07-15 16:04:37.486927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.630 [2024-07-15 16:04:37.486933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.630 [2024-07-15 16:04:37.486942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.630 [2024-07-15 16:04:37.486948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.630 [2024-07-15 16:04:37.486956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.630 [2024-07-15 16:04:37.486962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.630 [2024-07-15 16:04:37.486970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.630 [2024-07-15 16:04:37.486976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.630 [2024-07-15 16:04:37.486984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.630 [2024-07-15 16:04:37.486990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.630 [2024-07-15 16:04:37.486998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:08.630 [2024-07-15 16:04:37.487004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.630 [2024-07-15 16:04:37.487012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 
lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:08.630 [2024-07-15 16:04:37.487020 - 16:04:37.487659] [log condensed: nvme_qpair.c repeats the same pair of records for every outstanding command on this queue — nvme_io_qpair_print_command READ sqid:1 cid:19-63 nsid:1, lba 18816-24448 in steps of 128, len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, each followed by spdk_nvme_print_completion ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0]
00:23:08.631 [2024-07-15 16:04:37.487666] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1946a60 is same with the state(5) to be set
00:23:08.631 [2024-07-15 16:04:37.490458 - 16:04:37.491404] [log condensed: the identical command/completion pattern for a second queue — READ sqid:1 cid:0-63 nsid:1, lba 16384-24448 in steps of 128, len:128, all 64 commands ABORTED - SQ DELETION (00/08)]
00:23:08.632 [2024-07-15 16:04:37.491411] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1947ef0 is same with the state(5) to be set
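[editor's note, not part of the CI output: every completion in the condensed dumps above carries the same status pair "(00/08)" — status code type 0x00 (generic command status) and status code 0x08 (command aborted due to SQ deletion) — plus the phase/more/do-not-retry bits printed as p/m/dnr. A minimal standalone sketch of that decoding, assuming only the NVMe base-spec layout of the 16-bit completion status halfword; this is not SPDK source.]

    /* Decode the 16-bit NVMe completion status field (high half of
     * CQE Dword 3) into the "(SCT/SC) ... p:N m:N dnr:N" form used by
     * the log above. Bit layout per the NVMe base spec:
     * bit 0 = phase tag, bits 8:1 = status code (SC),
     * bits 11:9 = status code type (SCT), bit 14 = more,
     * bit 15 = do not retry. */
    #include <stdint.h>
    #include <stdio.h>

    static void decode_status(uint16_t status)
    {
        unsigned p   = status & 0x1;
        unsigned sc  = (status >> 1) & 0xff;
        unsigned sct = (status >> 9) & 0x7;
        unsigned m   = (status >> 14) & 0x1;
        unsigned dnr = (status >> 15) & 0x1;

        printf("(%02x/%02x) p:%u m:%u dnr:%u\n", sct, sc, p, m, dnr);
        if (sct == 0x0 && sc == 0x08)
            puts("ABORTED - SQ DELETION"); /* generic status, SQ deleted */
    }

    int main(void)
    {
        decode_status(0x08 << 1); /* sct=0, sc=0x08, p/m/dnr clear, as logged */
        return 0;
    }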
00:23:08.632 [2024-07-15 16:04:37.493143] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller
00:23:08.632 [2024-07-15 16:04:37.493160] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller
00:23:08.632 [2024-07-15 16:04:37.493169] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller
00:23:08.632 [2024-07-15 16:04:37.493178] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller
00:23:08.632 [2024-07-15 16:04:37.493255] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:23:08.632 [2024-07-15 16:04:37.493270] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:23:08.632 [2024-07-15 16:04:37.493333] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting controller
00:23:08.632 task offset: 29184 on job bdev=Nvme3n1 fails
00:23:08.632
00:23:08.632 Latency(us)
00:23:08.632 Device Information : runtime(s)  IOPS     MiB/s   Fail/s   TO/s   Average     min        max
00:23:08.632 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:08.632 Job: Nvme1n1 ended in about 0.80 seconds with error
00:23:08.632 Verification LBA range: start 0x0 length 0x400
00:23:08.632 Nvme1n1  :      0.80     240.68   15.04    80.23    0.00   197166.47    4872.46   206979.78
00:23:08.632 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:08.632 Job: Nvme2n1 ended in about 0.79 seconds with error
00:23:08.632 Verification LBA range: start 0x0 length 0x400
00:23:08.632 Nvme2n1  :      0.79     242.09   15.13    80.70    0.00   192033.84   11853.47   217009.64
00:23:08.632 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:08.632 Job: Nvme3n1 ended in about 0.79 seconds with error
00:23:08.632 Verification LBA range: start 0x0 length 0x400
00:23:08.632 Nvme3n1  :      0.79     242.80   15.18    80.93    0.00   187469.91    9289.02   201508.95
00:23:08.632 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:08.632 Job: Nvme4n1 ended in about 0.79 seconds with error
00:23:08.632 Verification LBA range: start 0x0 length 0x400
00:23:08.632 Nvme4n1  :      0.79     242.52   15.16    80.84    0.00   183760.03    8947.09   217009.64
00:23:08.632 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:08.632 Job: Nvme5n1 ended in about 0.81 seconds with error
00:23:08.632 Verification LBA range: start 0x0 length 0x400
00:23:08.632 Nvme5n1  :      0.81     158.93    9.93    79.47    0.00   244352.22   17780.20   199685.34
00:23:08.632 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:08.632 Job: Nvme6n1 ended in about 0.81 seconds with error
00:23:08.632 Verification LBA range: start 0x0 length 0x400
00:23:08.632 Nvme6n1  :      0.81     158.55    9.91    79.28    0.00   239756.10   18464.06   220656.86
00:23:08.632 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:08.632 Job: Nvme7n1 ended in about 0.81 seconds with error
00:23:08.632 Verification LBA range: start 0x0 length 0x400
00:23:08.632 Nvme7n1  :      0.81     158.17    9.89    79.08    0.00   235108.92   28265.96   212450.62
00:23:08.632 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:08.632 Job: Nvme8n1 ended in about 0.81 seconds with error
00:23:08.632 Verification LBA range: start 0x0 length 0x400
00:23:08.632 Nvme8n1  :      0.81     242.84   15.18    78.89    0.00   169500.79   15158.76   210627.01
00:23:08.632 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:08.632 Job: Nvme9n1 ended in about 0.81 seconds with error
00:23:08.632 Verification LBA range: start 0x0 length 0x400
00:23:08.632 Nvme9n1  :      0.81     157.41    9.84    78.70    0.00   225855.81   17780.20   227951.30
00:23:08.632 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:08.632 Job: Nvme10n1 ended in about 0.82 seconds with error
00:23:08.632 Verification LBA range: start 0x0 length 0x400
00:23:08.632 Nvme10n1 :      0.82     156.69    9.79    78.34    0.00   221905.25   17552.25   244363.80
00:23:08.632 ===================================================================================================================
00:23:08.632 Total    :             2000.67  125.04   796.46    0.00   206222.58    4872.46   244363.80
00:23:08.632 [2024-07-15 16:04:37.519171] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:23:08.632 [2024-07-15 16:04:37.519211] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller
00:23:08.632 [2024-07-15 16:04:37.519571 - 16:04:37.520329] [log condensed: for each of tqpair=0x136c340, 0x1861b30, 0x1840190 and 0x1864bf0 — posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111; nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair with addr=10.0.0.2, port=4420; nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair is same with the state(5) to be set]
00:23:08.632 [2024-07-15 16:04:37.521714] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller
00:23:08.632 [2024-07-15 16:04:37.521732] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller
00:23:08.632 [2024-07-15 16:04:37.521741] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller
00:23:08.632 [2024-07-15 16:04:37.521749] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:08.632 [2024-07-15 16:04:37.522063 - 16:04:37.522343] [log condensed: the same connect() failed, errno = 111 / sock connection error / recv state triple for tqpair=0x19e90d0 and 0x19d28b0 (addr=10.0.0.2, port=4420)]
00:23:08.632 [2024-07-15 16:04:37.522355 - 16:04:37.522382] [log condensed: nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x136c340, 0x1861b30, 0x1840190, 0x1864bf0 (9): Bad file descriptor]
00:23:08.632 [2024-07-15 16:04:37.522414 - 16:04:37.522446] [log condensed: 4x bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.]
00:23:08.632 [2024-07-15 16:04:37.522752 - 16:04:37.523299] [log condensed: the connect() failed, errno = 111 triple again for tqpair=0x185a1d0, 0x19f2050, 0x19e98d0 and 0x181dc70 (addr=10.0.0.2, port=4420)]
00:23:08.632 [2024-07-15 16:04:37.523308] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e90d0 (9): Bad file descriptor
00:23:08.632 [2024-07-15 16:04:37.523316] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19d28b0 (9): Bad file descriptor
00:23:08.632 [2024-07-15 16:04:37.523324 - 16:04:37.523403] [log condensed: for each of cnode5, cnode6, cnode7 and cnode8 — nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: Ctrlr is in error state; nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: controller reinitialization failed; nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: in failed state.]
00:23:08.632 [2024-07-15 16:04:37.523467 - 16:04:37.523486] [log condensed: 4x bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.]
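[editor's note, not part of the CI output: the posix_sock_create records above end with connect() failed, errno = 111; on Linux errno 111 is ECONNREFUSED, which is consistent with host controllers still retrying 10.0.0.2:4420 after the target application has stopped listening. A minimal sketch of that failure path — illustrative only, with the address and port copied from the log:]

    /* Attempt a TCP connect the way the log's reconnect path does and
     * report errno on failure; against a dead listener this prints
     * "connect() failed, errno = 111 (Connection refused)". */
    #include <arpa/inet.h>
    #include <errno.h>
    #include <netinet/in.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        if (fd < 0)
            return 1;

        struct sockaddr_in sa = { .sin_family = AF_INET,
                                  .sin_port   = htons(4420) };
        inet_pton(AF_INET, "10.0.0.2", &sa.sin_addr);

        if (connect(fd, (struct sockaddr *)&sa, sizeof(sa)) != 0)
            fprintf(stderr, "connect() failed, errno = %d (%s)\n",
                    errno, strerror(errno));
        close(fd);
        return 0;
    }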
00:23:08.632 [2024-07-15 16:04:37.523493 - 16:04:37.523517] [log condensed: nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x185a1d0, 0x19f2050, 0x19e98d0, 0x181dc70 (9): Bad file descriptor]
00:23:08.632 [2024-07-15 16:04:37.523524 - 16:04:37.523559] [log condensed: for cnode9 and cnode10 — nvme_ctrlr.c:4164: *ERROR*: Ctrlr is in error state; nvme_ctrlr.c:1818: *ERROR*: controller reinitialization failed; nvme_ctrlr.c:1106: *ERROR*: in failed state.]
00:23:08.632 [2024-07-15 16:04:37.523583] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:08.632 [2024-07-15 16:04:37.523589] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:08.632 [2024-07-15 16:04:37.523595 - 16:04:37.523647] [log condensed: the same Ctrlr is in error state / controller reinitialization failed / in failed state. triple for cnode4, cnode3 and cnode2]
00:23:08.632 [2024-07-15 16:04:37.523655 - 16:04:37.523666] [log condensed: the same Ctrlr is in error state / controller reinitialization failed / in failed state. triple for cnode1]
00:23:08.632 [2024-07-15 16:04:37.523691 - 16:04:37.523708] [log condensed: 4x bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.]
00:23:09.200 16:04:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # nvmfpid=
00:23:09.200 16:04:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@139 -- # sleep 1
00:23:10.135 16:04:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@142 -- # kill -9 3830693
00:23:10.135 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 142: kill: (3830693) - No such process
00:23:10.135 16:04:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@142 -- # true
00:23:10.135 16:04:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@144 -- # stoptarget
00:23:10.135 16:04:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state
00:23:10.135 16:04:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
00:23:10.135 16:04:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:23:10.135 16:04:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@45 -- # nvmftestfini
00:23:10.135 16:04:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@488 -- # nvmfcleanup
00:23:10.135 16:04:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # sync
00:23:10.135 16:04:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:23:10.135 16:04:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@120 -- # set +e
00:23:10.135 16:04:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # for i in {1..20}
00:23:10.135 16:04:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:23:10.135 rmmod nvme_tcp
00:23:10.135 rmmod nvme_fabrics
00:23:10.135 rmmod nvme_keyring
00:23:10.135 16:04:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:23:10.135 16:04:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set -e
00:23:10.135 16:04:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # return 0
00:23:10.135 16:04:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@489 -- # '[' -n '' ']'
00:23:10.135 16:04:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:23:10.135 16:04:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:23:10.135 16:04:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:23:10.135 16:04:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:23:10.135 16:04:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@278 -- # remove_spdk_ns
00:23:10.135 16:04:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:23:10.135 16:04:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:23:10.135 16:04:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:23:12.665 16:04:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:23:12.665
00:23:12.665 real 0m7.973s
00:23:12.665 user 0m20.261s
00:23:12.665 sys 0m1.231s
00:23:12.665 16:04:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1124 -- # xtrace_disable
00:23:12.665 16:04:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x
00:23:12.665 ************************************
00:23:12.665 END TEST nvmf_shutdown_tc3
00:23:12.665 ************************************
00:23:12.665 16:04:41 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1142 -- # return 0
00:23:12.665 16:04:41 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@151 -- # trap - SIGINT SIGTERM EXIT
00:23:12.666
00:23:12.666 real 0m31.180s
00:23:12.666 user 1m19.891s
00:23:12.666 sys 0m8.004s
00:23:12.666 16:04:41 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1124 -- # xtrace_disable
00:23:12.666 16:04:41 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x
00:23:12.666 ************************************
00:23:12.666 END TEST nvmf_shutdown
00:23:12.666 ************************************
00:23:12.666 16:04:41 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0
00:23:12.666 16:04:41 nvmf_tcp -- nvmf/nvmf.sh@86 -- # timing_exit target
00:23:12.666 16:04:41 nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable
00:23:12.666 16:04:41 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:23:12.666 16:04:41 nvmf_tcp -- nvmf/nvmf.sh@88 -- # timing_enter host
00:23:12.666 16:04:41 nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable
00:23:12.666 16:04:41 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:23:12.666 16:04:41 nvmf_tcp -- nvmf/nvmf.sh@90 -- # [[ 0 -eq 0 ]]
00:23:12.666 16:04:41 nvmf_tcp -- nvmf/nvmf.sh@91 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp
00:23:12.666 16:04:41 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']'
00:23:12.666 16:04:41 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable
00:23:12.666 16:04:41 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:23:12.666 ************************************
00:23:12.666 START TEST nvmf_multicontroller
00:23:12.666 ************************************
00:23:12.666 16:04:41 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp
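[editor's note, not part of the CI output: before the multicontroller output starts, one cross-check on the shutdown run's Latency(us) table above — the Total row should be the column-wise sum of the ten per-device rows, and it is; the computed 2000.68 IOPS versus the reported 2000.67 is rounding in the table itself. A small sketch with the values copied from the log:]

    /* Sum the per-device IOPS and Fail/s columns of the table above
     * and compare against the reported Total row (2000.67 / 796.46). */
    #include <stdio.h>

    int main(void)
    {
        const double iops[]  = {240.68, 242.09, 242.80, 242.52, 158.93,
                                158.55, 158.17, 242.84, 157.41, 156.69};
        const double fails[] = { 80.23,  80.70,  80.93,  80.84,  79.47,
                                 79.28,  79.08,  78.89,  78.70,  78.34};
        double iops_sum = 0.0, fail_sum = 0.0;

        for (int i = 0; i < 10; i++) {
            iops_sum += iops[i];
            fail_sum += fails[i];
        }
        printf("IOPS total %.2f, Fail/s total %.2f\n", iops_sum, fail_sum);
        return 0;
    }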
00:23:12.666 * Looking for test storage...
00:23:12.666 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host
00:23:12.666 16:04:41 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:23:12.666 16:04:41 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s
00:23:12.666 16:04:41 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:23:12.666 16:04:41 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:23:12.666 16:04:41 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:23:12.666 16:04:41 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:23:12.666 16:04:41 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:23:12.666 16:04:41 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:23:12.666 16:04:41 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:23:12.666 16:04:41 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:23:12.666 16:04:41 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:23:12.666 16:04:41 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:23:12.666 16:04:41 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562
00:23:12.666 16:04:41 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562
00:23:12.666 16:04:41 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:23:12.666 16:04:41 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:23:12.666 16:04:41 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:23:12.666 16:04:41 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:23:12.666 16:04:41 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:23:12.666 16:04:41 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]]
00:23:12.666 16:04:41 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:23:12.666 16:04:41 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:23:12.666 16:04:41 [log condensed: paths/export.sh@2-@4 build PATH by repeatedly prepending /opt/golangci/1.54.2/bin, /opt/protoc/21.7/bin and /opt/go/1.21.1/bin to the existing PATH; paths/export.sh@5 exports PATH; paths/export.sh@6 echoes the resulting value]
00:23:12.666 16:04:41 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@47 -- # : 0
00:23:12.666 16:04:41 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID
00:23:12.666 16:04:41 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@49 -- # build_nvmf_app_args
00:23:12.666 16:04:41 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:23:12.666 16:04:41 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:23:12.666 16:04:41 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:23:12.666 16:04:41 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' -n '' ']'
00:23:12.666 16:04:41 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']'
00:23:12.666 16:04:41 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@51 -- # have_pci_nics=0
00:23:12.666 16:04:41 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64
00:23:12.666 16:04:41 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512
00:23:12.666 16:04:41 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000
00:23:12.666 16:04:41 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001
00:23:12.666 16:04:41 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock
00:23:12.666 16:04:41 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']'
00:23:12.666 16:04:41 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit
00:23:12.666 16:04:41 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@441 -- # '[' -z tcp ']'
00:23:12.666 16:04:41 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:23:12.666 16:04:41 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@448 -- # prepare_net_devs
00:23:12.666 16:04:41 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@410 -- # local -g is_hw=no
00:23:12.666 16:04:41 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@412 -- # remove_spdk_ns
00:23:12.666 16:04:41 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:23:12.666 16:04:41 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:23:12.666 16:04:41 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:23:12.666 16:04:41 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # [[ phy != virt ]]
00:23:12.666 16:04:41 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs
00:23:12.666 16:04:41 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@285 -- # xtrace_disable
00:23:12.666 16:04:41 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:23:17.954 16:04:46 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:23:17.954 16:04:46 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@291 -- # pci_devs=()
00:23:17.954 16:04:46 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@291 -- # local -a pci_devs
00:23:17.954 16:04:46 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@292 -- # pci_net_devs=()
00:23:17.954 16:04:46 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@292 -- # local -a pci_net_devs
00:23:17.954 16:04:46 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@293 -- # pci_drivers=()
00:23:17.954 16:04:46 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@293 -- # local -A pci_drivers
00:23:17.954 16:04:46 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@295 -- # net_devs=()
00:23:17.954 16:04:46 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@295 -- # local -ga net_devs
00:23:17.954 16:04:46 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@296 -- # e810=()
00:23:17.954 16:04:46 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@296 -- # local -ga e810
00:23:17.954 16:04:46 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@297 -- # x722=()
00:23:17.954 16:04:46 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@297 -- # local -ga x722
00:23:17.954 16:04:46 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@298 -- # mlx=()
00:23:17.954 16:04:46 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@298 -- # local -ga mlx
00:23:17.954 16:04:46 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:23:17.954 16:04:46 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:23:17.954 16:04:46 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:23:17.954 16:04:46 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:23:17.954 16:04:46 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:23:17.954 16:04:46 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:23:17.954 16:04:46 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:23:17.954 16:04:46
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:17.954 16:04:46 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:17.954 16:04:46 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:17.954 16:04:46 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:17.954 16:04:46 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:17.954 16:04:46 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:17.954 16:04:46 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:17.954 16:04:46 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:17.954 16:04:46 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:17.954 16:04:46 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:17.954 16:04:46 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:17.954 16:04:46 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:23:17.954 Found 0000:86:00.0 (0x8086 - 0x159b) 00:23:17.954 16:04:46 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:17.954 16:04:46 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:17.954 16:04:46 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:17.954 16:04:46 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:17.954 16:04:46 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:17.954 16:04:46 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:17.954 16:04:46 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:23:17.954 Found 0000:86:00.1 (0x8086 - 0x159b) 00:23:17.954 16:04:46 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:17.954 16:04:46 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:17.954 16:04:46 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:17.954 16:04:46 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:17.954 16:04:46 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:17.954 16:04:46 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:17.954 16:04:46 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:17.954 16:04:46 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:17.954 16:04:46 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:17.954 16:04:46 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:17.954 16:04:46 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:17.954 16:04:46 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:17.954 16:04:46 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:17.954 16:04:46 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@394 -- # 
(( 1 == 0 )) 00:23:17.954 16:04:46 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:17.954 16:04:46 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:23:17.954 Found net devices under 0000:86:00.0: cvl_0_0 00:23:17.954 16:04:46 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:17.954 16:04:46 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:17.954 16:04:46 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:17.954 16:04:46 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:17.954 16:04:46 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:17.954 16:04:46 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:17.954 16:04:46 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:17.954 16:04:46 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:17.954 16:04:46 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:23:17.954 Found net devices under 0000:86:00.1: cvl_0_1 00:23:17.954 16:04:46 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:17.954 16:04:46 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:17.954 16:04:46 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # is_hw=yes 00:23:17.954 16:04:46 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:17.954 16:04:46 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:17.954 16:04:46 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:17.954 16:04:46 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:17.954 16:04:46 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:17.954 16:04:46 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:17.954 16:04:46 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:17.955 16:04:46 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:17.955 16:04:46 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:17.955 16:04:46 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:17.955 16:04:46 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:17.955 16:04:46 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:17.955 16:04:46 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:17.955 16:04:46 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:17.955 16:04:46 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:17.955 16:04:46 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:17.955 16:04:46 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:17.955 16:04:46 
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:17.955 16:04:46 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:17.955 16:04:46 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:17.955 16:04:46 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:17.955 16:04:46 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:18.214 16:04:46 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:18.214 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:18.214 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.172 ms 00:23:18.214 00:23:18.214 --- 10.0.0.2 ping statistics --- 00:23:18.214 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:18.214 rtt min/avg/max/mdev = 0.172/0.172/0.172/0.000 ms 00:23:18.214 16:04:46 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:18.214 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:18.214 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.230 ms 00:23:18.214 00:23:18.214 --- 10.0.0.1 ping statistics --- 00:23:18.214 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:18.214 rtt min/avg/max/mdev = 0.230/0.230/0.230/0.000 ms 00:23:18.214 16:04:46 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:18.214 16:04:46 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@422 -- # return 0 00:23:18.214 16:04:46 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:18.214 16:04:46 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:18.214 16:04:46 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:18.214 16:04:46 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:18.214 16:04:46 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:18.214 16:04:46 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:18.214 16:04:46 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:18.214 16:04:46 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:23:18.214 16:04:46 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:18.214 16:04:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:18.214 16:04:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:18.214 16:04:46 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@481 -- # nvmfpid=3834948 00:23:18.214 16:04:46 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@482 -- # waitforlisten 3834948 00:23:18.214 16:04:46 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:23:18.214 16:04:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@829 -- # '[' -z 3834948 ']' 00:23:18.214 16:04:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:18.214 16:04:46 nvmf_tcp.nvmf_multicontroller -- 
common/autotest_common.sh@834 -- # local max_retries=100 00:23:18.214 16:04:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:18.214 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:18.214 16:04:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:18.214 16:04:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:18.214 [2024-07-15 16:04:46.994747] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 00:23:18.214 [2024-07-15 16:04:46.994788] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:18.214 EAL: No free 2048 kB hugepages reported on node 1 00:23:18.214 [2024-07-15 16:04:47.051699] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:23:18.214 [2024-07-15 16:04:47.127228] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:18.214 [2024-07-15 16:04:47.127284] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:18.214 [2024-07-15 16:04:47.127292] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:18.214 [2024-07-15 16:04:47.127297] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:18.214 [2024-07-15 16:04:47.127302] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
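The nvmf_tgt launch traced just above can be reproduced by hand. A hedged sketch using only the command line and namespace name visible in this log; the -t 60 timeout on rpc.py is an illustrative assumption, not taken from the harness:

  # Start the target inside the test namespace, as nvmfappstart does.
  sudo ip netns exec cvl_0_0_ns_spdk \
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
    -i 0 -e 0xFFFF -m 0xE &
  # Block until the app answers on the default /var/tmp/spdk.sock, the same
  # condition waitforlisten checks, by polling a cheap RPC.
  sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
    -t 60 rpc_get_methods > /dev/null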
00:23:18.214 [2024-07-15 16:04:47.127430] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:23:18.214 [2024-07-15 16:04:47.127458] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:23:18.214 [2024-07-15 16:04:47.127459] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:19.153 16:04:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:19.153 16:04:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@862 -- # return 0 00:23:19.153 16:04:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:19.153 16:04:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:19.153 16:04:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:19.153 16:04:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:19.153 16:04:47 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:19.153 16:04:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:19.153 16:04:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:19.153 [2024-07-15 16:04:47.844469] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:19.153 16:04:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:19.153 16:04:47 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:23:19.153 16:04:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:19.153 16:04:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:19.153 Malloc0 00:23:19.153 16:04:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:19.153 16:04:47 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:19.153 16:04:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:19.153 16:04:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:19.153 16:04:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:19.153 16:04:47 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:19.153 16:04:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:19.153 16:04:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:19.153 16:04:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:19.153 16:04:47 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:19.153 16:04:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:19.153 16:04:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:19.153 [2024-07-15 16:04:47.908839] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:19.153 16:04:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:19.153 
16:04:47 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:23:19.153 16:04:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:19.153 16:04:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:19.153 [2024-07-15 16:04:47.916760] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:23:19.153 16:04:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:19.153 16:04:47 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:23:19.153 16:04:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:19.153 16:04:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:19.153 Malloc1 00:23:19.153 16:04:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:19.153 16:04:47 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:23:19.153 16:04:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:19.153 16:04:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:19.153 16:04:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:19.153 16:04:47 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:23:19.153 16:04:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:19.153 16:04:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:19.153 16:04:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:19.153 16:04:47 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:23:19.153 16:04:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:19.153 16:04:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:19.153 16:04:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:19.153 16:04:47 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:23:19.153 16:04:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:19.153 16:04:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:19.153 16:04:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:19.153 16:04:47 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=3835179 00:23:19.153 16:04:47 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:23:19.153 16:04:47 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:19.153 16:04:47 nvmf_tcp.nvmf_multicontroller 
-- host/multicontroller.sh@47 -- # waitforlisten 3835179 /var/tmp/bdevperf.sock 00:23:19.153 16:04:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@829 -- # '[' -z 3835179 ']' 00:23:19.153 16:04:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:19.153 16:04:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:19.153 16:04:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:19.153 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:19.153 16:04:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:19.153 16:04:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:20.088 16:04:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:20.088 16:04:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@862 -- # return 0 00:23:20.088 16:04:48 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:23:20.088 16:04:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:20.088 16:04:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:20.347 NVMe0n1 00:23:20.348 16:04:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:20.348 16:04:49 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:20.348 16:04:49 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:23:20.348 16:04:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:20.348 16:04:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:20.348 16:04:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:20.348 1 00:23:20.348 16:04:49 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:23:20.348 16:04:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:23:20.348 16:04:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:23:20.348 16:04:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:23:20.348 16:04:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:20.348 16:04:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:23:20.348 16:04:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:20.348 16:04:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 
10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:23:20.348 16:04:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:20.348 16:04:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:20.348 request: 00:23:20.348 { 00:23:20.348 "name": "NVMe0", 00:23:20.348 "trtype": "tcp", 00:23:20.348 "traddr": "10.0.0.2", 00:23:20.348 "adrfam": "ipv4", 00:23:20.348 "trsvcid": "4420", 00:23:20.348 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:20.348 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:23:20.348 "hostaddr": "10.0.0.2", 00:23:20.348 "hostsvcid": "60000", 00:23:20.348 "prchk_reftag": false, 00:23:20.348 "prchk_guard": false, 00:23:20.348 "hdgst": false, 00:23:20.348 "ddgst": false, 00:23:20.348 "method": "bdev_nvme_attach_controller", 00:23:20.348 "req_id": 1 00:23:20.348 } 00:23:20.348 Got JSON-RPC error response 00:23:20.348 response: 00:23:20.348 { 00:23:20.348 "code": -114, 00:23:20.348 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:23:20.348 } 00:23:20.348 16:04:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:23:20.348 16:04:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:23:20.348 16:04:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:20.348 16:04:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:20.348 16:04:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:20.348 16:04:49 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:23:20.348 16:04:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:23:20.348 16:04:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:23:20.348 16:04:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:23:20.348 16:04:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:20.348 16:04:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:23:20.348 16:04:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:20.348 16:04:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:23:20.348 16:04:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:20.348 16:04:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:20.348 request: 00:23:20.348 { 00:23:20.348 "name": "NVMe0", 00:23:20.348 "trtype": "tcp", 00:23:20.348 "traddr": "10.0.0.2", 00:23:20.348 "adrfam": "ipv4", 00:23:20.348 "trsvcid": "4420", 00:23:20.348 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:23:20.348 "hostaddr": "10.0.0.2", 00:23:20.348 "hostsvcid": "60000", 00:23:20.348 "prchk_reftag": false, 00:23:20.348 "prchk_guard": false, 00:23:20.348 
"hdgst": false, 00:23:20.348 "ddgst": false, 00:23:20.348 "method": "bdev_nvme_attach_controller", 00:23:20.348 "req_id": 1 00:23:20.348 } 00:23:20.348 Got JSON-RPC error response 00:23:20.348 response: 00:23:20.348 { 00:23:20.348 "code": -114, 00:23:20.348 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:23:20.348 } 00:23:20.348 16:04:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:23:20.348 16:04:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:23:20.348 16:04:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:20.348 16:04:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:20.348 16:04:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:20.348 16:04:49 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:23:20.348 16:04:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:23:20.348 16:04:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:23:20.348 16:04:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:23:20.348 16:04:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:20.348 16:04:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:23:20.348 16:04:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:20.348 16:04:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:23:20.348 16:04:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:20.348 16:04:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:20.348 request: 00:23:20.348 { 00:23:20.348 "name": "NVMe0", 00:23:20.348 "trtype": "tcp", 00:23:20.348 "traddr": "10.0.0.2", 00:23:20.348 "adrfam": "ipv4", 00:23:20.348 "trsvcid": "4420", 00:23:20.348 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:20.348 "hostaddr": "10.0.0.2", 00:23:20.348 "hostsvcid": "60000", 00:23:20.348 "prchk_reftag": false, 00:23:20.348 "prchk_guard": false, 00:23:20.348 "hdgst": false, 00:23:20.348 "ddgst": false, 00:23:20.348 "multipath": "disable", 00:23:20.348 "method": "bdev_nvme_attach_controller", 00:23:20.348 "req_id": 1 00:23:20.348 } 00:23:20.348 Got JSON-RPC error response 00:23:20.348 response: 00:23:20.348 { 00:23:20.348 "code": -114, 00:23:20.348 "message": "A controller named NVMe0 already exists and multipath is disabled\n" 00:23:20.348 } 00:23:20.348 16:04:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:23:20.348 16:04:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:23:20.348 16:04:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:20.348 16:04:49 
nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:20.348 16:04:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:20.348 16:04:49 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:23:20.348 16:04:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:23:20.348 16:04:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:23:20.348 16:04:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:23:20.348 16:04:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:20.348 16:04:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:23:20.348 16:04:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:20.348 16:04:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:23:20.348 16:04:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:20.348 16:04:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:20.348 request: 00:23:20.348 { 00:23:20.348 "name": "NVMe0", 00:23:20.348 "trtype": "tcp", 00:23:20.348 "traddr": "10.0.0.2", 00:23:20.348 "adrfam": "ipv4", 00:23:20.348 "trsvcid": "4420", 00:23:20.348 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:20.348 "hostaddr": "10.0.0.2", 00:23:20.348 "hostsvcid": "60000", 00:23:20.348 "prchk_reftag": false, 00:23:20.348 "prchk_guard": false, 00:23:20.348 "hdgst": false, 00:23:20.348 "ddgst": false, 00:23:20.348 "multipath": "failover", 00:23:20.348 "method": "bdev_nvme_attach_controller", 00:23:20.348 "req_id": 1 00:23:20.348 } 00:23:20.348 Got JSON-RPC error response 00:23:20.348 response: 00:23:20.348 { 00:23:20.348 "code": -114, 00:23:20.348 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:23:20.348 } 00:23:20.348 16:04:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:23:20.348 16:04:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:23:20.348 16:04:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:20.348 16:04:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:20.348 16:04:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:20.348 16:04:49 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:20.348 16:04:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:20.349 16:04:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:20.349 00:23:20.349 16:04:49 nvmf_tcp.nvmf_multicontroller -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:20.349 16:04:49 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:20.349 16:04:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:20.349 16:04:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:20.349 16:04:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:20.349 16:04:49 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:23:20.349 16:04:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:20.349 16:04:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:20.607 00:23:20.607 16:04:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:20.607 16:04:49 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:20.607 16:04:49 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:23:20.607 16:04:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:20.608 16:04:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:20.608 16:04:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:20.608 16:04:49 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:23:20.608 16:04:49 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:21.542 0 00:23:21.543 16:04:50 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:23:21.543 16:04:50 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:21.543 16:04:50 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:21.543 16:04:50 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:21.543 16:04:50 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@100 -- # killprocess 3835179 00:23:21.543 16:04:50 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@948 -- # '[' -z 3835179 ']' 00:23:21.543 16:04:50 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@952 -- # kill -0 3835179 00:23:21.543 16:04:50 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # uname 00:23:21.543 16:04:50 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:21.543 16:04:50 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3835179 00:23:21.802 16:04:50 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:23:21.802 16:04:50 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:23:21.802 16:04:50 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3835179' 00:23:21.802 killing process with pid 3835179 00:23:21.802 16:04:50 
nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@967 -- # kill 3835179 00:23:21.802 16:04:50 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@972 -- # wait 3835179 00:23:21.802 16:04:50 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@102 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:21.802 16:04:50 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:21.802 16:04:50 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:21.802 16:04:50 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:21.802 16:04:50 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@103 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:23:21.802 16:04:50 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:21.802 16:04:50 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:21.802 16:04:50 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:21.802 16:04:50 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@105 -- # trap - SIGINT SIGTERM EXIT 00:23:21.802 16:04:50 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@107 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:23:21.802 16:04:50 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1612 -- # read -r file 00:23:21.802 16:04:50 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1611 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:23:21.802 16:04:50 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1611 -- # sort -u 00:23:21.802 16:04:50 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1613 -- # cat 00:23:21.802 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:23:21.802 [2024-07-15 16:04:48.017397] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 00:23:21.802 [2024-07-15 16:04:48.017442] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3835179 ] 00:23:21.802 EAL: No free 2048 kB hugepages reported on node 1 00:23:21.802 [2024-07-15 16:04:48.071975] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:21.802 [2024-07-15 16:04:48.152946] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:21.802 [2024-07-15 16:04:49.304033] bdev.c:4613:bdev_name_add: *ERROR*: Bdev name 57f64843-f572-4990-bfad-b3504db5c632 already exists 00:23:21.802 [2024-07-15 16:04:49.304059] bdev.c:7722:bdev_register: *ERROR*: Unable to add uuid:57f64843-f572-4990-bfad-b3504db5c632 alias for bdev NVMe1n1 00:23:21.802 [2024-07-15 16:04:49.304067] bdev_nvme.c:4317:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:23:21.802 Running I/O for 1 seconds... 
00:23:21.802 00:23:21.802 Latency(us) 00:23:21.802 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:21.802 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:23:21.802 NVMe0n1 : 1.01 23545.87 91.98 0.00 0.00 5418.83 1538.67 6696.07 00:23:21.802 =================================================================================================================== 00:23:21.802 Total : 23545.87 91.98 0.00 0.00 5418.83 1538.67 6696.07 00:23:21.802 Received shutdown signal, test time was about 1.000000 seconds 00:23:21.802 00:23:21.802 Latency(us) 00:23:21.802 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:21.802 =================================================================================================================== 00:23:21.802 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:21.802 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:23:21.802 16:04:50 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1618 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:23:21.802 16:04:50 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1612 -- # read -r file 00:23:21.802 16:04:50 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@108 -- # nvmftestfini 00:23:21.802 16:04:50 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:21.802 16:04:50 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@117 -- # sync 00:23:21.802 16:04:50 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:21.802 16:04:50 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@120 -- # set +e 00:23:21.802 16:04:50 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:21.802 16:04:50 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:21.802 rmmod nvme_tcp 00:23:22.061 rmmod nvme_fabrics 00:23:22.061 rmmod nvme_keyring 00:23:22.061 16:04:50 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:22.061 16:04:50 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@124 -- # set -e 00:23:22.061 16:04:50 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@125 -- # return 0 00:23:22.061 16:04:50 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@489 -- # '[' -n 3834948 ']' 00:23:22.061 16:04:50 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@490 -- # killprocess 3834948 00:23:22.061 16:04:50 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@948 -- # '[' -z 3834948 ']' 00:23:22.061 16:04:50 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@952 -- # kill -0 3834948 00:23:22.061 16:04:50 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # uname 00:23:22.061 16:04:50 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:22.061 16:04:50 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3834948 00:23:22.061 16:04:50 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:23:22.062 16:04:50 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:23:22.062 16:04:50 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3834948' 00:23:22.062 killing process with pid 3834948 00:23:22.062 16:04:50 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@967 -- # kill 3834948 00:23:22.062 16:04:50 
nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@972 -- # wait 3834948 00:23:22.321 16:04:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:22.321 16:04:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:22.321 16:04:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:22.321 16:04:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:22.321 16:04:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:22.321 16:04:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:22.321 16:04:51 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:22.321 16:04:51 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:24.229 16:04:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:24.229 00:23:24.229 real 0m11.972s 00:23:24.229 user 0m16.362s 00:23:24.229 sys 0m4.998s 00:23:24.229 16:04:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1124 -- # xtrace_disable 00:23:24.229 16:04:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:24.229 ************************************ 00:23:24.229 END TEST nvmf_multicontroller 00:23:24.229 ************************************ 00:23:24.487 16:04:53 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:23:24.487 16:04:53 nvmf_tcp -- nvmf/nvmf.sh@92 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:23:24.487 16:04:53 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:23:24.487 16:04:53 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:24.487 16:04:53 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:24.487 ************************************ 00:23:24.487 START TEST nvmf_aer 00:23:24.487 ************************************ 00:23:24.487 16:04:53 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:23:24.487 * Looking for test storage... 
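The nvmf_aer suite starting here repeats the same nvmftestinit plumbing the multicontroller trace walked through above. Condensed into a hedged sketch, with the interface names cvl_0_0/cvl_0_1, the namespace name, and the 10.0.0.0/24 addressing all taken from this log:

  # Move the target-side NIC into its own namespace and address both ends.
  sudo ip netns add cvl_0_0_ns_spdk
  sudo ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  sudo ip addr add 10.0.0.1/24 dev cvl_0_1
  sudo ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  sudo ip link set cvl_0_1 up
  sudo ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  sudo ip netns exec cvl_0_0_ns_spdk ip link set lo up
  # Open the NVMe/TCP port on the initiator side, then sanity-check both
  # directions the way the harness does with ping.
  sudo iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2
  sudo ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1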
00:23:24.487 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:24.487 16:04:53 nvmf_tcp.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:24.487 16:04:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:23:24.487 16:04:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:24.487 16:04:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:24.487 16:04:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:24.487 16:04:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:24.487 16:04:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:24.487 16:04:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:24.487 16:04:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:24.487 16:04:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:24.487 16:04:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:24.487 16:04:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:24.487 16:04:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:23:24.487 16:04:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:23:24.487 16:04:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:24.488 16:04:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:24.488 16:04:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:24.488 16:04:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:24.488 16:04:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:24.488 16:04:53 nvmf_tcp.nvmf_aer -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:24.488 16:04:53 nvmf_tcp.nvmf_aer -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:24.488 16:04:53 nvmf_tcp.nvmf_aer -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:24.488 16:04:53 nvmf_tcp.nvmf_aer -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:24.488 16:04:53 nvmf_tcp.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:24.488 16:04:53 nvmf_tcp.nvmf_aer -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:24.488 16:04:53 nvmf_tcp.nvmf_aer -- paths/export.sh@5 -- # export PATH 00:23:24.488 16:04:53 nvmf_tcp.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:24.488 16:04:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@47 -- # : 0 00:23:24.488 16:04:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:24.488 16:04:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:24.488 16:04:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:24.488 16:04:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:24.488 16:04:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:24.488 16:04:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:24.488 16:04:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:24.488 16:04:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:24.488 16:04:53 nvmf_tcp.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:23:24.488 16:04:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:24.488 16:04:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:24.488 16:04:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:24.488 16:04:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:24.488 16:04:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:24.488 16:04:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:24.488 16:04:53 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:24.488 16:04:53 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:24.488 16:04:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:24.488 16:04:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:24.488 16:04:53 nvmf_tcp.nvmf_aer -- nvmf/common.sh@285 -- # xtrace_disable 00:23:24.488 16:04:53 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:29.761 16:04:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:29.761 16:04:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@291 -- # pci_devs=() 00:23:29.761 16:04:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:29.761 16:04:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@292 -- # 
pci_net_devs=() 00:23:29.761 16:04:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:29.761 16:04:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:29.761 16:04:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:29.761 16:04:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@295 -- # net_devs=() 00:23:29.761 16:04:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:29.761 16:04:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@296 -- # e810=() 00:23:29.761 16:04:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@296 -- # local -ga e810 00:23:29.761 16:04:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@297 -- # x722=() 00:23:29.761 16:04:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@297 -- # local -ga x722 00:23:29.761 16:04:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@298 -- # mlx=() 00:23:29.761 16:04:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@298 -- # local -ga mlx 00:23:29.761 16:04:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:29.761 16:04:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:29.761 16:04:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:29.761 16:04:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:29.761 16:04:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:29.761 16:04:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:29.761 16:04:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:29.761 16:04:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:29.761 16:04:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:29.761 16:04:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:29.761 16:04:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:29.761 16:04:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:29.761 16:04:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:29.761 16:04:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:29.761 16:04:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:29.761 16:04:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:29.761 16:04:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:29.761 16:04:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:29.761 16:04:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:23:29.761 Found 0000:86:00.0 (0x8086 - 0x159b) 00:23:29.761 16:04:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:29.761 16:04:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:29.761 16:04:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:29.761 16:04:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:29.761 16:04:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:29.761 16:04:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:29.761 16:04:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 
0x159b)' 00:23:29.761 Found 0000:86:00.1 (0x8086 - 0x159b) 00:23:29.761 16:04:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:29.761 16:04:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:29.761 16:04:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:29.761 16:04:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:29.761 16:04:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:29.761 16:04:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:29.761 16:04:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:29.761 16:04:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:29.761 16:04:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:29.761 16:04:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:29.761 16:04:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:29.761 16:04:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:29.761 16:04:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:29.761 16:04:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:29.761 16:04:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:29.761 16:04:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:23:29.761 Found net devices under 0000:86:00.0: cvl_0_0 00:23:29.761 16:04:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:29.761 16:04:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:29.761 16:04:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:29.761 16:04:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:29.761 16:04:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:29.761 16:04:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:29.761 16:04:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:29.761 16:04:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:29.761 16:04:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:23:29.761 Found net devices under 0000:86:00.1: cvl_0_1 00:23:29.761 16:04:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:29.761 16:04:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:29.761 16:04:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # is_hw=yes 00:23:29.761 16:04:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:29.761 16:04:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:29.761 16:04:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:29.761 16:04:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:29.761 16:04:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:29.761 16:04:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:29.761 16:04:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:29.761 16:04:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:29.761 
16:04:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:29.761 16:04:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:29.761 16:04:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:29.761 16:04:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:29.761 16:04:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:29.761 16:04:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:29.761 16:04:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:29.761 16:04:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:29.761 16:04:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:29.761 16:04:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:29.761 16:04:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:29.761 16:04:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:29.761 16:04:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:29.761 16:04:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:29.761 16:04:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:29.761 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:29.761 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.180 ms 00:23:29.761 00:23:29.761 --- 10.0.0.2 ping statistics --- 00:23:29.761 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:29.761 rtt min/avg/max/mdev = 0.180/0.180/0.180/0.000 ms 00:23:29.761 16:04:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:29.761 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:29.761 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.163 ms 00:23:29.761 00:23:29.761 --- 10.0.0.1 ping statistics --- 00:23:29.761 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:29.761 rtt min/avg/max/mdev = 0.163/0.163/0.163/0.000 ms 00:23:29.761 16:04:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:29.761 16:04:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@422 -- # return 0 00:23:29.761 16:04:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:29.761 16:04:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:29.761 16:04:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:29.761 16:04:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:29.761 16:04:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:29.761 16:04:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:29.761 16:04:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:29.761 16:04:58 nvmf_tcp.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:23:29.761 16:04:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:29.761 16:04:58 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:29.761 16:04:58 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:29.761 16:04:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@481 -- # nvmfpid=3838978 00:23:29.761 16:04:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:23:29.761 16:04:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@482 -- # waitforlisten 3838978 00:23:29.761 16:04:58 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@829 -- # '[' -z 3838978 ']' 00:23:29.761 16:04:58 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:29.761 16:04:58 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:29.761 16:04:58 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:29.761 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:29.762 16:04:58 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:29.762 16:04:58 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:29.762 [2024-07-15 16:04:58.694062] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 00:23:29.762 [2024-07-15 16:04:58.694103] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:30.020 EAL: No free 2048 kB hugepages reported on node 1 00:23:30.020 [2024-07-15 16:04:58.752120] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:30.020 [2024-07-15 16:04:58.827449] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:30.020 [2024-07-15 16:04:58.827487] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
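For reference, the nvmf_tcp_init sequence traced above reduces to the standalone sketch below. It moves one port of the dual-port NIC into a network namespace so a single machine can act as both NVMe/TCP target (10.0.0.2, inside the namespace) and initiator (10.0.0.1, host side). Device names cvl_0_0/cvl_0_1 are the ones enumerated in this run, and the commands assume root:

# Clear any stale addresses, then put the target port in its own namespace.
ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                              # initiator side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target side
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# Admit NVMe/TCP (port 4420) on the initiator interface, then prove
# reachability in both directions before any NVMe traffic flows.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

The two one-packet pings are exactly the reachability gate whose output appears in the log; a lost packet here would fail the test before the target is even started.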
00:23:30.020 [2024-07-15 16:04:58.827495] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:30.020 [2024-07-15 16:04:58.827501] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:30.020 [2024-07-15 16:04:58.827507] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:30.020 [2024-07-15 16:04:58.827552] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:30.020 [2024-07-15 16:04:58.827657] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:23:30.020 [2024-07-15 16:04:58.827683] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:23:30.021 [2024-07-15 16:04:58.827684] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:30.588 16:04:59 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:30.588 16:04:59 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@862 -- # return 0 00:23:30.588 16:04:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:30.588 16:04:59 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:30.588 16:04:59 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:30.847 16:04:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:30.847 16:04:59 nvmf_tcp.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:30.847 16:04:59 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:30.847 16:04:59 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:30.847 [2024-07-15 16:04:59.533218] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:30.847 16:04:59 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:30.847 16:04:59 nvmf_tcp.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:23:30.847 16:04:59 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:30.847 16:04:59 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:30.847 Malloc0 00:23:30.847 16:04:59 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:30.847 16:04:59 nvmf_tcp.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:23:30.847 16:04:59 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:30.847 16:04:59 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:30.847 16:04:59 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:30.847 16:04:59 nvmf_tcp.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:30.847 16:04:59 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:30.847 16:04:59 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:30.847 16:04:59 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:30.847 16:04:59 nvmf_tcp.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:30.847 16:04:59 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:30.847 16:04:59 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:30.847 [2024-07-15 16:04:59.584944] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target 
Listening on 10.0.0.2 port 4420 *** 00:23:30.847 16:04:59 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:30.847 16:04:59 nvmf_tcp.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:23:30.847 16:04:59 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:30.847 16:04:59 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:30.847 [ 00:23:30.847 { 00:23:30.847 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:23:30.847 "subtype": "Discovery", 00:23:30.847 "listen_addresses": [], 00:23:30.847 "allow_any_host": true, 00:23:30.847 "hosts": [] 00:23:30.847 }, 00:23:30.847 { 00:23:30.847 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:30.847 "subtype": "NVMe", 00:23:30.847 "listen_addresses": [ 00:23:30.847 { 00:23:30.847 "trtype": "TCP", 00:23:30.847 "adrfam": "IPv4", 00:23:30.847 "traddr": "10.0.0.2", 00:23:30.847 "trsvcid": "4420" 00:23:30.847 } 00:23:30.847 ], 00:23:30.847 "allow_any_host": true, 00:23:30.847 "hosts": [], 00:23:30.847 "serial_number": "SPDK00000000000001", 00:23:30.847 "model_number": "SPDK bdev Controller", 00:23:30.847 "max_namespaces": 2, 00:23:30.847 "min_cntlid": 1, 00:23:30.847 "max_cntlid": 65519, 00:23:30.847 "namespaces": [ 00:23:30.847 { 00:23:30.847 "nsid": 1, 00:23:30.847 "bdev_name": "Malloc0", 00:23:30.847 "name": "Malloc0", 00:23:30.847 "nguid": "FF2595DA3F6F467EA9261969A4BC330B", 00:23:30.847 "uuid": "ff2595da-3f6f-467e-a926-1969a4bc330b" 00:23:30.847 } 00:23:30.847 ] 00:23:30.847 } 00:23:30.847 ] 00:23:30.847 16:04:59 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:30.847 16:04:59 nvmf_tcp.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:23:30.847 16:04:59 nvmf_tcp.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:23:30.847 16:04:59 nvmf_tcp.nvmf_aer -- host/aer.sh@33 -- # aerpid=3839227 00:23:30.847 16:04:59 nvmf_tcp.nvmf_aer -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:23:30.847 16:04:59 nvmf_tcp.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:23:30.847 16:04:59 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1265 -- # local i=0 00:23:30.847 16:04:59 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:23:30.847 16:04:59 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 0 -lt 200 ']' 00:23:30.847 16:04:59 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1268 -- # i=1 00:23:30.847 16:04:59 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:23:30.847 EAL: No free 2048 kB hugepages reported on node 1 00:23:30.847 16:04:59 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:23:30.847 16:04:59 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 1 -lt 200 ']' 00:23:30.847 16:04:59 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1268 -- # i=2 00:23:30.847 16:04:59 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:23:31.106 16:04:59 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:23:31.106 16:04:59 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1272 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:23:31.106 16:04:59 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1276 -- # return 0 00:23:31.106 16:04:59 nvmf_tcp.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:23:31.106 16:04:59 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:31.106 16:04:59 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:31.106 Malloc1 00:23:31.106 16:04:59 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:31.106 16:04:59 nvmf_tcp.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:23:31.106 16:04:59 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:31.106 16:04:59 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:31.106 16:04:59 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:31.106 16:04:59 nvmf_tcp.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:23:31.106 16:04:59 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:31.106 16:04:59 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:31.106 Asynchronous Event Request test 00:23:31.106 Attaching to 10.0.0.2 00:23:31.106 Attached to 10.0.0.2 00:23:31.106 Registering asynchronous event callbacks... 00:23:31.106 Starting namespace attribute notice tests for all controllers... 00:23:31.106 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:23:31.106 aer_cb - Changed Namespace 00:23:31.106 Cleaning up... 00:23:31.106 [ 00:23:31.106 { 00:23:31.106 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:23:31.106 "subtype": "Discovery", 00:23:31.106 "listen_addresses": [], 00:23:31.106 "allow_any_host": true, 00:23:31.106 "hosts": [] 00:23:31.106 }, 00:23:31.106 { 00:23:31.106 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:31.106 "subtype": "NVMe", 00:23:31.106 "listen_addresses": [ 00:23:31.106 { 00:23:31.106 "trtype": "TCP", 00:23:31.106 "adrfam": "IPv4", 00:23:31.106 "traddr": "10.0.0.2", 00:23:31.106 "trsvcid": "4420" 00:23:31.106 } 00:23:31.106 ], 00:23:31.106 "allow_any_host": true, 00:23:31.106 "hosts": [], 00:23:31.106 "serial_number": "SPDK00000000000001", 00:23:31.106 "model_number": "SPDK bdev Controller", 00:23:31.106 "max_namespaces": 2, 00:23:31.106 "min_cntlid": 1, 00:23:31.106 "max_cntlid": 65519, 00:23:31.106 "namespaces": [ 00:23:31.106 { 00:23:31.106 "nsid": 1, 00:23:31.106 "bdev_name": "Malloc0", 00:23:31.106 "name": "Malloc0", 00:23:31.106 "nguid": "FF2595DA3F6F467EA9261969A4BC330B", 00:23:31.106 "uuid": "ff2595da-3f6f-467e-a926-1969a4bc330b" 00:23:31.106 }, 00:23:31.106 { 00:23:31.106 "nsid": 2, 00:23:31.106 "bdev_name": "Malloc1", 00:23:31.106 "name": "Malloc1", 00:23:31.106 "nguid": "E74F8BDD5FAC4DBD869752DEC46D645B", 00:23:31.106 "uuid": "e74f8bdd-5fac-4dbd-8697-52dec46d645b" 00:23:31.106 } 00:23:31.106 ] 00:23:31.106 } 00:23:31.106 ] 00:23:31.106 16:04:59 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:31.107 16:04:59 nvmf_tcp.nvmf_aer -- host/aer.sh@43 -- # wait 3839227 00:23:31.107 16:04:59 nvmf_tcp.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:23:31.107 16:04:59 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:31.107 16:04:59 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:31.107 16:04:59 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:31.107 16:04:59 nvmf_tcp.nvmf_aer -- host/aer.sh@46 
-- # rpc_cmd bdev_malloc_delete Malloc1 00:23:31.107 16:04:59 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:31.107 16:04:59 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:31.107 16:04:59 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:31.107 16:04:59 nvmf_tcp.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:31.107 16:04:59 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:31.107 16:04:59 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:31.107 16:04:59 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:31.107 16:04:59 nvmf_tcp.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:23:31.107 16:04:59 nvmf_tcp.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:23:31.107 16:04:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:31.107 16:04:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@117 -- # sync 00:23:31.107 16:04:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:31.107 16:04:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@120 -- # set +e 00:23:31.107 16:04:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:31.107 16:04:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:31.107 rmmod nvme_tcp 00:23:31.107 rmmod nvme_fabrics 00:23:31.107 rmmod nvme_keyring 00:23:31.107 16:04:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:31.107 16:04:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@124 -- # set -e 00:23:31.107 16:04:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@125 -- # return 0 00:23:31.107 16:04:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@489 -- # '[' -n 3838978 ']' 00:23:31.107 16:04:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@490 -- # killprocess 3838978 00:23:31.107 16:04:59 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@948 -- # '[' -z 3838978 ']' 00:23:31.107 16:04:59 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@952 -- # kill -0 3838978 00:23:31.107 16:04:59 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@953 -- # uname 00:23:31.107 16:04:59 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:31.107 16:05:00 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3838978 00:23:31.364 16:05:00 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:23:31.364 16:05:00 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:23:31.364 16:05:00 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3838978' 00:23:31.364 killing process with pid 3838978 00:23:31.364 16:05:00 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@967 -- # kill 3838978 00:23:31.364 16:05:00 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@972 -- # wait 3838978 00:23:31.364 16:05:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:31.364 16:05:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:31.364 16:05:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:31.364 16:05:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:31.364 16:05:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:31.364 16:05:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:31.364 16:05:00 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> 
/dev/null' 00:23:31.364 16:05:00 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:33.896 16:05:02 nvmf_tcp.nvmf_aer -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:33.896 00:23:33.896 real 0m9.089s 00:23:33.896 user 0m7.149s 00:23:33.896 sys 0m4.388s 00:23:33.896 16:05:02 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1124 -- # xtrace_disable 00:23:33.896 16:05:02 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:33.896 ************************************ 00:23:33.896 END TEST nvmf_aer 00:23:33.896 ************************************ 00:23:33.896 16:05:02 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:23:33.896 16:05:02 nvmf_tcp -- nvmf/nvmf.sh@93 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:23:33.896 16:05:02 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:23:33.896 16:05:02 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:33.896 16:05:02 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:33.896 ************************************ 00:23:33.896 START TEST nvmf_async_init 00:23:33.896 ************************************ 00:23:33.896 16:05:02 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:23:33.896 * Looking for test storage... 00:23:33.896 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:33.896 16:05:02 nvmf_tcp.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:33.896 16:05:02 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:23:33.896 16:05:02 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:33.896 16:05:02 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:33.896 16:05:02 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:33.896 16:05:02 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:33.896 16:05:02 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:33.896 16:05:02 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:33.896 16:05:02 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:33.896 16:05:02 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:33.896 16:05:02 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:33.896 16:05:02 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:33.896 16:05:02 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:23:33.896 16:05:02 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:23:33.896 16:05:02 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:33.896 16:05:02 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:33.896 16:05:02 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:33.896 16:05:02 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:33.896 16:05:02 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@45 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:33.896 16:05:02 nvmf_tcp.nvmf_async_init -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:33.896 16:05:02 nvmf_tcp.nvmf_async_init -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:33.896 16:05:02 nvmf_tcp.nvmf_async_init -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:33.896 16:05:02 nvmf_tcp.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:33.896 16:05:02 nvmf_tcp.nvmf_async_init -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:33.896 16:05:02 nvmf_tcp.nvmf_async_init -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:33.896 16:05:02 nvmf_tcp.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:23:33.896 16:05:02 nvmf_tcp.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:33.896 16:05:02 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@47 -- # : 0 00:23:33.896 16:05:02 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:33.896 16:05:02 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:33.896 16:05:02 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:33.896 16:05:02 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:33.896 16:05:02 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:33.896 16:05:02 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@33 -- # '[' -n 
'' ']' 00:23:33.896 16:05:02 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:33.896 16:05:02 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:33.896 16:05:02 nvmf_tcp.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:23:33.896 16:05:02 nvmf_tcp.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:23:33.896 16:05:02 nvmf_tcp.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:23:33.896 16:05:02 nvmf_tcp.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:23:33.896 16:05:02 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:23:33.896 16:05:02 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:23:33.896 16:05:02 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # nguid=3c051939eb0c4e58a8babc0ed80d9ba8 00:23:33.896 16:05:02 nvmf_tcp.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:23:33.896 16:05:02 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:33.896 16:05:02 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:33.896 16:05:02 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:33.896 16:05:02 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:33.896 16:05:02 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:33.896 16:05:02 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:33.896 16:05:02 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:33.896 16:05:02 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:33.896 16:05:02 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:33.896 16:05:02 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:33.896 16:05:02 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@285 -- # xtrace_disable 00:23:33.896 16:05:02 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:39.165 16:05:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:39.165 16:05:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@291 -- # pci_devs=() 00:23:39.165 16:05:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:39.165 16:05:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:39.165 16:05:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:39.165 16:05:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:39.165 16:05:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:39.165 16:05:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@295 -- # net_devs=() 00:23:39.165 16:05:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:39.165 16:05:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@296 -- # e810=() 00:23:39.165 16:05:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@296 -- # local -ga e810 00:23:39.165 16:05:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@297 -- # x722=() 00:23:39.165 16:05:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@297 -- # local -ga x722 00:23:39.165 16:05:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@298 -- # mlx=() 00:23:39.165 16:05:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@298 -- # local -ga mlx 00:23:39.165 16:05:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@301 
-- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:39.165 16:05:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:39.165 16:05:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:39.165 16:05:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:39.165 16:05:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:39.165 16:05:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:39.165 16:05:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:39.165 16:05:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:39.165 16:05:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:39.165 16:05:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:39.165 16:05:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:39.165 16:05:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:39.165 16:05:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:39.165 16:05:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:39.165 16:05:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:39.165 16:05:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:39.165 16:05:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:39.165 16:05:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:39.165 16:05:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:23:39.165 Found 0000:86:00.0 (0x8086 - 0x159b) 00:23:39.165 16:05:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:39.165 16:05:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:39.165 16:05:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:39.165 16:05:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:39.165 16:05:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:39.165 16:05:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:39.165 16:05:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:23:39.165 Found 0000:86:00.1 (0x8086 - 0x159b) 00:23:39.165 16:05:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:39.165 16:05:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:39.165 16:05:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:39.165 16:05:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:39.165 16:05:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:39.165 16:05:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:39.165 16:05:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:39.165 16:05:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@372 -- # 
[[ tcp == rdma ]] 00:23:39.165 16:05:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:39.165 16:05:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:39.165 16:05:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:39.165 16:05:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:39.165 16:05:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:39.165 16:05:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:39.165 16:05:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:39.165 16:05:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:23:39.165 Found net devices under 0000:86:00.0: cvl_0_0 00:23:39.165 16:05:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:39.165 16:05:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:39.165 16:05:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:39.165 16:05:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:39.165 16:05:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:39.165 16:05:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:39.165 16:05:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:39.165 16:05:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:39.165 16:05:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:23:39.165 Found net devices under 0000:86:00.1: cvl_0_1 00:23:39.165 16:05:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:39.165 16:05:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:39.165 16:05:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # is_hw=yes 00:23:39.165 16:05:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:39.165 16:05:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:39.165 16:05:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:39.165 16:05:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:39.165 16:05:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:39.165 16:05:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:39.165 16:05:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:39.165 16:05:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:39.165 16:05:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:39.165 16:05:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:39.165 16:05:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:39.165 16:05:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:39.165 16:05:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:39.165 
16:05:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:39.165 16:05:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:39.165 16:05:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:39.165 16:05:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:39.165 16:05:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:39.165 16:05:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:39.165 16:05:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:39.165 16:05:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:39.165 16:05:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:39.165 16:05:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:39.165 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:39.165 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.154 ms 00:23:39.165 00:23:39.165 --- 10.0.0.2 ping statistics --- 00:23:39.165 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:39.165 rtt min/avg/max/mdev = 0.154/0.154/0.154/0.000 ms 00:23:39.165 16:05:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:39.165 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:39.165 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.228 ms 00:23:39.165 00:23:39.165 --- 10.0.0.1 ping statistics --- 00:23:39.165 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:39.165 rtt min/avg/max/mdev = 0.228/0.228/0.228/0.000 ms 00:23:39.165 16:05:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:39.165 16:05:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@422 -- # return 0 00:23:39.165 16:05:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:39.165 16:05:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:39.165 16:05:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:39.165 16:05:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:39.165 16:05:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:39.165 16:05:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:39.165 16:05:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:39.165 16:05:07 nvmf_tcp.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:23:39.165 16:05:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:39.165 16:05:07 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:39.165 16:05:07 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:39.165 16:05:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@481 -- # nvmfpid=3842738 00:23:39.165 16:05:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@482 -- # waitforlisten 3842738 00:23:39.165 16:05:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 
0x1 00:23:39.165 16:05:07 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@829 -- # '[' -z 3842738 ']' 00:23:39.165 16:05:07 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:39.165 16:05:07 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:39.165 16:05:07 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:39.165 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:39.165 16:05:07 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:39.165 16:05:07 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:39.165 [2024-07-15 16:05:08.029043] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 00:23:39.166 [2024-07-15 16:05:08.029088] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:39.166 EAL: No free 2048 kB hugepages reported on node 1 00:23:39.166 [2024-07-15 16:05:08.085055] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:39.425 [2024-07-15 16:05:08.164784] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:39.425 [2024-07-15 16:05:08.164824] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:39.425 [2024-07-15 16:05:08.164831] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:39.425 [2024-07-15 16:05:08.164837] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:39.425 [2024-07-15 16:05:08.164842] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
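The nvmfappstart/waitforlisten pair recorded above amounts to launching nvmf_tgt inside the target namespace and blocking until its RPC socket answers. waitforlisten's internals are not part of this trace, so the polling loop below is a hypothetical stand-in, assuming the default /var/tmp/spdk.sock socket:

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
# Single core (-m 0x1), all trace groups enabled (-e 0xFFFF), run in the netns.
ip netns exec cvl_0_0_ns_spdk "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0x1 &
nvmfpid=$!
# Hypothetical waitforlisten equivalent: poll the RPC socket until it responds,
# bailing out early if the target process has already died.
until "$SPDK/scripts/rpc.py" -s /var/tmp/spdk.sock -t 1 rpc_get_methods >/dev/null 2>&1; do
    kill -0 "$nvmfpid" 2>/dev/null || { echo "nvmf_tgt exited during startup" >&2; exit 1; }
    sleep 0.5
done

The nvmf_aer run earlier in this log went through the same startup path with -m 0xF (four reactors); this test deliberately pins the target to a single core.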
00:23:39.425 [2024-07-15 16:05:08.164860] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:39.992 16:05:08 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:39.992 16:05:08 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@862 -- # return 0 00:23:39.992 16:05:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:39.992 16:05:08 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:39.992 16:05:08 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:39.992 16:05:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:39.992 16:05:08 nvmf_tcp.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:23:39.992 16:05:08 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:39.992 16:05:08 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:39.992 [2024-07-15 16:05:08.872586] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:39.992 16:05:08 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:39.992 16:05:08 nvmf_tcp.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:23:39.992 16:05:08 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:39.992 16:05:08 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:39.992 null0 00:23:39.992 16:05:08 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:39.992 16:05:08 nvmf_tcp.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:23:39.992 16:05:08 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:39.992 16:05:08 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:39.992 16:05:08 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:39.992 16:05:08 nvmf_tcp.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:23:39.992 16:05:08 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:39.992 16:05:08 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:39.992 16:05:08 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:39.992 16:05:08 nvmf_tcp.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 3c051939eb0c4e58a8babc0ed80d9ba8 00:23:39.992 16:05:08 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:39.992 16:05:08 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:39.992 16:05:08 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:39.992 16:05:08 nvmf_tcp.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:23:39.992 16:05:08 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:39.993 16:05:08 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:39.993 [2024-07-15 16:05:08.912800] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:39.993 16:05:08 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 
== 0 ]] 00:23:39.993 16:05:08 nvmf_tcp.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:23:39.993 16:05:08 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:39.993 16:05:08 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:40.251 nvme0n1 00:23:40.251 16:05:09 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:40.251 16:05:09 nvmf_tcp.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:23:40.251 16:05:09 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:40.251 16:05:09 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:40.251 [ 00:23:40.251 { 00:23:40.251 "name": "nvme0n1", 00:23:40.251 "aliases": [ 00:23:40.251 "3c051939-eb0c-4e58-a8ba-bc0ed80d9ba8" 00:23:40.251 ], 00:23:40.251 "product_name": "NVMe disk", 00:23:40.251 "block_size": 512, 00:23:40.251 "num_blocks": 2097152, 00:23:40.251 "uuid": "3c051939-eb0c-4e58-a8ba-bc0ed80d9ba8", 00:23:40.252 "assigned_rate_limits": { 00:23:40.252 "rw_ios_per_sec": 0, 00:23:40.252 "rw_mbytes_per_sec": 0, 00:23:40.252 "r_mbytes_per_sec": 0, 00:23:40.252 "w_mbytes_per_sec": 0 00:23:40.252 }, 00:23:40.252 "claimed": false, 00:23:40.252 "zoned": false, 00:23:40.252 "supported_io_types": { 00:23:40.252 "read": true, 00:23:40.252 "write": true, 00:23:40.252 "unmap": false, 00:23:40.252 "flush": true, 00:23:40.252 "reset": true, 00:23:40.252 "nvme_admin": true, 00:23:40.252 "nvme_io": true, 00:23:40.252 "nvme_io_md": false, 00:23:40.252 "write_zeroes": true, 00:23:40.252 "zcopy": false, 00:23:40.252 "get_zone_info": false, 00:23:40.252 "zone_management": false, 00:23:40.252 "zone_append": false, 00:23:40.252 "compare": true, 00:23:40.252 "compare_and_write": true, 00:23:40.252 "abort": true, 00:23:40.252 "seek_hole": false, 00:23:40.252 "seek_data": false, 00:23:40.252 "copy": true, 00:23:40.252 "nvme_iov_md": false 00:23:40.252 }, 00:23:40.252 "memory_domains": [ 00:23:40.252 { 00:23:40.252 "dma_device_id": "system", 00:23:40.252 "dma_device_type": 1 00:23:40.252 } 00:23:40.252 ], 00:23:40.252 "driver_specific": { 00:23:40.252 "nvme": [ 00:23:40.252 { 00:23:40.252 "trid": { 00:23:40.252 "trtype": "TCP", 00:23:40.252 "adrfam": "IPv4", 00:23:40.252 "traddr": "10.0.0.2", 00:23:40.252 "trsvcid": "4420", 00:23:40.252 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:23:40.252 }, 00:23:40.252 "ctrlr_data": { 00:23:40.252 "cntlid": 1, 00:23:40.252 "vendor_id": "0x8086", 00:23:40.252 "model_number": "SPDK bdev Controller", 00:23:40.252 "serial_number": "00000000000000000000", 00:23:40.252 "firmware_revision": "24.09", 00:23:40.252 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:40.252 "oacs": { 00:23:40.252 "security": 0, 00:23:40.252 "format": 0, 00:23:40.252 "firmware": 0, 00:23:40.252 "ns_manage": 0 00:23:40.252 }, 00:23:40.252 "multi_ctrlr": true, 00:23:40.252 "ana_reporting": false 00:23:40.252 }, 00:23:40.252 "vs": { 00:23:40.252 "nvme_version": "1.3" 00:23:40.252 }, 00:23:40.252 "ns_data": { 00:23:40.252 "id": 1, 00:23:40.252 "can_share": true 00:23:40.252 } 00:23:40.252 } 00:23:40.252 ], 00:23:40.252 "mp_policy": "active_passive" 00:23:40.252 } 00:23:40.252 } 00:23:40.252 ] 00:23:40.252 16:05:09 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:40.252 16:05:09 nvmf_tcp.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 
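Stripped of the xtrace framing, the sequence this test has driven up to the bdev_nvme_reset_controller call just above condenses to the short RPC script below (rpc stands in for the suite's rpc_cmd wrapper; the default /var/tmp/spdk.sock socket path is an assumption):

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
rpc="$SPDK/scripts/rpc.py -s /var/tmp/spdk.sock"   # assumed expansion of rpc_cmd
$rpc nvmf_create_transport -t tcp -o
$rpc bdev_null_create null0 1024 512               # 1024 MiB null bdev, 512 B blocks
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a
# Fixed NGUID, so the host-side bdev surfaces with a predictable UUID.
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 \
    -g 3c051939eb0c4e58a8babc0ed80d9ba8
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
# Host side: attach over TCP, then confirm nvme0n1 reports that UUID.
$rpc bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 \
    -n nqn.2016-06.io.spdk:cnode0
$rpc bdev_get_bdevs -b nvme0n1

The bdev_get_bdevs dump above confirms the round trip: num_blocks 2097152 at block_size 512 is the 1 GiB null bdev, and the reported uuid is the NGUID chosen at creation with dashes reinserted. After the reset that follows, the same dump is repeated and only cntlid moves from 1 to 2.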
00:23:40.252 16:05:09 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:40.252 16:05:09 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:40.252 [2024-07-15 16:05:09.161296] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:40.252 [2024-07-15 16:05:09.161352] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x89e250 (9): Bad file descriptor 00:23:40.512 [2024-07-15 16:05:09.293314] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:23:40.512 16:05:09 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:40.512 16:05:09 nvmf_tcp.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:23:40.512 16:05:09 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:40.512 16:05:09 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:40.512 [ 00:23:40.512 { 00:23:40.512 "name": "nvme0n1", 00:23:40.512 "aliases": [ 00:23:40.512 "3c051939-eb0c-4e58-a8ba-bc0ed80d9ba8" 00:23:40.512 ], 00:23:40.512 "product_name": "NVMe disk", 00:23:40.512 "block_size": 512, 00:23:40.512 "num_blocks": 2097152, 00:23:40.512 "uuid": "3c051939-eb0c-4e58-a8ba-bc0ed80d9ba8", 00:23:40.512 "assigned_rate_limits": { 00:23:40.512 "rw_ios_per_sec": 0, 00:23:40.512 "rw_mbytes_per_sec": 0, 00:23:40.512 "r_mbytes_per_sec": 0, 00:23:40.512 "w_mbytes_per_sec": 0 00:23:40.512 }, 00:23:40.512 "claimed": false, 00:23:40.512 "zoned": false, 00:23:40.512 "supported_io_types": { 00:23:40.512 "read": true, 00:23:40.512 "write": true, 00:23:40.512 "unmap": false, 00:23:40.512 "flush": true, 00:23:40.512 "reset": true, 00:23:40.512 "nvme_admin": true, 00:23:40.512 "nvme_io": true, 00:23:40.512 "nvme_io_md": false, 00:23:40.512 "write_zeroes": true, 00:23:40.512 "zcopy": false, 00:23:40.512 "get_zone_info": false, 00:23:40.512 "zone_management": false, 00:23:40.512 "zone_append": false, 00:23:40.512 "compare": true, 00:23:40.512 "compare_and_write": true, 00:23:40.512 "abort": true, 00:23:40.512 "seek_hole": false, 00:23:40.512 "seek_data": false, 00:23:40.512 "copy": true, 00:23:40.512 "nvme_iov_md": false 00:23:40.512 }, 00:23:40.512 "memory_domains": [ 00:23:40.512 { 00:23:40.512 "dma_device_id": "system", 00:23:40.512 "dma_device_type": 1 00:23:40.512 } 00:23:40.512 ], 00:23:40.512 "driver_specific": { 00:23:40.512 "nvme": [ 00:23:40.512 { 00:23:40.512 "trid": { 00:23:40.512 "trtype": "TCP", 00:23:40.512 "adrfam": "IPv4", 00:23:40.512 "traddr": "10.0.0.2", 00:23:40.512 "trsvcid": "4420", 00:23:40.512 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:23:40.512 }, 00:23:40.512 "ctrlr_data": { 00:23:40.512 "cntlid": 2, 00:23:40.512 "vendor_id": "0x8086", 00:23:40.512 "model_number": "SPDK bdev Controller", 00:23:40.512 "serial_number": "00000000000000000000", 00:23:40.512 "firmware_revision": "24.09", 00:23:40.512 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:40.512 "oacs": { 00:23:40.512 "security": 0, 00:23:40.512 "format": 0, 00:23:40.512 "firmware": 0, 00:23:40.512 "ns_manage": 0 00:23:40.512 }, 00:23:40.512 "multi_ctrlr": true, 00:23:40.512 "ana_reporting": false 00:23:40.512 }, 00:23:40.512 "vs": { 00:23:40.512 "nvme_version": "1.3" 00:23:40.512 }, 00:23:40.512 "ns_data": { 00:23:40.512 "id": 1, 00:23:40.512 "can_share": true 00:23:40.512 } 00:23:40.512 } 00:23:40.512 ], 00:23:40.512 "mp_policy": "active_passive" 00:23:40.512 } 00:23:40.512 } 
00:23:40.512 ] 00:23:40.512 16:05:09 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:40.512 16:05:09 nvmf_tcp.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:40.512 16:05:09 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:40.512 16:05:09 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:40.512 16:05:09 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:40.512 16:05:09 nvmf_tcp.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:23:40.512 16:05:09 nvmf_tcp.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.uCWvrpsGrL 00:23:40.512 16:05:09 nvmf_tcp.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:23:40.512 16:05:09 nvmf_tcp.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.uCWvrpsGrL 00:23:40.512 16:05:09 nvmf_tcp.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:23:40.512 16:05:09 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:40.512 16:05:09 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:40.512 16:05:09 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:40.512 16:05:09 nvmf_tcp.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:23:40.512 16:05:09 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:40.512 16:05:09 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:40.512 [2024-07-15 16:05:09.345863] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:40.512 [2024-07-15 16:05:09.346005] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:23:40.512 16:05:09 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:40.512 16:05:09 nvmf_tcp.nvmf_async_init -- host/async_init.sh@59 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.uCWvrpsGrL 00:23:40.512 16:05:09 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:40.512 16:05:09 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:40.512 [2024-07-15 16:05:09.353877] tcp.c:3693:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:23:40.512 16:05:09 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:40.512 16:05:09 nvmf_tcp.nvmf_async_init -- host/async_init.sh@65 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.uCWvrpsGrL 00:23:40.512 16:05:09 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:40.512 16:05:09 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:40.512 [2024-07-15 16:05:09.361912] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:40.512 [2024-07-15 16:05:09.361947] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 
00:23:40.512 nvme0n1 00:23:40.512 16:05:09 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:40.512 16:05:09 nvmf_tcp.nvmf_async_init -- host/async_init.sh@69 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:23:40.512 16:05:09 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:40.512 16:05:09 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:40.512 [ 00:23:40.512 { 00:23:40.512 "name": "nvme0n1", 00:23:40.512 "aliases": [ 00:23:40.512 "3c051939-eb0c-4e58-a8ba-bc0ed80d9ba8" 00:23:40.512 ], 00:23:40.512 "product_name": "NVMe disk", 00:23:40.512 "block_size": 512, 00:23:40.512 "num_blocks": 2097152, 00:23:40.512 "uuid": "3c051939-eb0c-4e58-a8ba-bc0ed80d9ba8", 00:23:40.512 "assigned_rate_limits": { 00:23:40.512 "rw_ios_per_sec": 0, 00:23:40.512 "rw_mbytes_per_sec": 0, 00:23:40.512 "r_mbytes_per_sec": 0, 00:23:40.512 "w_mbytes_per_sec": 0 00:23:40.512 }, 00:23:40.512 "claimed": false, 00:23:40.512 "zoned": false, 00:23:40.512 "supported_io_types": { 00:23:40.512 "read": true, 00:23:40.513 "write": true, 00:23:40.513 "unmap": false, 00:23:40.513 "flush": true, 00:23:40.513 "reset": true, 00:23:40.513 "nvme_admin": true, 00:23:40.513 "nvme_io": true, 00:23:40.513 "nvme_io_md": false, 00:23:40.513 "write_zeroes": true, 00:23:40.513 "zcopy": false, 00:23:40.513 "get_zone_info": false, 00:23:40.513 "zone_management": false, 00:23:40.513 "zone_append": false, 00:23:40.513 "compare": true, 00:23:40.513 "compare_and_write": true, 00:23:40.513 "abort": true, 00:23:40.513 "seek_hole": false, 00:23:40.513 "seek_data": false, 00:23:40.513 "copy": true, 00:23:40.513 "nvme_iov_md": false 00:23:40.513 }, 00:23:40.513 "memory_domains": [ 00:23:40.513 { 00:23:40.513 "dma_device_id": "system", 00:23:40.513 "dma_device_type": 1 00:23:40.513 } 00:23:40.513 ], 00:23:40.513 "driver_specific": { 00:23:40.513 "nvme": [ 00:23:40.513 { 00:23:40.513 "trid": { 00:23:40.513 "trtype": "TCP", 00:23:40.513 "adrfam": "IPv4", 00:23:40.513 "traddr": "10.0.0.2", 00:23:40.513 "trsvcid": "4421", 00:23:40.513 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:23:40.513 }, 00:23:40.513 "ctrlr_data": { 00:23:40.513 "cntlid": 3, 00:23:40.513 "vendor_id": "0x8086", 00:23:40.513 "model_number": "SPDK bdev Controller", 00:23:40.513 "serial_number": "00000000000000000000", 00:23:40.513 "firmware_revision": "24.09", 00:23:40.513 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:40.513 "oacs": { 00:23:40.513 "security": 0, 00:23:40.513 "format": 0, 00:23:40.513 "firmware": 0, 00:23:40.513 "ns_manage": 0 00:23:40.513 }, 00:23:40.513 "multi_ctrlr": true, 00:23:40.513 "ana_reporting": false 00:23:40.513 }, 00:23:40.513 "vs": { 00:23:40.513 "nvme_version": "1.3" 00:23:40.513 }, 00:23:40.513 "ns_data": { 00:23:40.513 "id": 1, 00:23:40.513 "can_share": true 00:23:40.513 } 00:23:40.513 } 00:23:40.513 ], 00:23:40.513 "mp_policy": "active_passive" 00:23:40.513 } 00:23:40.513 } 00:23:40.513 ] 00:23:40.513 16:05:09 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:40.513 16:05:09 nvmf_tcp.nvmf_async_init -- host/async_init.sh@72 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:40.513 16:05:09 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:40.513 16:05:09 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:40.773 16:05:09 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:40.773 16:05:09 nvmf_tcp.nvmf_async_init -- host/async_init.sh@75 -- # rm -f 
/tmp/tmp.uCWvrpsGrL 00:23:40.773 16:05:09 nvmf_tcp.nvmf_async_init -- host/async_init.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:23:40.773 16:05:09 nvmf_tcp.nvmf_async_init -- host/async_init.sh@78 -- # nvmftestfini 00:23:40.773 16:05:09 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:40.773 16:05:09 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@117 -- # sync 00:23:40.773 16:05:09 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:40.773 16:05:09 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@120 -- # set +e 00:23:40.773 16:05:09 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:40.773 16:05:09 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:40.773 rmmod nvme_tcp 00:23:40.773 rmmod nvme_fabrics 00:23:40.773 rmmod nvme_keyring 00:23:40.773 16:05:09 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:40.773 16:05:09 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@124 -- # set -e 00:23:40.773 16:05:09 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@125 -- # return 0 00:23:40.773 16:05:09 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@489 -- # '[' -n 3842738 ']' 00:23:40.773 16:05:09 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@490 -- # killprocess 3842738 00:23:40.773 16:05:09 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@948 -- # '[' -z 3842738 ']' 00:23:40.773 16:05:09 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@952 -- # kill -0 3842738 00:23:40.773 16:05:09 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@953 -- # uname 00:23:40.773 16:05:09 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:40.773 16:05:09 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3842738 00:23:40.773 16:05:09 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:23:40.773 16:05:09 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:23:40.773 16:05:09 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3842738' 00:23:40.773 killing process with pid 3842738 00:23:40.773 16:05:09 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@967 -- # kill 3842738 00:23:40.773 [2024-07-15 16:05:09.552342] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:23:40.773 [2024-07-15 16:05:09.552369] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:23:40.773 16:05:09 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@972 -- # wait 3842738 00:23:41.032 16:05:09 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:41.032 16:05:09 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:41.032 16:05:09 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:41.032 16:05:09 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:41.032 16:05:09 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:41.032 16:05:09 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:41.032 16:05:09 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:41.032 16:05:09 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # 
_remove_spdk_ns 00:23:42.936 16:05:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:42.936 00:23:42.936 real 0m9.415s 00:23:42.936 user 0m3.454s 00:23:42.936 sys 0m4.443s 00:23:42.936 16:05:11 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:23:42.936 16:05:11 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:42.936 ************************************ 00:23:42.936 END TEST nvmf_async_init 00:23:42.936 ************************************ 00:23:42.936 16:05:11 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:23:42.936 16:05:11 nvmf_tcp -- nvmf/nvmf.sh@94 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:23:42.936 16:05:11 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:23:42.936 16:05:11 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:42.936 16:05:11 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:42.936 ************************************ 00:23:42.936 START TEST dma 00:23:42.936 ************************************ 00:23:42.936 16:05:11 nvmf_tcp.dma -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:23:43.195 * Looking for test storage... 00:23:43.195 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:43.195 16:05:11 nvmf_tcp.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:43.195 16:05:11 nvmf_tcp.dma -- nvmf/common.sh@7 -- # uname -s 00:23:43.195 16:05:11 nvmf_tcp.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:43.195 16:05:11 nvmf_tcp.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:43.195 16:05:11 nvmf_tcp.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:43.195 16:05:11 nvmf_tcp.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:43.195 16:05:11 nvmf_tcp.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:43.195 16:05:11 nvmf_tcp.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:43.195 16:05:11 nvmf_tcp.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:43.195 16:05:11 nvmf_tcp.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:43.195 16:05:11 nvmf_tcp.dma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:43.195 16:05:11 nvmf_tcp.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:43.195 16:05:11 nvmf_tcp.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:23:43.195 16:05:11 nvmf_tcp.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:23:43.195 16:05:11 nvmf_tcp.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:43.195 16:05:11 nvmf_tcp.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:43.195 16:05:11 nvmf_tcp.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:43.195 16:05:11 nvmf_tcp.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:43.195 16:05:11 nvmf_tcp.dma -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:43.195 16:05:11 nvmf_tcp.dma -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:43.195 16:05:11 nvmf_tcp.dma -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:43.195 16:05:11 nvmf_tcp.dma -- scripts/common.sh@517 -- # source 
/etc/opt/spdk-pkgdep/paths/export.sh 00:23:43.195 16:05:11 nvmf_tcp.dma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:43.196 16:05:11 nvmf_tcp.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:43.196 16:05:11 nvmf_tcp.dma -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:43.196 16:05:11 nvmf_tcp.dma -- paths/export.sh@5 -- # export PATH 00:23:43.196 16:05:11 nvmf_tcp.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:43.196 16:05:11 nvmf_tcp.dma -- nvmf/common.sh@47 -- # : 0 00:23:43.196 16:05:11 nvmf_tcp.dma -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:43.196 16:05:11 nvmf_tcp.dma -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:43.196 16:05:11 nvmf_tcp.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:43.196 16:05:11 nvmf_tcp.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:43.196 16:05:11 nvmf_tcp.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:43.196 16:05:11 nvmf_tcp.dma -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:43.196 16:05:11 nvmf_tcp.dma -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:43.196 16:05:11 nvmf_tcp.dma -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:43.196 16:05:11 nvmf_tcp.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:23:43.196 16:05:11 nvmf_tcp.dma -- host/dma.sh@13 -- # exit 0 00:23:43.196 00:23:43.196 real 0m0.104s 00:23:43.196 user 0m0.042s 00:23:43.196 sys 0m0.068s 00:23:43.196 16:05:11 nvmf_tcp.dma -- common/autotest_common.sh@1124 -- # xtrace_disable 00:23:43.196 16:05:11 nvmf_tcp.dma 
-- common/autotest_common.sh@10 -- # set +x 00:23:43.196 ************************************ 00:23:43.196 END TEST dma 00:23:43.196 ************************************ 00:23:43.196 16:05:11 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:23:43.196 16:05:11 nvmf_tcp -- nvmf/nvmf.sh@97 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:23:43.196 16:05:11 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:23:43.196 16:05:11 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:43.196 16:05:11 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:43.196 ************************************ 00:23:43.196 START TEST nvmf_identify 00:23:43.196 ************************************ 00:23:43.196 16:05:12 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:23:43.196 * Looking for test storage... 00:23:43.196 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:43.196 16:05:12 nvmf_tcp.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:43.196 16:05:12 nvmf_tcp.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:23:43.196 16:05:12 nvmf_tcp.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:43.196 16:05:12 nvmf_tcp.nvmf_identify -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:43.196 16:05:12 nvmf_tcp.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:43.196 16:05:12 nvmf_tcp.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:43.196 16:05:12 nvmf_tcp.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:43.196 16:05:12 nvmf_tcp.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:43.196 16:05:12 nvmf_tcp.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:43.196 16:05:12 nvmf_tcp.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:43.196 16:05:12 nvmf_tcp.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:43.196 16:05:12 nvmf_tcp.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:43.196 16:05:12 nvmf_tcp.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:23:43.455 16:05:12 nvmf_tcp.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:23:43.455 16:05:12 nvmf_tcp.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:43.455 16:05:12 nvmf_tcp.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:43.455 16:05:12 nvmf_tcp.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:43.455 16:05:12 nvmf_tcp.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:43.455 16:05:12 nvmf_tcp.nvmf_identify -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:43.455 16:05:12 nvmf_tcp.nvmf_identify -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:43.455 16:05:12 nvmf_tcp.nvmf_identify -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:43.455 16:05:12 nvmf_tcp.nvmf_identify -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:43.455 16:05:12 nvmf_tcp.nvmf_identify -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:43.455 16:05:12 nvmf_tcp.nvmf_identify -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:43.455 16:05:12 nvmf_tcp.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:43.455 16:05:12 nvmf_tcp.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:23:43.455 16:05:12 nvmf_tcp.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:43.455 16:05:12 nvmf_tcp.nvmf_identify -- nvmf/common.sh@47 -- # : 0 00:23:43.455 16:05:12 nvmf_tcp.nvmf_identify -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:43.455 16:05:12 nvmf_tcp.nvmf_identify -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:43.455 16:05:12 nvmf_tcp.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:43.455 16:05:12 nvmf_tcp.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:43.455 16:05:12 nvmf_tcp.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:43.455 16:05:12 nvmf_tcp.nvmf_identify -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:43.455 16:05:12 nvmf_tcp.nvmf_identify -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:43.455 16:05:12 nvmf_tcp.nvmf_identify -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:43.455 16:05:12 nvmf_tcp.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:23:43.455 16:05:12 nvmf_tcp.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:23:43.455 16:05:12 nvmf_tcp.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:23:43.455 16:05:12 nvmf_tcp.nvmf_identify -- 
nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:43.455 16:05:12 nvmf_tcp.nvmf_identify -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:43.455 16:05:12 nvmf_tcp.nvmf_identify -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:43.455 16:05:12 nvmf_tcp.nvmf_identify -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:43.455 16:05:12 nvmf_tcp.nvmf_identify -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:43.455 16:05:12 nvmf_tcp.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:43.455 16:05:12 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:43.455 16:05:12 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:43.455 16:05:12 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:43.455 16:05:12 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:43.455 16:05:12 nvmf_tcp.nvmf_identify -- nvmf/common.sh@285 -- # xtrace_disable 00:23:43.455 16:05:12 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:48.727 16:05:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:48.727 16:05:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@291 -- # pci_devs=() 00:23:48.727 16:05:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:48.727 16:05:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:48.727 16:05:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:48.727 16:05:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:48.727 16:05:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:48.727 16:05:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@295 -- # net_devs=() 00:23:48.727 16:05:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:48.727 16:05:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@296 -- # e810=() 00:23:48.727 16:05:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@296 -- # local -ga e810 00:23:48.727 16:05:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@297 -- # x722=() 00:23:48.728 16:05:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@297 -- # local -ga x722 00:23:48.728 16:05:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@298 -- # mlx=() 00:23:48.728 16:05:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@298 -- # local -ga mlx 00:23:48.728 16:05:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:48.728 16:05:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:48.728 16:05:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:48.728 16:05:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:48.728 16:05:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:48.728 16:05:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:48.728 16:05:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:48.728 16:05:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:48.728 16:05:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:48.728 16:05:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@317 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:48.728 16:05:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:48.728 16:05:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:48.728 16:05:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:48.728 16:05:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:48.728 16:05:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:48.728 16:05:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:48.728 16:05:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:48.728 16:05:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:48.728 16:05:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:23:48.728 Found 0000:86:00.0 (0x8086 - 0x159b) 00:23:48.728 16:05:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:48.728 16:05:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:48.728 16:05:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:48.728 16:05:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:48.728 16:05:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:48.728 16:05:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:48.728 16:05:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:23:48.728 Found 0000:86:00.1 (0x8086 - 0x159b) 00:23:48.728 16:05:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:48.728 16:05:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:48.728 16:05:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:48.728 16:05:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:48.728 16:05:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:48.728 16:05:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:48.728 16:05:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:48.728 16:05:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:48.728 16:05:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:48.728 16:05:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:48.728 16:05:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:48.728 16:05:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:48.728 16:05:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:48.728 16:05:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:48.728 16:05:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:48.728 16:05:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:23:48.728 Found net devices under 0000:86:00.0: cvl_0_0 00:23:48.728 16:05:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:48.728 16:05:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 
00:23:48.728 16:05:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:48.728 16:05:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:48.728 16:05:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:48.728 16:05:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:48.728 16:05:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:48.728 16:05:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:48.728 16:05:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:23:48.728 Found net devices under 0000:86:00.1: cvl_0_1 00:23:48.728 16:05:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:48.728 16:05:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:48.728 16:05:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # is_hw=yes 00:23:48.728 16:05:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:48.728 16:05:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:48.728 16:05:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:48.728 16:05:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:48.728 16:05:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:48.728 16:05:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:48.728 16:05:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:48.728 16:05:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:48.728 16:05:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:48.728 16:05:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:48.728 16:05:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:48.728 16:05:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:48.728 16:05:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:48.728 16:05:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:48.728 16:05:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:48.728 16:05:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:48.728 16:05:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:48.728 16:05:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:48.728 16:05:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:48.728 16:05:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:48.728 16:05:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:48.728 16:05:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:48.728 16:05:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:48.728 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:23:48.728 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.183 ms 00:23:48.728 00:23:48.728 --- 10.0.0.2 ping statistics --- 00:23:48.728 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:48.728 rtt min/avg/max/mdev = 0.183/0.183/0.183/0.000 ms 00:23:48.728 16:05:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:48.728 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:48.728 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.181 ms 00:23:48.728 00:23:48.728 --- 10.0.0.1 ping statistics --- 00:23:48.728 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:48.728 rtt min/avg/max/mdev = 0.181/0.181/0.181/0.000 ms 00:23:48.728 16:05:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:48.728 16:05:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@422 -- # return 0 00:23:48.728 16:05:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:48.728 16:05:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:48.728 16:05:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:48.728 16:05:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:48.728 16:05:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:48.728 16:05:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:48.728 16:05:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:48.728 16:05:17 nvmf_tcp.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:23:48.728 16:05:17 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:48.728 16:05:17 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:48.728 16:05:17 nvmf_tcp.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=3846362 00:23:48.728 16:05:17 nvmf_tcp.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:48.728 16:05:17 nvmf_tcp.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 3846362 00:23:48.728 16:05:17 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@829 -- # '[' -z 3846362 ']' 00:23:48.728 16:05:17 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:48.728 16:05:17 nvmf_tcp.nvmf_identify -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:23:48.728 16:05:17 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:48.728 16:05:17 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:48.728 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:48.728 16:05:17 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:48.728 16:05:17 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:48.728 [2024-07-15 16:05:17.394143] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 
00:23:48.728 [2024-07-15 16:05:17.394188] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:48.728 EAL: No free 2048 kB hugepages reported on node 1 00:23:48.728 [2024-07-15 16:05:17.451629] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:48.728 [2024-07-15 16:05:17.538018] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:48.728 [2024-07-15 16:05:17.538050] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:48.728 [2024-07-15 16:05:17.538057] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:48.728 [2024-07-15 16:05:17.538063] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:48.728 [2024-07-15 16:05:17.538068] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:48.728 [2024-07-15 16:05:17.538114] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:48.728 [2024-07-15 16:05:17.538215] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:23:48.728 [2024-07-15 16:05:17.538240] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:23:48.729 [2024-07-15 16:05:17.538247] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:49.328 16:05:18 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:49.328 16:05:18 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@862 -- # return 0 00:23:49.328 16:05:18 nvmf_tcp.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:49.328 16:05:18 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:49.328 16:05:18 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:49.328 [2024-07-15 16:05:18.208069] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:49.328 16:05:18 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:49.328 16:05:18 nvmf_tcp.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:23:49.328 16:05:18 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:49.328 16:05:18 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:49.329 16:05:18 nvmf_tcp.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:23:49.329 16:05:18 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:49.329 16:05:18 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:49.590 Malloc0 00:23:49.590 16:05:18 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:49.590 16:05:18 nvmf_tcp.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:49.590 16:05:18 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:49.590 16:05:18 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:49.590 16:05:18 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:49.590 16:05:18 nvmf_tcp.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid 
ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:23:49.590 16:05:18 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:49.590 16:05:18 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:49.590 16:05:18 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:49.590 16:05:18 nvmf_tcp.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:49.590 16:05:18 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:49.590 16:05:18 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:49.590 [2024-07-15 16:05:18.296158] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:49.590 16:05:18 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:49.590 16:05:18 nvmf_tcp.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:23:49.590 16:05:18 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:49.590 16:05:18 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:49.591 16:05:18 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:49.591 16:05:18 nvmf_tcp.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:23:49.591 16:05:18 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:49.591 16:05:18 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:49.591 [ 00:23:49.591 { 00:23:49.591 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:23:49.591 "subtype": "Discovery", 00:23:49.591 "listen_addresses": [ 00:23:49.591 { 00:23:49.591 "trtype": "TCP", 00:23:49.591 "adrfam": "IPv4", 00:23:49.591 "traddr": "10.0.0.2", 00:23:49.591 "trsvcid": "4420" 00:23:49.591 } 00:23:49.591 ], 00:23:49.591 "allow_any_host": true, 00:23:49.591 "hosts": [] 00:23:49.591 }, 00:23:49.591 { 00:23:49.591 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:49.591 "subtype": "NVMe", 00:23:49.591 "listen_addresses": [ 00:23:49.591 { 00:23:49.591 "trtype": "TCP", 00:23:49.591 "adrfam": "IPv4", 00:23:49.591 "traddr": "10.0.0.2", 00:23:49.591 "trsvcid": "4420" 00:23:49.591 } 00:23:49.591 ], 00:23:49.591 "allow_any_host": true, 00:23:49.591 "hosts": [], 00:23:49.591 "serial_number": "SPDK00000000000001", 00:23:49.591 "model_number": "SPDK bdev Controller", 00:23:49.591 "max_namespaces": 32, 00:23:49.591 "min_cntlid": 1, 00:23:49.591 "max_cntlid": 65519, 00:23:49.591 "namespaces": [ 00:23:49.591 { 00:23:49.591 "nsid": 1, 00:23:49.591 "bdev_name": "Malloc0", 00:23:49.591 "name": "Malloc0", 00:23:49.591 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:23:49.591 "eui64": "ABCDEF0123456789", 00:23:49.591 "uuid": "80861deb-522d-44fe-b2a3-c9db774e9393" 00:23:49.591 } 00:23:49.591 ] 00:23:49.591 } 00:23:49.591 ] 00:23:49.591 16:05:18 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:49.591 16:05:18 nvmf_tcp.nvmf_identify -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:23:49.591 [2024-07-15 16:05:18.348186] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 
00:23:49.591 [2024-07-15 16:05:18.348218] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3846578 ] 00:23:49.591 EAL: No free 2048 kB hugepages reported on node 1 00:23:49.591 [2024-07-15 16:05:18.376774] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:23:49.591 [2024-07-15 16:05:18.376827] nvme_tcp.c:2338:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:23:49.591 [2024-07-15 16:05:18.376832] nvme_tcp.c:2342:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:23:49.591 [2024-07-15 16:05:18.376843] nvme_tcp.c:2360:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:23:49.591 [2024-07-15 16:05:18.376849] sock.c: 337:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:23:49.591 [2024-07-15 16:05:18.377209] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:23:49.591 [2024-07-15 16:05:18.377242] nvme_tcp.c:1555:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x2441ec0 0 00:23:49.591 [2024-07-15 16:05:18.391236] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:23:49.591 [2024-07-15 16:05:18.391247] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:23:49.591 [2024-07-15 16:05:18.391251] nvme_tcp.c:1601:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:23:49.591 [2024-07-15 16:05:18.391254] nvme_tcp.c:1602:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:23:49.591 [2024-07-15 16:05:18.391288] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:49.591 [2024-07-15 16:05:18.391294] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:49.591 [2024-07-15 16:05:18.391298] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2441ec0) 00:23:49.591 [2024-07-15 16:05:18.391309] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:23:49.591 [2024-07-15 16:05:18.391324] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24c4e40, cid 0, qid 0 00:23:49.591 [2024-07-15 16:05:18.399233] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:49.591 [2024-07-15 16:05:18.399241] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:49.591 [2024-07-15 16:05:18.399244] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:49.591 [2024-07-15 16:05:18.399248] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24c4e40) on tqpair=0x2441ec0 00:23:49.591 [2024-07-15 16:05:18.399257] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:23:49.591 [2024-07-15 16:05:18.399262] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:23:49.591 [2024-07-15 16:05:18.399267] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:23:49.591 [2024-07-15 16:05:18.399279] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:49.591 [2024-07-15 16:05:18.399283] nvme_tcp.c: 
967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:49.591 [2024-07-15 16:05:18.399286] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2441ec0) 00:23:49.591 [2024-07-15 16:05:18.399293] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.591 [2024-07-15 16:05:18.399305] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24c4e40, cid 0, qid 0 00:23:49.591 [2024-07-15 16:05:18.399493] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:49.591 [2024-07-15 16:05:18.399499] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:49.591 [2024-07-15 16:05:18.399502] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:49.591 [2024-07-15 16:05:18.399506] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24c4e40) on tqpair=0x2441ec0 00:23:49.591 [2024-07-15 16:05:18.399511] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:23:49.591 [2024-07-15 16:05:18.399517] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:23:49.591 [2024-07-15 16:05:18.399523] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:49.591 [2024-07-15 16:05:18.399526] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:49.591 [2024-07-15 16:05:18.399532] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2441ec0) 00:23:49.591 [2024-07-15 16:05:18.399538] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.591 [2024-07-15 16:05:18.399549] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24c4e40, cid 0, qid 0 00:23:49.591 [2024-07-15 16:05:18.399632] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:49.591 [2024-07-15 16:05:18.399638] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:49.591 [2024-07-15 16:05:18.399641] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:49.591 [2024-07-15 16:05:18.399644] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24c4e40) on tqpair=0x2441ec0 00:23:49.591 [2024-07-15 16:05:18.399649] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:23:49.591 [2024-07-15 16:05:18.399655] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:23:49.591 [2024-07-15 16:05:18.399661] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:49.591 [2024-07-15 16:05:18.399664] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:49.591 [2024-07-15 16:05:18.399667] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2441ec0) 00:23:49.591 [2024-07-15 16:05:18.399673] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.591 [2024-07-15 16:05:18.399682] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24c4e40, cid 0, qid 0 00:23:49.591 [2024-07-15 16:05:18.399757] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:49.591 
[2024-07-15 16:05:18.399762] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:49.591 [2024-07-15 16:05:18.399765] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:49.591 [2024-07-15 16:05:18.399768] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24c4e40) on tqpair=0x2441ec0 00:23:49.591 [2024-07-15 16:05:18.399773] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:23:49.591 [2024-07-15 16:05:18.399782] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:49.591 [2024-07-15 16:05:18.399785] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:49.591 [2024-07-15 16:05:18.399788] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2441ec0) 00:23:49.591 [2024-07-15 16:05:18.399794] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.591 [2024-07-15 16:05:18.399803] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24c4e40, cid 0, qid 0 00:23:49.591 [2024-07-15 16:05:18.399884] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:49.591 [2024-07-15 16:05:18.399889] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:49.591 [2024-07-15 16:05:18.399892] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:49.591 [2024-07-15 16:05:18.399896] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24c4e40) on tqpair=0x2441ec0 00:23:49.591 [2024-07-15 16:05:18.399900] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:23:49.591 [2024-07-15 16:05:18.399904] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:23:49.591 [2024-07-15 16:05:18.399910] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:23:49.591 [2024-07-15 16:05:18.400015] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:23:49.591 [2024-07-15 16:05:18.400019] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:23:49.591 [2024-07-15 16:05:18.400029] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:49.591 [2024-07-15 16:05:18.400032] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:49.591 [2024-07-15 16:05:18.400035] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2441ec0) 00:23:49.591 [2024-07-15 16:05:18.400041] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.591 [2024-07-15 16:05:18.400050] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24c4e40, cid 0, qid 0 00:23:49.591 [2024-07-15 16:05:18.400176] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:49.591 [2024-07-15 16:05:18.400181] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:49.591 [2024-07-15 16:05:18.400184] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: 
*DEBUG*: enter 00:23:49.591 [2024-07-15 16:05:18.400187] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24c4e40) on tqpair=0x2441ec0 00:23:49.591 [2024-07-15 16:05:18.400191] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:23:49.591 [2024-07-15 16:05:18.400199] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:49.591 [2024-07-15 16:05:18.400202] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:49.592 [2024-07-15 16:05:18.400205] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2441ec0) 00:23:49.592 [2024-07-15 16:05:18.400211] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.592 [2024-07-15 16:05:18.400220] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24c4e40, cid 0, qid 0 00:23:49.592 [2024-07-15 16:05:18.400332] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:49.592 [2024-07-15 16:05:18.400338] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:49.592 [2024-07-15 16:05:18.400341] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:49.592 [2024-07-15 16:05:18.400344] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24c4e40) on tqpair=0x2441ec0 00:23:49.592 [2024-07-15 16:05:18.400348] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:23:49.592 [2024-07-15 16:05:18.400352] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:23:49.592 [2024-07-15 16:05:18.400358] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:23:49.592 [2024-07-15 16:05:18.400366] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:23:49.592 [2024-07-15 16:05:18.400374] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:49.592 [2024-07-15 16:05:18.400377] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2441ec0) 00:23:49.592 [2024-07-15 16:05:18.400383] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.592 [2024-07-15 16:05:18.400393] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24c4e40, cid 0, qid 0 00:23:49.592 [2024-07-15 16:05:18.400484] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:49.592 [2024-07-15 16:05:18.400489] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:49.592 [2024-07-15 16:05:18.400492] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:49.592 [2024-07-15 16:05:18.400496] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2441ec0): datao=0, datal=4096, cccid=0 00:23:49.592 [2024-07-15 16:05:18.400500] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x24c4e40) on tqpair(0x2441ec0): expected_datao=0, payload_size=4096 00:23:49.592 [2024-07-15 16:05:18.400505] nvme_tcp.c: 790:nvme_tcp_build_contig_request: 
*DEBUG*: enter 00:23:49.592 [2024-07-15 16:05:18.400563] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:49.592 [2024-07-15 16:05:18.400567] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:49.592 [2024-07-15 16:05:18.400641] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:49.592 [2024-07-15 16:05:18.400647] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:49.592 [2024-07-15 16:05:18.400649] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:49.592 [2024-07-15 16:05:18.400653] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24c4e40) on tqpair=0x2441ec0 00:23:49.592 [2024-07-15 16:05:18.400660] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:23:49.592 [2024-07-15 16:05:18.400667] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:23:49.592 [2024-07-15 16:05:18.400671] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:23:49.592 [2024-07-15 16:05:18.400675] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:23:49.592 [2024-07-15 16:05:18.400679] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:23:49.592 [2024-07-15 16:05:18.400683] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:23:49.592 [2024-07-15 16:05:18.400691] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:23:49.592 [2024-07-15 16:05:18.400697] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:49.592 [2024-07-15 16:05:18.400700] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:49.592 [2024-07-15 16:05:18.400703] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2441ec0) 00:23:49.592 [2024-07-15 16:05:18.400709] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:23:49.592 [2024-07-15 16:05:18.400719] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24c4e40, cid 0, qid 0 00:23:49.592 [2024-07-15 16:05:18.400798] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:49.592 [2024-07-15 16:05:18.400804] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:49.592 [2024-07-15 16:05:18.400807] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:49.592 [2024-07-15 16:05:18.400810] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24c4e40) on tqpair=0x2441ec0 00:23:49.592 [2024-07-15 16:05:18.400816] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:49.592 [2024-07-15 16:05:18.400820] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:49.592 [2024-07-15 16:05:18.400823] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2441ec0) 00:23:49.592 [2024-07-15 16:05:18.400828] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:49.592 [2024-07-15 16:05:18.400834] 
nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:49.592 [2024-07-15 16:05:18.400837] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:49.592 [2024-07-15 16:05:18.400840] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x2441ec0) 00:23:49.592 [2024-07-15 16:05:18.400844] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:49.592 [2024-07-15 16:05:18.400850] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:49.592 [2024-07-15 16:05:18.400853] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:49.592 [2024-07-15 16:05:18.400856] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x2441ec0) 00:23:49.592 [2024-07-15 16:05:18.400862] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:49.592 [2024-07-15 16:05:18.400868] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:49.592 [2024-07-15 16:05:18.400871] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:49.592 [2024-07-15 16:05:18.400874] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2441ec0) 00:23:49.592 [2024-07-15 16:05:18.400879] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:49.592 [2024-07-15 16:05:18.400883] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:23:49.592 [2024-07-15 16:05:18.400893] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:23:49.592 [2024-07-15 16:05:18.400898] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:49.592 [2024-07-15 16:05:18.400902] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2441ec0) 00:23:49.592 [2024-07-15 16:05:18.400907] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.592 [2024-07-15 16:05:18.400919] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24c4e40, cid 0, qid 0 00:23:49.592 [2024-07-15 16:05:18.400923] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24c4fc0, cid 1, qid 0 00:23:49.592 [2024-07-15 16:05:18.400927] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24c5140, cid 2, qid 0 00:23:49.592 [2024-07-15 16:05:18.400931] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24c52c0, cid 3, qid 0 00:23:49.592 [2024-07-15 16:05:18.400935] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24c5440, cid 4, qid 0 00:23:49.592 [2024-07-15 16:05:18.401059] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:49.592 [2024-07-15 16:05:18.401065] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:49.592 [2024-07-15 16:05:18.401067] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:49.592 [2024-07-15 16:05:18.401071] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24c5440) on tqpair=0x2441ec0 00:23:49.592 [2024-07-15 16:05:18.401075] 
nvme_ctrlr.c:3022:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:23:49.592 [2024-07-15 16:05:18.401079] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:23:49.592 [2024-07-15 16:05:18.401088] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:49.592 [2024-07-15 16:05:18.401091] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2441ec0) 00:23:49.592 [2024-07-15 16:05:18.401097] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.592 [2024-07-15 16:05:18.401105] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24c5440, cid 4, qid 0 00:23:49.592 [2024-07-15 16:05:18.401194] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:49.592 [2024-07-15 16:05:18.401200] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:49.592 [2024-07-15 16:05:18.401203] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:49.592 [2024-07-15 16:05:18.401206] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2441ec0): datao=0, datal=4096, cccid=4 00:23:49.592 [2024-07-15 16:05:18.401210] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x24c5440) on tqpair(0x2441ec0): expected_datao=0, payload_size=4096 00:23:49.592 [2024-07-15 16:05:18.401214] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:49.592 [2024-07-15 16:05:18.401220] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:49.592 [2024-07-15 16:05:18.401223] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:49.592 [2024-07-15 16:05:18.401267] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:49.592 [2024-07-15 16:05:18.401272] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:49.592 [2024-07-15 16:05:18.401275] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:49.592 [2024-07-15 16:05:18.401279] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24c5440) on tqpair=0x2441ec0 00:23:49.592 [2024-07-15 16:05:18.401290] nvme_ctrlr.c:4160:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:23:49.592 [2024-07-15 16:05:18.401311] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:49.592 [2024-07-15 16:05:18.401315] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2441ec0) 00:23:49.592 [2024-07-15 16:05:18.401321] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.592 [2024-07-15 16:05:18.401326] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:49.592 [2024-07-15 16:05:18.401330] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:49.592 [2024-07-15 16:05:18.401332] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x2441ec0) 00:23:49.592 [2024-07-15 16:05:18.401338] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:23:49.592 [2024-07-15 16:05:18.401351] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp 
req 0x24c5440, cid 4, qid 0 00:23:49.592 [2024-07-15 16:05:18.401356] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24c55c0, cid 5, qid 0 00:23:49.592 [2024-07-15 16:05:18.401513] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:49.592 [2024-07-15 16:05:18.401518] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:49.592 [2024-07-15 16:05:18.401521] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:49.592 [2024-07-15 16:05:18.401524] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2441ec0): datao=0, datal=1024, cccid=4 00:23:49.592 [2024-07-15 16:05:18.401528] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x24c5440) on tqpair(0x2441ec0): expected_datao=0, payload_size=1024 00:23:49.593 [2024-07-15 16:05:18.401531] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:49.593 [2024-07-15 16:05:18.401536] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:49.593 [2024-07-15 16:05:18.401540] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:49.593 [2024-07-15 16:05:18.401544] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:49.593 [2024-07-15 16:05:18.401549] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:49.593 [2024-07-15 16:05:18.401552] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:49.593 [2024-07-15 16:05:18.401555] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24c55c0) on tqpair=0x2441ec0 00:23:49.593 [2024-07-15 16:05:18.444232] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:49.593 [2024-07-15 16:05:18.444242] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:49.593 [2024-07-15 16:05:18.444245] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:49.593 [2024-07-15 16:05:18.444249] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24c5440) on tqpair=0x2441ec0 00:23:49.593 [2024-07-15 16:05:18.444267] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:49.593 [2024-07-15 16:05:18.444272] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2441ec0) 00:23:49.593 [2024-07-15 16:05:18.444280] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.593 [2024-07-15 16:05:18.444296] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24c5440, cid 4, qid 0 00:23:49.593 [2024-07-15 16:05:18.444463] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:49.593 [2024-07-15 16:05:18.444469] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:49.593 [2024-07-15 16:05:18.444474] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:49.593 [2024-07-15 16:05:18.444477] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2441ec0): datao=0, datal=3072, cccid=4 00:23:49.593 [2024-07-15 16:05:18.444481] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x24c5440) on tqpair(0x2441ec0): expected_datao=0, payload_size=3072 00:23:49.593 [2024-07-15 16:05:18.444485] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:49.593 [2024-07-15 16:05:18.444553] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:49.593 [2024-07-15 16:05:18.444557] 
nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:49.593 [2024-07-15 16:05:18.444665] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:49.593 [2024-07-15 16:05:18.444670] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:49.593 [2024-07-15 16:05:18.444673] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:49.593 [2024-07-15 16:05:18.444676] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24c5440) on tqpair=0x2441ec0 00:23:49.593 [2024-07-15 16:05:18.444684] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:49.593 [2024-07-15 16:05:18.444687] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2441ec0) 00:23:49.593 [2024-07-15 16:05:18.444693] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.593 [2024-07-15 16:05:18.444707] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24c5440, cid 4, qid 0 00:23:49.593 [2024-07-15 16:05:18.444794] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:49.593 [2024-07-15 16:05:18.444799] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:49.593 [2024-07-15 16:05:18.444802] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:49.593 [2024-07-15 16:05:18.444805] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2441ec0): datao=0, datal=8, cccid=4 00:23:49.593 [2024-07-15 16:05:18.444809] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x24c5440) on tqpair(0x2441ec0): expected_datao=0, payload_size=8 00:23:49.593 [2024-07-15 16:05:18.444813] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:49.593 [2024-07-15 16:05:18.444818] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:49.593 [2024-07-15 16:05:18.444821] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:49.593 [2024-07-15 16:05:18.486377] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:49.593 [2024-07-15 16:05:18.486389] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:49.593 [2024-07-15 16:05:18.486392] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:49.593 [2024-07-15 16:05:18.486396] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24c5440) on tqpair=0x2441ec0 00:23:49.593 ===================================================== 00:23:49.593 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:23:49.593 ===================================================== 00:23:49.593 Controller Capabilities/Features 00:23:49.593 ================================ 00:23:49.593 Vendor ID: 0000 00:23:49.593 Subsystem Vendor ID: 0000 00:23:49.593 Serial Number: .................... 00:23:49.593 Model Number: ........................................ 
00:23:49.593 Firmware Version: 24.09 00:23:49.593 Recommended Arb Burst: 0 00:23:49.593 IEEE OUI Identifier: 00 00 00 00:23:49.593 Multi-path I/O 00:23:49.593 May have multiple subsystem ports: No 00:23:49.593 May have multiple controllers: No 00:23:49.593 Associated with SR-IOV VF: No 00:23:49.593 Max Data Transfer Size: 131072 00:23:49.593 Max Number of Namespaces: 0 00:23:49.593 Max Number of I/O Queues: 1024 00:23:49.593 NVMe Specification Version (VS): 1.3 00:23:49.593 NVMe Specification Version (Identify): 1.3 00:23:49.593 Maximum Queue Entries: 128 00:23:49.593 Contiguous Queues Required: Yes 00:23:49.593 Arbitration Mechanisms Supported 00:23:49.593 Weighted Round Robin: Not Supported 00:23:49.593 Vendor Specific: Not Supported 00:23:49.593 Reset Timeout: 15000 ms 00:23:49.593 Doorbell Stride: 4 bytes 00:23:49.593 NVM Subsystem Reset: Not Supported 00:23:49.593 Command Sets Supported 00:23:49.593 NVM Command Set: Supported 00:23:49.593 Boot Partition: Not Supported 00:23:49.593 Memory Page Size Minimum: 4096 bytes 00:23:49.593 Memory Page Size Maximum: 4096 bytes 00:23:49.593 Persistent Memory Region: Not Supported 00:23:49.593 Optional Asynchronous Events Supported 00:23:49.593 Namespace Attribute Notices: Not Supported 00:23:49.593 Firmware Activation Notices: Not Supported 00:23:49.593 ANA Change Notices: Not Supported 00:23:49.593 PLE Aggregate Log Change Notices: Not Supported 00:23:49.593 LBA Status Info Alert Notices: Not Supported 00:23:49.593 EGE Aggregate Log Change Notices: Not Supported 00:23:49.593 Normal NVM Subsystem Shutdown event: Not Supported 00:23:49.593 Zone Descriptor Change Notices: Not Supported 00:23:49.593 Discovery Log Change Notices: Supported 00:23:49.593 Controller Attributes 00:23:49.593 128-bit Host Identifier: Not Supported 00:23:49.593 Non-Operational Permissive Mode: Not Supported 00:23:49.593 NVM Sets: Not Supported 00:23:49.593 Read Recovery Levels: Not Supported 00:23:49.593 Endurance Groups: Not Supported 00:23:49.593 Predictable Latency Mode: Not Supported 00:23:49.593 Traffic Based Keep ALive: Not Supported 00:23:49.593 Namespace Granularity: Not Supported 00:23:49.593 SQ Associations: Not Supported 00:23:49.593 UUID List: Not Supported 00:23:49.593 Multi-Domain Subsystem: Not Supported 00:23:49.593 Fixed Capacity Management: Not Supported 00:23:49.593 Variable Capacity Management: Not Supported 00:23:49.593 Delete Endurance Group: Not Supported 00:23:49.593 Delete NVM Set: Not Supported 00:23:49.593 Extended LBA Formats Supported: Not Supported 00:23:49.593 Flexible Data Placement Supported: Not Supported 00:23:49.593 00:23:49.593 Controller Memory Buffer Support 00:23:49.593 ================================ 00:23:49.593 Supported: No 00:23:49.593 00:23:49.593 Persistent Memory Region Support 00:23:49.593 ================================ 00:23:49.593 Supported: No 00:23:49.593 00:23:49.593 Admin Command Set Attributes 00:23:49.593 ============================ 00:23:49.593 Security Send/Receive: Not Supported 00:23:49.593 Format NVM: Not Supported 00:23:49.593 Firmware Activate/Download: Not Supported 00:23:49.593 Namespace Management: Not Supported 00:23:49.593 Device Self-Test: Not Supported 00:23:49.593 Directives: Not Supported 00:23:49.593 NVMe-MI: Not Supported 00:23:49.593 Virtualization Management: Not Supported 00:23:49.593 Doorbell Buffer Config: Not Supported 00:23:49.593 Get LBA Status Capability: Not Supported 00:23:49.593 Command & Feature Lockdown Capability: Not Supported 00:23:49.593 Abort Command Limit: 1 00:23:49.593 Async 
Event Request Limit: 4 00:23:49.593 Number of Firmware Slots: N/A 00:23:49.593 Firmware Slot 1 Read-Only: N/A 00:23:49.593 Firmware Activation Without Reset: N/A 00:23:49.593 Multiple Update Detection Support: N/A 00:23:49.593 Firmware Update Granularity: No Information Provided 00:23:49.593 Per-Namespace SMART Log: No 00:23:49.593 Asymmetric Namespace Access Log Page: Not Supported 00:23:49.593 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:23:49.593 Command Effects Log Page: Not Supported 00:23:49.593 Get Log Page Extended Data: Supported 00:23:49.593 Telemetry Log Pages: Not Supported 00:23:49.593 Persistent Event Log Pages: Not Supported 00:23:49.593 Supported Log Pages Log Page: May Support 00:23:49.593 Commands Supported & Effects Log Page: Not Supported 00:23:49.593 Feature Identifiers & Effects Log Page:May Support 00:23:49.593 NVMe-MI Commands & Effects Log Page: May Support 00:23:49.593 Data Area 4 for Telemetry Log: Not Supported 00:23:49.593 Error Log Page Entries Supported: 128 00:23:49.593 Keep Alive: Not Supported 00:23:49.593 00:23:49.593 NVM Command Set Attributes 00:23:49.593 ========================== 00:23:49.593 Submission Queue Entry Size 00:23:49.593 Max: 1 00:23:49.593 Min: 1 00:23:49.593 Completion Queue Entry Size 00:23:49.593 Max: 1 00:23:49.593 Min: 1 00:23:49.593 Number of Namespaces: 0 00:23:49.593 Compare Command: Not Supported 00:23:49.593 Write Uncorrectable Command: Not Supported 00:23:49.593 Dataset Management Command: Not Supported 00:23:49.593 Write Zeroes Command: Not Supported 00:23:49.593 Set Features Save Field: Not Supported 00:23:49.593 Reservations: Not Supported 00:23:49.593 Timestamp: Not Supported 00:23:49.593 Copy: Not Supported 00:23:49.593 Volatile Write Cache: Not Present 00:23:49.593 Atomic Write Unit (Normal): 1 00:23:49.593 Atomic Write Unit (PFail): 1 00:23:49.593 Atomic Compare & Write Unit: 1 00:23:49.594 Fused Compare & Write: Supported 00:23:49.594 Scatter-Gather List 00:23:49.594 SGL Command Set: Supported 00:23:49.594 SGL Keyed: Supported 00:23:49.594 SGL Bit Bucket Descriptor: Not Supported 00:23:49.594 SGL Metadata Pointer: Not Supported 00:23:49.594 Oversized SGL: Not Supported 00:23:49.594 SGL Metadata Address: Not Supported 00:23:49.594 SGL Offset: Supported 00:23:49.594 Transport SGL Data Block: Not Supported 00:23:49.594 Replay Protected Memory Block: Not Supported 00:23:49.594 00:23:49.594 Firmware Slot Information 00:23:49.594 ========================= 00:23:49.594 Active slot: 0 00:23:49.594 00:23:49.594 00:23:49.594 Error Log 00:23:49.594 ========= 00:23:49.594 00:23:49.594 Active Namespaces 00:23:49.594 ================= 00:23:49.594 Discovery Log Page 00:23:49.594 ================== 00:23:49.594 Generation Counter: 2 00:23:49.594 Number of Records: 2 00:23:49.594 Record Format: 0 00:23:49.594 00:23:49.594 Discovery Log Entry 0 00:23:49.594 ---------------------- 00:23:49.594 Transport Type: 3 (TCP) 00:23:49.594 Address Family: 1 (IPv4) 00:23:49.594 Subsystem Type: 3 (Current Discovery Subsystem) 00:23:49.594 Entry Flags: 00:23:49.594 Duplicate Returned Information: 1 00:23:49.594 Explicit Persistent Connection Support for Discovery: 1 00:23:49.594 Transport Requirements: 00:23:49.594 Secure Channel: Not Required 00:23:49.594 Port ID: 0 (0x0000) 00:23:49.594 Controller ID: 65535 (0xffff) 00:23:49.594 Admin Max SQ Size: 128 00:23:49.594 Transport Service Identifier: 4420 00:23:49.594 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:23:49.594 Transport Address: 10.0.0.2 00:23:49.594 
Discovery Log Entry 1 00:23:49.594 ---------------------- 00:23:49.594 Transport Type: 3 (TCP) 00:23:49.594 Address Family: 1 (IPv4) 00:23:49.594 Subsystem Type: 2 (NVM Subsystem) 00:23:49.594 Entry Flags: 00:23:49.594 Duplicate Returned Information: 0 00:23:49.594 Explicit Persistent Connection Support for Discovery: 0 00:23:49.594 Transport Requirements: 00:23:49.594 Secure Channel: Not Required 00:23:49.594 Port ID: 0 (0x0000) 00:23:49.594 Controller ID: 65535 (0xffff) 00:23:49.594 Admin Max SQ Size: 128 00:23:49.594 Transport Service Identifier: 4420 00:23:49.594 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:23:49.594 Transport Address: 10.0.0.2 [2024-07-15 16:05:18.486473] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:23:49.594 [2024-07-15 16:05:18.486483] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24c4e40) on tqpair=0x2441ec0 00:23:49.594 [2024-07-15 16:05:18.486489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.594 [2024-07-15 16:05:18.486493] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24c4fc0) on tqpair=0x2441ec0 00:23:49.594 [2024-07-15 16:05:18.486497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.594 [2024-07-15 16:05:18.486501] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24c5140) on tqpair=0x2441ec0 00:23:49.594 [2024-07-15 16:05:18.486505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.594 [2024-07-15 16:05:18.486509] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24c52c0) on tqpair=0x2441ec0 00:23:49.594 [2024-07-15 16:05:18.486513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.594 [2024-07-15 16:05:18.486524] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:49.594 [2024-07-15 16:05:18.486528] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:49.594 [2024-07-15 16:05:18.486531] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2441ec0) 00:23:49.594 [2024-07-15 16:05:18.486537] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.594 [2024-07-15 16:05:18.486551] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24c52c0, cid 3, qid 0 00:23:49.594 [2024-07-15 16:05:18.486623] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:49.594 [2024-07-15 16:05:18.486629] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:49.594 [2024-07-15 16:05:18.486632] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:49.594 [2024-07-15 16:05:18.486635] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24c52c0) on tqpair=0x2441ec0 00:23:49.594 [2024-07-15 16:05:18.486641] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:49.594 [2024-07-15 16:05:18.486645] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:49.594 [2024-07-15 16:05:18.486647] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2441ec0) 00:23:49.594 [2024-07-15 
16:05:18.486653] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.594 [2024-07-15 16:05:18.486664] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24c52c0, cid 3, qid 0 00:23:49.594 [2024-07-15 16:05:18.486747] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:49.594 [2024-07-15 16:05:18.486753] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:49.594 [2024-07-15 16:05:18.486755] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:49.594 [2024-07-15 16:05:18.486758] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24c52c0) on tqpair=0x2441ec0 00:23:49.594 [2024-07-15 16:05:18.486763] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:23:49.594 [2024-07-15 16:05:18.486767] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:23:49.594 [2024-07-15 16:05:18.486775] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:49.594 [2024-07-15 16:05:18.486778] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:49.594 [2024-07-15 16:05:18.486781] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2441ec0) 00:23:49.594 [2024-07-15 16:05:18.486787] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.594 [2024-07-15 16:05:18.486796] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24c52c0, cid 3, qid 0 00:23:49.594 [2024-07-15 16:05:18.486874] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:49.594 [2024-07-15 16:05:18.486879] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:49.594 [2024-07-15 16:05:18.486882] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:49.594 [2024-07-15 16:05:18.486885] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24c52c0) on tqpair=0x2441ec0 00:23:49.594 [2024-07-15 16:05:18.486893] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:49.594 [2024-07-15 16:05:18.486897] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:49.594 [2024-07-15 16:05:18.486900] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2441ec0) 00:23:49.594 [2024-07-15 16:05:18.486905] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.594 [2024-07-15 16:05:18.486914] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24c52c0, cid 3, qid 0 00:23:49.594 [2024-07-15 16:05:18.486991] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:49.594 [2024-07-15 16:05:18.486996] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:49.594 [2024-07-15 16:05:18.487001] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:49.594 [2024-07-15 16:05:18.487004] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24c52c0) on tqpair=0x2441ec0 00:23:49.594 [2024-07-15 16:05:18.487012] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:49.594 [2024-07-15 16:05:18.487016] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:49.594 [2024-07-15 16:05:18.487019] 
nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2441ec0) 00:23:49.594 [2024-07-15 16:05:18.487024] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.594 [2024-07-15 16:05:18.487033] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24c52c0, cid 3, qid 0 00:23:49.594 [2024-07-15 16:05:18.487108] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:49.594 [2024-07-15 16:05:18.487114] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:49.594 [2024-07-15 16:05:18.487117] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:49.594 [2024-07-15 16:05:18.487120] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24c52c0) on tqpair=0x2441ec0 00:23:49.594 [2024-07-15 16:05:18.487128] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:49.594 [2024-07-15 16:05:18.487131] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:49.594 [2024-07-15 16:05:18.487134] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2441ec0) 00:23:49.594 [2024-07-15 16:05:18.487140] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.594 [2024-07-15 16:05:18.487148] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24c52c0, cid 3, qid 0 00:23:49.594 [2024-07-15 16:05:18.487228] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:49.594 [2024-07-15 16:05:18.487234] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:49.594 [2024-07-15 16:05:18.487237] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:49.594 [2024-07-15 16:05:18.487240] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24c52c0) on tqpair=0x2441ec0 00:23:49.594 [2024-07-15 16:05:18.487248] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:49.594 [2024-07-15 16:05:18.487252] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:49.594 [2024-07-15 16:05:18.487255] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2441ec0) 00:23:49.594 [2024-07-15 16:05:18.487260] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.594 [2024-07-15 16:05:18.487271] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24c52c0, cid 3, qid 0 00:23:49.594 [2024-07-15 16:05:18.487349] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:49.594 [2024-07-15 16:05:18.487354] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:49.594 [2024-07-15 16:05:18.487357] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:49.594 [2024-07-15 16:05:18.487360] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24c52c0) on tqpair=0x2441ec0 00:23:49.594 [2024-07-15 16:05:18.487368] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:49.594 [2024-07-15 16:05:18.487371] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:49.594 [2024-07-15 16:05:18.487374] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2441ec0) 00:23:49.594 [2024-07-15 16:05:18.487380] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC 
PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.594 [2024-07-15 16:05:18.487389] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24c52c0, cid 3, qid 0 00:23:49.594 [2024-07-15 16:05:18.487466] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:49.594 [2024-07-15 16:05:18.487471] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:49.595 [2024-07-15 16:05:18.487474] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:49.595 [2024-07-15 16:05:18.487479] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24c52c0) on tqpair=0x2441ec0 00:23:49.595 [2024-07-15 16:05:18.487487] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:49.595 [2024-07-15 16:05:18.487490] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:49.595 [2024-07-15 16:05:18.487493] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2441ec0) 00:23:49.595 [2024-07-15 16:05:18.487499] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.595 [2024-07-15 16:05:18.487508] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24c52c0, cid 3, qid 0 00:23:49.595 [2024-07-15 16:05:18.487584] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:49.595 [2024-07-15 16:05:18.487590] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:49.595 [2024-07-15 16:05:18.487592] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:49.595 [2024-07-15 16:05:18.487595] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24c52c0) on tqpair=0x2441ec0 00:23:49.595 [2024-07-15 16:05:18.487603] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:49.595 [2024-07-15 16:05:18.487607] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:49.595 [2024-07-15 16:05:18.487610] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2441ec0) 00:23:49.595 [2024-07-15 16:05:18.487615] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.595 [2024-07-15 16:05:18.487625] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24c52c0, cid 3, qid 0 00:23:49.595 [2024-07-15 16:05:18.487699] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:49.595 [2024-07-15 16:05:18.487705] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:49.595 [2024-07-15 16:05:18.487707] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:49.595 [2024-07-15 16:05:18.487711] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24c52c0) on tqpair=0x2441ec0 00:23:49.595 [2024-07-15 16:05:18.487719] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:49.595 [2024-07-15 16:05:18.487722] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:49.595 [2024-07-15 16:05:18.487725] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2441ec0) 00:23:49.595 [2024-07-15 16:05:18.487731] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.595 [2024-07-15 16:05:18.487739] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24c52c0, cid 3, qid 0 00:23:49.595 
[2024-07-15 16:05:18.487814] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:49.595 [2024-07-15 16:05:18.487820] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:49.595 [2024-07-15 16:05:18.487823] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:49.595 [2024-07-15 16:05:18.487826] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24c52c0) on tqpair=0x2441ec0 00:23:49.595 [2024-07-15 16:05:18.487834] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:49.595 [2024-07-15 16:05:18.487837] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:49.595 [2024-07-15 16:05:18.487840] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2441ec0) 00:23:49.595 [2024-07-15 16:05:18.487846] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.595 [2024-07-15 16:05:18.487855] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24c52c0, cid 3, qid 0 00:23:49.595 [2024-07-15 16:05:18.487933] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:49.595 [2024-07-15 16:05:18.487939] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:49.595 [2024-07-15 16:05:18.487941] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:49.595 [2024-07-15 16:05:18.487945] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24c52c0) on tqpair=0x2441ec0 00:23:49.595 [2024-07-15 16:05:18.487957] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:49.595 [2024-07-15 16:05:18.487960] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:49.595 [2024-07-15 16:05:18.487963] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2441ec0) 00:23:49.595 [2024-07-15 16:05:18.487969] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.595 [2024-07-15 16:05:18.487978] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24c52c0, cid 3, qid 0 00:23:49.595 [2024-07-15 16:05:18.488058] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:49.595 [2024-07-15 16:05:18.488064] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:49.595 [2024-07-15 16:05:18.488066] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:49.595 [2024-07-15 16:05:18.488070] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24c52c0) on tqpair=0x2441ec0 00:23:49.595 [2024-07-15 16:05:18.488077] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:49.595 [2024-07-15 16:05:18.488081] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:49.595 [2024-07-15 16:05:18.488084] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2441ec0) 00:23:49.595 [2024-07-15 16:05:18.488090] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.595 [2024-07-15 16:05:18.488098] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24c52c0, cid 3, qid 0 00:23:49.595 [2024-07-15 16:05:18.488173] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:49.595 [2024-07-15 16:05:18.488178] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 
00:23:49.595 [2024-07-15 16:05:18.488181] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:49.595 [2024-07-15 16:05:18.488184] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24c52c0) on tqpair=0x2441ec0 00:23:49.595 [2024-07-15 16:05:18.488193] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:49.595 [2024-07-15 16:05:18.488196] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:49.595 [2024-07-15 16:05:18.488199] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2441ec0) 00:23:49.595 [2024-07-15 16:05:18.488205] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.595 [2024-07-15 16:05:18.488213] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24c52c0, cid 3, qid 0 00:23:49.595 [2024-07-15 16:05:18.488292] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:49.595 [2024-07-15 16:05:18.488298] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:49.595 [2024-07-15 16:05:18.488300] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:49.595 [2024-07-15 16:05:18.488304] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24c52c0) on tqpair=0x2441ec0 00:23:49.595 [2024-07-15 16:05:18.488312] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:49.595 [2024-07-15 16:05:18.488315] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:49.595 [2024-07-15 16:05:18.488318] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2441ec0) 00:23:49.595 [2024-07-15 16:05:18.488324] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.595 [2024-07-15 16:05:18.488333] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24c52c0, cid 3, qid 0 00:23:49.595 [2024-07-15 16:05:18.488410] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:49.595 [2024-07-15 16:05:18.488416] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:49.595 [2024-07-15 16:05:18.488418] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:49.595 [2024-07-15 16:05:18.488422] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24c52c0) on tqpair=0x2441ec0 00:23:49.595 [2024-07-15 16:05:18.488430] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:49.595 [2024-07-15 16:05:18.488435] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:49.595 [2024-07-15 16:05:18.488438] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2441ec0) 00:23:49.595 [2024-07-15 16:05:18.488443] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.595 [2024-07-15 16:05:18.488453] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24c52c0, cid 3, qid 0 00:23:49.595 [2024-07-15 16:05:18.488529] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:49.595 [2024-07-15 16:05:18.488534] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:49.595 [2024-07-15 16:05:18.488537] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:49.595 [2024-07-15 16:05:18.488540] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete 
tcp_req(0x24c52c0) on tqpair=0x2441ec0 00:23:49.595 [2024-07-15 16:05:18.488548] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:49.595 [2024-07-15 16:05:18.488552] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:49.595 [2024-07-15 16:05:18.488555] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2441ec0) 00:23:49.595 [2024-07-15 16:05:18.488560] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.595 [2024-07-15 16:05:18.488570] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24c52c0, cid 3, qid 0 00:23:49.595 [2024-07-15 16:05:18.488644] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:49.595 [2024-07-15 16:05:18.488650] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:49.595 [2024-07-15 16:05:18.488653] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:49.596 [2024-07-15 16:05:18.488656] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24c52c0) on tqpair=0x2441ec0 00:23:49.596 [2024-07-15 16:05:18.488664] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:49.596 [2024-07-15 16:05:18.488667] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:49.596 [2024-07-15 16:05:18.488670] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2441ec0) 00:23:49.596 [2024-07-15 16:05:18.488676] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.596 [2024-07-15 16:05:18.488684] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24c52c0, cid 3, qid 0 00:23:49.596 [2024-07-15 16:05:18.488756] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:49.596 [2024-07-15 16:05:18.488761] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:49.596 [2024-07-15 16:05:18.488764] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:49.596 [2024-07-15 16:05:18.488767] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24c52c0) on tqpair=0x2441ec0 00:23:49.596 [2024-07-15 16:05:18.488775] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:49.596 [2024-07-15 16:05:18.488779] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:49.596 [2024-07-15 16:05:18.488782] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2441ec0) 00:23:49.596 [2024-07-15 16:05:18.488787] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.596 [2024-07-15 16:05:18.488796] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24c52c0, cid 3, qid 0 00:23:49.596 [2024-07-15 16:05:18.488880] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:49.596 [2024-07-15 16:05:18.488886] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:49.596 [2024-07-15 16:05:18.488889] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:49.596 [2024-07-15 16:05:18.488892] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24c52c0) on tqpair=0x2441ec0 00:23:49.596 [2024-07-15 16:05:18.488900] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:49.596 [2024-07-15 16:05:18.488903] nvme_tcp.c: 
967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:49.596 [2024-07-15 16:05:18.488908] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2441ec0) 00:23:49.596 [2024-07-15 16:05:18.488913] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.596 [2024-07-15 16:05:18.488923] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24c52c0, cid 3, qid 0 00:23:49.596 [2024-07-15 16:05:18.488997] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:49.596 [2024-07-15 16:05:18.489003] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:49.596 [2024-07-15 16:05:18.489005] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:49.596 [2024-07-15 16:05:18.489009] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24c52c0) on tqpair=0x2441ec0 00:23:49.596 [2024-07-15 16:05:18.489016] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:49.596 [2024-07-15 16:05:18.489020] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:49.596 [2024-07-15 16:05:18.489023] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2441ec0) 00:23:49.596 [2024-07-15 16:05:18.489028] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.596 [2024-07-15 16:05:18.489037] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24c52c0, cid 3, qid 0 00:23:49.596 [2024-07-15 16:05:18.489115] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:49.596 [2024-07-15 16:05:18.489120] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:49.596 [2024-07-15 16:05:18.489123] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:49.596 [2024-07-15 16:05:18.489126] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24c52c0) on tqpair=0x2441ec0 00:23:49.596 [2024-07-15 16:05:18.489134] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:49.596 [2024-07-15 16:05:18.489138] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:49.596 [2024-07-15 16:05:18.489141] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2441ec0) 00:23:49.596 [2024-07-15 16:05:18.489146] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.596 [2024-07-15 16:05:18.489155] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24c52c0, cid 3, qid 0 00:23:49.596 [2024-07-15 16:05:18.493231] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:49.596 [2024-07-15 16:05:18.493239] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:49.596 [2024-07-15 16:05:18.493242] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:49.596 [2024-07-15 16:05:18.493246] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24c52c0) on tqpair=0x2441ec0 00:23:49.596 [2024-07-15 16:05:18.493256] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:49.596 [2024-07-15 16:05:18.493259] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:49.596 [2024-07-15 16:05:18.493262] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2441ec0) 00:23:49.596 
[2024-07-15 16:05:18.493268] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.596 [2024-07-15 16:05:18.493280] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24c52c0, cid 3, qid 0 00:23:49.596 [2024-07-15 16:05:18.493440] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:49.596 [2024-07-15 16:05:18.493446] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:49.596 [2024-07-15 16:05:18.493449] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:49.596 [2024-07-15 16:05:18.493452] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24c52c0) on tqpair=0x2441ec0 00:23:49.596 [2024-07-15 16:05:18.493458] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 6 milliseconds 00:23:49.596 00:23:49.596 16:05:18 nvmf_tcp.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:23:49.857 [2024-07-15 16:05:18.530315] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 00:23:49.857 [2024-07-15 16:05:18.530347] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3846581 ] 00:23:49.857 EAL: No free 2048 kB hugepages reported on node 1 00:23:49.857 [2024-07-15 16:05:18.559506] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:23:49.857 [2024-07-15 16:05:18.559549] nvme_tcp.c:2338:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:23:49.857 [2024-07-15 16:05:18.559554] nvme_tcp.c:2342:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:23:49.857 [2024-07-15 16:05:18.559564] nvme_tcp.c:2360:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:23:49.857 [2024-07-15 16:05:18.559570] sock.c: 337:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:23:49.857 [2024-07-15 16:05:18.559892] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:23:49.857 [2024-07-15 16:05:18.559918] nvme_tcp.c:1555:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x1ec1ec0 0 00:23:49.857 [2024-07-15 16:05:18.573231] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:23:49.857 [2024-07-15 16:05:18.573245] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:23:49.857 [2024-07-15 16:05:18.573249] nvme_tcp.c:1601:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:23:49.858 [2024-07-15 16:05:18.573252] nvme_tcp.c:1602:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:23:49.858 [2024-07-15 16:05:18.573280] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:49.858 [2024-07-15 16:05:18.573284] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:49.858 [2024-07-15 16:05:18.573288] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1ec1ec0) 00:23:49.858 [2024-07-15 16:05:18.573300] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC 
CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:23:49.858 [2024-07-15 16:05:18.573315] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f44e40, cid 0, qid 0 00:23:49.858 [2024-07-15 16:05:18.581234] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:49.858 [2024-07-15 16:05:18.581242] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:49.858 [2024-07-15 16:05:18.581246] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:49.858 [2024-07-15 16:05:18.581249] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f44e40) on tqpair=0x1ec1ec0 00:23:49.858 [2024-07-15 16:05:18.581257] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:23:49.858 [2024-07-15 16:05:18.581263] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:23:49.858 [2024-07-15 16:05:18.581267] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:23:49.858 [2024-07-15 16:05:18.581278] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:49.858 [2024-07-15 16:05:18.581281] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:49.858 [2024-07-15 16:05:18.581285] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1ec1ec0) 00:23:49.858 [2024-07-15 16:05:18.581291] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.858 [2024-07-15 16:05:18.581303] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f44e40, cid 0, qid 0 00:23:49.858 [2024-07-15 16:05:18.581461] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:49.858 [2024-07-15 16:05:18.581467] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:49.858 [2024-07-15 16:05:18.581470] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:49.858 [2024-07-15 16:05:18.581474] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f44e40) on tqpair=0x1ec1ec0 00:23:49.858 [2024-07-15 16:05:18.581478] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:23:49.858 [2024-07-15 16:05:18.581485] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:23:49.858 [2024-07-15 16:05:18.581491] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:49.858 [2024-07-15 16:05:18.581494] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:49.858 [2024-07-15 16:05:18.581497] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1ec1ec0) 00:23:49.858 [2024-07-15 16:05:18.581503] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.858 [2024-07-15 16:05:18.581513] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f44e40, cid 0, qid 0 00:23:49.858 [2024-07-15 16:05:18.581590] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:49.858 [2024-07-15 16:05:18.581596] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:49.858 [2024-07-15 16:05:18.581599] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:49.858 [2024-07-15 
16:05:18.581602] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f44e40) on tqpair=0x1ec1ec0 00:23:49.858 [2024-07-15 16:05:18.581607] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:23:49.858 [2024-07-15 16:05:18.581613] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:23:49.858 [2024-07-15 16:05:18.581619] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:49.858 [2024-07-15 16:05:18.581622] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:49.858 [2024-07-15 16:05:18.581625] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1ec1ec0) 00:23:49.858 [2024-07-15 16:05:18.581631] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.858 [2024-07-15 16:05:18.581641] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f44e40, cid 0, qid 0 00:23:49.858 [2024-07-15 16:05:18.581710] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:49.858 [2024-07-15 16:05:18.581716] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:49.858 [2024-07-15 16:05:18.581719] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:49.858 [2024-07-15 16:05:18.581722] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f44e40) on tqpair=0x1ec1ec0 00:23:49.858 [2024-07-15 16:05:18.581727] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:23:49.858 [2024-07-15 16:05:18.581735] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:49.858 [2024-07-15 16:05:18.581738] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:49.858 [2024-07-15 16:05:18.581741] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1ec1ec0) 00:23:49.858 [2024-07-15 16:05:18.581747] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.858 [2024-07-15 16:05:18.581756] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f44e40, cid 0, qid 0 00:23:49.858 [2024-07-15 16:05:18.581825] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:49.858 [2024-07-15 16:05:18.581831] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:49.858 [2024-07-15 16:05:18.581834] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:49.858 [2024-07-15 16:05:18.581840] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f44e40) on tqpair=0x1ec1ec0 00:23:49.858 [2024-07-15 16:05:18.581844] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:23:49.858 [2024-07-15 16:05:18.581848] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:23:49.858 [2024-07-15 16:05:18.581855] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:23:49.858 [2024-07-15 16:05:18.581959] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 
00:23:49.858 [2024-07-15 16:05:18.581963] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:23:49.858 [2024-07-15 16:05:18.581970] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:49.858 [2024-07-15 16:05:18.581973] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:49.858 [2024-07-15 16:05:18.581976] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1ec1ec0) 00:23:49.858 [2024-07-15 16:05:18.581982] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.858 [2024-07-15 16:05:18.581992] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f44e40, cid 0, qid 0 00:23:49.858 [2024-07-15 16:05:18.582064] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:49.858 [2024-07-15 16:05:18.582070] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:49.858 [2024-07-15 16:05:18.582073] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:49.858 [2024-07-15 16:05:18.582076] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f44e40) on tqpair=0x1ec1ec0 00:23:49.858 [2024-07-15 16:05:18.582080] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:23:49.858 [2024-07-15 16:05:18.582088] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:49.858 [2024-07-15 16:05:18.582092] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:49.858 [2024-07-15 16:05:18.582095] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1ec1ec0) 00:23:49.858 [2024-07-15 16:05:18.582101] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.858 [2024-07-15 16:05:18.582111] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f44e40, cid 0, qid 0 00:23:49.858 [2024-07-15 16:05:18.582182] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:49.858 [2024-07-15 16:05:18.582188] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:49.858 [2024-07-15 16:05:18.582191] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:49.858 [2024-07-15 16:05:18.582194] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f44e40) on tqpair=0x1ec1ec0 00:23:49.858 [2024-07-15 16:05:18.582198] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:23:49.858 [2024-07-15 16:05:18.582202] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:23:49.858 [2024-07-15 16:05:18.582209] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:23:49.858 [2024-07-15 16:05:18.582220] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:23:49.858 [2024-07-15 16:05:18.582234] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:49.858 [2024-07-15 16:05:18.582238] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd 
cid=0 on tqpair(0x1ec1ec0) 00:23:49.858 [2024-07-15 16:05:18.582243] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.858 [2024-07-15 16:05:18.582255] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f44e40, cid 0, qid 0 00:23:49.858 [2024-07-15 16:05:18.582365] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:49.858 [2024-07-15 16:05:18.582371] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:49.858 [2024-07-15 16:05:18.582374] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:49.858 [2024-07-15 16:05:18.582378] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1ec1ec0): datao=0, datal=4096, cccid=0 00:23:49.858 [2024-07-15 16:05:18.582381] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1f44e40) on tqpair(0x1ec1ec0): expected_datao=0, payload_size=4096 00:23:49.858 [2024-07-15 16:05:18.582385] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:49.858 [2024-07-15 16:05:18.582408] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:49.858 [2024-07-15 16:05:18.582412] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:49.858 [2024-07-15 16:05:18.624371] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:49.858 [2024-07-15 16:05:18.624382] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:49.858 [2024-07-15 16:05:18.624386] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:49.858 [2024-07-15 16:05:18.624389] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f44e40) on tqpair=0x1ec1ec0 00:23:49.858 [2024-07-15 16:05:18.624397] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:23:49.858 [2024-07-15 16:05:18.624404] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:23:49.858 [2024-07-15 16:05:18.624408] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:23:49.858 [2024-07-15 16:05:18.624411] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:23:49.858 [2024-07-15 16:05:18.624415] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:23:49.858 [2024-07-15 16:05:18.624420] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:23:49.858 [2024-07-15 16:05:18.624428] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:23:49.858 [2024-07-15 16:05:18.624435] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:49.859 [2024-07-15 16:05:18.624438] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:49.859 [2024-07-15 16:05:18.624441] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1ec1ec0) 00:23:49.859 [2024-07-15 16:05:18.624448] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:23:49.859 [2024-07-15 16:05:18.624459] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f44e40, cid 
0, qid 0 00:23:49.859 [2024-07-15 16:05:18.624537] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:49.859 [2024-07-15 16:05:18.624543] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:49.859 [2024-07-15 16:05:18.624546] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:49.859 [2024-07-15 16:05:18.624549] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f44e40) on tqpair=0x1ec1ec0 00:23:49.859 [2024-07-15 16:05:18.624555] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:49.859 [2024-07-15 16:05:18.624558] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:49.859 [2024-07-15 16:05:18.624561] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1ec1ec0) 00:23:49.859 [2024-07-15 16:05:18.624566] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:49.859 [2024-07-15 16:05:18.624571] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:49.859 [2024-07-15 16:05:18.624575] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:49.859 [2024-07-15 16:05:18.624580] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x1ec1ec0) 00:23:49.859 [2024-07-15 16:05:18.624585] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:49.859 [2024-07-15 16:05:18.624590] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:49.859 [2024-07-15 16:05:18.624593] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:49.859 [2024-07-15 16:05:18.624596] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x1ec1ec0) 00:23:49.859 [2024-07-15 16:05:18.624601] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:49.859 [2024-07-15 16:05:18.624606] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:49.859 [2024-07-15 16:05:18.624609] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:49.859 [2024-07-15 16:05:18.624612] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ec1ec0) 00:23:49.859 [2024-07-15 16:05:18.624617] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:49.859 [2024-07-15 16:05:18.624621] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:23:49.859 [2024-07-15 16:05:18.624631] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:23:49.859 [2024-07-15 16:05:18.624637] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:49.859 [2024-07-15 16:05:18.624640] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1ec1ec0) 00:23:49.859 [2024-07-15 16:05:18.624646] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.859 [2024-07-15 16:05:18.624657] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f44e40, cid 0, qid 0 00:23:49.859 [2024-07-15 16:05:18.624661] 
nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f44fc0, cid 1, qid 0 00:23:49.859 [2024-07-15 16:05:18.624665] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f45140, cid 2, qid 0 00:23:49.859 [2024-07-15 16:05:18.624669] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f452c0, cid 3, qid 0 00:23:49.859 [2024-07-15 16:05:18.624673] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f45440, cid 4, qid 0 00:23:49.859 [2024-07-15 16:05:18.624783] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:49.859 [2024-07-15 16:05:18.624790] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:49.859 [2024-07-15 16:05:18.624793] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:49.859 [2024-07-15 16:05:18.624796] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f45440) on tqpair=0x1ec1ec0 00:23:49.859 [2024-07-15 16:05:18.624800] nvme_ctrlr.c:3022:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:23:49.859 [2024-07-15 16:05:18.624804] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 00:23:49.859 [2024-07-15 16:05:18.624811] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:23:49.859 [2024-07-15 16:05:18.624817] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:23:49.859 [2024-07-15 16:05:18.624822] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:49.859 [2024-07-15 16:05:18.624826] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:49.859 [2024-07-15 16:05:18.624829] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1ec1ec0) 00:23:49.859 [2024-07-15 16:05:18.624834] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:23:49.859 [2024-07-15 16:05:18.624846] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f45440, cid 4, qid 0 00:23:49.859 [2024-07-15 16:05:18.624923] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:49.859 [2024-07-15 16:05:18.624928] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:49.859 [2024-07-15 16:05:18.624931] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:49.859 [2024-07-15 16:05:18.624934] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f45440) on tqpair=0x1ec1ec0 00:23:49.859 [2024-07-15 16:05:18.624984] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:23:49.859 [2024-07-15 16:05:18.624994] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:23:49.859 [2024-07-15 16:05:18.625000] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:49.859 [2024-07-15 16:05:18.625004] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1ec1ec0) 00:23:49.859 [2024-07-15 16:05:18.625009] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 
cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.859 [2024-07-15 16:05:18.625019] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f45440, cid 4, qid 0 00:23:49.859 [2024-07-15 16:05:18.625105] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:49.859 [2024-07-15 16:05:18.625111] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:49.859 [2024-07-15 16:05:18.625114] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:49.859 [2024-07-15 16:05:18.625117] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1ec1ec0): datao=0, datal=4096, cccid=4 00:23:49.859 [2024-07-15 16:05:18.625121] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1f45440) on tqpair(0x1ec1ec0): expected_datao=0, payload_size=4096 00:23:49.859 [2024-07-15 16:05:18.625124] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:49.859 [2024-07-15 16:05:18.625131] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:49.859 [2024-07-15 16:05:18.625134] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:49.859 [2024-07-15 16:05:18.625187] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:49.859 [2024-07-15 16:05:18.625193] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:49.859 [2024-07-15 16:05:18.625198] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:49.859 [2024-07-15 16:05:18.625202] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f45440) on tqpair=0x1ec1ec0 00:23:49.859 [2024-07-15 16:05:18.625210] nvme_ctrlr.c:4693:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:23:49.859 [2024-07-15 16:05:18.625220] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:23:49.859 [2024-07-15 16:05:18.629235] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:23:49.859 [2024-07-15 16:05:18.629245] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:49.859 [2024-07-15 16:05:18.629248] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1ec1ec0) 00:23:49.859 [2024-07-15 16:05:18.629254] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.859 [2024-07-15 16:05:18.629266] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f45440, cid 4, qid 0 00:23:49.859 [2024-07-15 16:05:18.629426] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:49.859 [2024-07-15 16:05:18.629432] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:49.859 [2024-07-15 16:05:18.629435] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:49.859 [2024-07-15 16:05:18.629438] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1ec1ec0): datao=0, datal=4096, cccid=4 00:23:49.859 [2024-07-15 16:05:18.629445] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1f45440) on tqpair(0x1ec1ec0): expected_datao=0, payload_size=4096 00:23:49.859 [2024-07-15 16:05:18.629449] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:49.859 [2024-07-15 16:05:18.629455] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: 
*DEBUG*: enter 00:23:49.859 [2024-07-15 16:05:18.629458] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:49.859 [2024-07-15 16:05:18.629517] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:49.859 [2024-07-15 16:05:18.629522] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:49.859 [2024-07-15 16:05:18.629525] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:49.859 [2024-07-15 16:05:18.629529] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f45440) on tqpair=0x1ec1ec0 00:23:49.859 [2024-07-15 16:05:18.629541] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:23:49.859 [2024-07-15 16:05:18.629551] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:23:49.859 [2024-07-15 16:05:18.629557] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:49.859 [2024-07-15 16:05:18.629561] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1ec1ec0) 00:23:49.859 [2024-07-15 16:05:18.629566] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.859 [2024-07-15 16:05:18.629576] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f45440, cid 4, qid 0 00:23:49.859 [2024-07-15 16:05:18.629661] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:49.859 [2024-07-15 16:05:18.629667] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:49.859 [2024-07-15 16:05:18.629670] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:49.859 [2024-07-15 16:05:18.629673] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1ec1ec0): datao=0, datal=4096, cccid=4 00:23:49.859 [2024-07-15 16:05:18.629677] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1f45440) on tqpair(0x1ec1ec0): expected_datao=0, payload_size=4096 00:23:49.859 [2024-07-15 16:05:18.629680] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:49.859 [2024-07-15 16:05:18.629686] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:49.859 [2024-07-15 16:05:18.629689] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:49.859 [2024-07-15 16:05:18.629733] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:49.859 [2024-07-15 16:05:18.629738] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:49.859 [2024-07-15 16:05:18.629741] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:49.859 [2024-07-15 16:05:18.629744] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f45440) on tqpair=0x1ec1ec0 00:23:49.859 [2024-07-15 16:05:18.629750] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:23:49.860 [2024-07-15 16:05:18.629758] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:23:49.860 [2024-07-15 16:05:18.629767] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:23:49.860 [2024-07-15 
16:05:18.629773] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host behavior support feature (timeout 30000 ms) 00:23:49.860 [2024-07-15 16:05:18.629777] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:23:49.860 [2024-07-15 16:05:18.629782] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:23:49.860 [2024-07-15 16:05:18.629788] nvme_ctrlr.c:3110:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:23:49.860 [2024-07-15 16:05:18.629792] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:23:49.860 [2024-07-15 16:05:18.629796] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:23:49.860 [2024-07-15 16:05:18.629808] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:49.860 [2024-07-15 16:05:18.629812] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1ec1ec0) 00:23:49.860 [2024-07-15 16:05:18.629817] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.860 [2024-07-15 16:05:18.629823] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:49.860 [2024-07-15 16:05:18.629826] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:49.860 [2024-07-15 16:05:18.629830] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1ec1ec0) 00:23:49.860 [2024-07-15 16:05:18.629835] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:23:49.860 [2024-07-15 16:05:18.629848] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f45440, cid 4, qid 0 00:23:49.860 [2024-07-15 16:05:18.629853] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f455c0, cid 5, qid 0 00:23:49.860 [2024-07-15 16:05:18.629942] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:49.860 [2024-07-15 16:05:18.629948] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:49.860 [2024-07-15 16:05:18.629951] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:49.860 [2024-07-15 16:05:18.629954] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f45440) on tqpair=0x1ec1ec0 00:23:49.860 [2024-07-15 16:05:18.629960] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:49.860 [2024-07-15 16:05:18.629965] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:49.860 [2024-07-15 16:05:18.629968] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:49.860 [2024-07-15 16:05:18.629971] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f455c0) on tqpair=0x1ec1ec0 00:23:49.860 [2024-07-15 16:05:18.629979] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:49.860 [2024-07-15 16:05:18.629983] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1ec1ec0) 00:23:49.860 [2024-07-15 16:05:18.629988] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 
cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.860 [2024-07-15 16:05:18.629997] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f455c0, cid 5, qid 0 00:23:49.860 [2024-07-15 16:05:18.630073] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:49.860 [2024-07-15 16:05:18.630078] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:49.860 [2024-07-15 16:05:18.630082] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:49.860 [2024-07-15 16:05:18.630085] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f455c0) on tqpair=0x1ec1ec0 00:23:49.860 [2024-07-15 16:05:18.630093] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:49.860 [2024-07-15 16:05:18.630096] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1ec1ec0) 00:23:49.860 [2024-07-15 16:05:18.630107] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.860 [2024-07-15 16:05:18.630116] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f455c0, cid 5, qid 0 00:23:49.860 [2024-07-15 16:05:18.630194] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:49.860 [2024-07-15 16:05:18.630199] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:49.860 [2024-07-15 16:05:18.630202] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:49.860 [2024-07-15 16:05:18.630208] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f455c0) on tqpair=0x1ec1ec0 00:23:49.860 [2024-07-15 16:05:18.630215] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:49.860 [2024-07-15 16:05:18.630219] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1ec1ec0) 00:23:49.860 [2024-07-15 16:05:18.630229] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.860 [2024-07-15 16:05:18.630239] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f455c0, cid 5, qid 0 00:23:49.860 [2024-07-15 16:05:18.630312] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:49.860 [2024-07-15 16:05:18.630318] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:49.860 [2024-07-15 16:05:18.630321] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:49.860 [2024-07-15 16:05:18.630324] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f455c0) on tqpair=0x1ec1ec0 00:23:49.860 [2024-07-15 16:05:18.630336] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:49.860 [2024-07-15 16:05:18.630340] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1ec1ec0) 00:23:49.860 [2024-07-15 16:05:18.630346] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.860 [2024-07-15 16:05:18.630352] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:49.860 [2024-07-15 16:05:18.630355] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1ec1ec0) 00:23:49.860 [2024-07-15 16:05:18.630360] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 
nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.860 [2024-07-15 16:05:18.630366] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:49.860 [2024-07-15 16:05:18.630369] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x1ec1ec0) 00:23:49.860 [2024-07-15 16:05:18.630374] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.860 [2024-07-15 16:05:18.630381] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:49.860 [2024-07-15 16:05:18.630384] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x1ec1ec0) 00:23:49.860 [2024-07-15 16:05:18.630389] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.860 [2024-07-15 16:05:18.630399] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f455c0, cid 5, qid 0 00:23:49.860 [2024-07-15 16:05:18.630404] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f45440, cid 4, qid 0 00:23:49.860 [2024-07-15 16:05:18.630408] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f45740, cid 6, qid 0 00:23:49.860 [2024-07-15 16:05:18.630412] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f458c0, cid 7, qid 0 00:23:49.860 [2024-07-15 16:05:18.630760] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:49.860 [2024-07-15 16:05:18.630770] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:49.860 [2024-07-15 16:05:18.630773] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:49.860 [2024-07-15 16:05:18.630776] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1ec1ec0): datao=0, datal=8192, cccid=5 00:23:49.860 [2024-07-15 16:05:18.630820] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1f455c0) on tqpair(0x1ec1ec0): expected_datao=0, payload_size=8192 00:23:49.860 [2024-07-15 16:05:18.630825] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:49.860 [2024-07-15 16:05:18.630832] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:49.860 [2024-07-15 16:05:18.630835] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:49.860 [2024-07-15 16:05:18.630843] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:49.860 [2024-07-15 16:05:18.630848] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:49.860 [2024-07-15 16:05:18.630851] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:49.860 [2024-07-15 16:05:18.630854] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1ec1ec0): datao=0, datal=512, cccid=4 00:23:49.860 [2024-07-15 16:05:18.630858] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1f45440) on tqpair(0x1ec1ec0): expected_datao=0, payload_size=512 00:23:49.860 [2024-07-15 16:05:18.630862] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:49.860 [2024-07-15 16:05:18.630867] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:49.860 [2024-07-15 16:05:18.630870] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:49.860 [2024-07-15 16:05:18.630875] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: 
*DEBUG*: pdu type = 7 00:23:49.860 [2024-07-15 16:05:18.630880] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:49.860 [2024-07-15 16:05:18.630883] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:49.860 [2024-07-15 16:05:18.630886] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1ec1ec0): datao=0, datal=512, cccid=6 00:23:49.860 [2024-07-15 16:05:18.630890] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1f45740) on tqpair(0x1ec1ec0): expected_datao=0, payload_size=512 00:23:49.860 [2024-07-15 16:05:18.630893] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:49.860 [2024-07-15 16:05:18.630899] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:49.860 [2024-07-15 16:05:18.630902] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:49.860 [2024-07-15 16:05:18.630907] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:49.860 [2024-07-15 16:05:18.630912] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:49.860 [2024-07-15 16:05:18.630915] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:49.860 [2024-07-15 16:05:18.630918] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1ec1ec0): datao=0, datal=4096, cccid=7 00:23:49.860 [2024-07-15 16:05:18.630922] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1f458c0) on tqpair(0x1ec1ec0): expected_datao=0, payload_size=4096 00:23:49.860 [2024-07-15 16:05:18.630925] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:49.860 [2024-07-15 16:05:18.630931] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:49.860 [2024-07-15 16:05:18.630934] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:49.860 [2024-07-15 16:05:18.630941] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:49.860 [2024-07-15 16:05:18.630946] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:49.860 [2024-07-15 16:05:18.630949] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:49.860 [2024-07-15 16:05:18.630953] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f455c0) on tqpair=0x1ec1ec0 00:23:49.860 [2024-07-15 16:05:18.630964] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:49.860 [2024-07-15 16:05:18.630969] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:49.860 [2024-07-15 16:05:18.630972] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:49.860 [2024-07-15 16:05:18.630975] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f45440) on tqpair=0x1ec1ec0 00:23:49.860 [2024-07-15 16:05:18.630984] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:49.860 [2024-07-15 16:05:18.630989] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:49.860 [2024-07-15 16:05:18.630992] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:49.861 [2024-07-15 16:05:18.630995] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f45740) on tqpair=0x1ec1ec0 00:23:49.861 [2024-07-15 16:05:18.631001] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:49.861 [2024-07-15 16:05:18.631006] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:49.861 [2024-07-15 16:05:18.631009] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 
00:23:49.861 [2024-07-15 16:05:18.631014] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f458c0) on tqpair=0x1ec1ec0 ===================================================== 00:23:49.861 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:49.861 ===================================================== 00:23:49.861 Controller Capabilities/Features 00:23:49.861 ================================ 00:23:49.861 Vendor ID: 8086 00:23:49.861 Subsystem Vendor ID: 8086 00:23:49.861 Serial Number: SPDK00000000000001 00:23:49.861 Model Number: SPDK bdev Controller 00:23:49.861 Firmware Version: 24.09 00:23:49.861 Recommended Arb Burst: 6 00:23:49.861 IEEE OUI Identifier: e4 d2 5c 00:23:49.861 Multi-path I/O 00:23:49.861 May have multiple subsystem ports: Yes 00:23:49.861 May have multiple controllers: Yes 00:23:49.861 Associated with SR-IOV VF: No 00:23:49.861 Max Data Transfer Size: 131072 00:23:49.861 Max Number of Namespaces: 32 00:23:49.861 Max Number of I/O Queues: 127 00:23:49.861 NVMe Specification Version (VS): 1.3 00:23:49.861 NVMe Specification Version (Identify): 1.3 00:23:49.861 Maximum Queue Entries: 128 00:23:49.861 Contiguous Queues Required: Yes 00:23:49.861 Arbitration Mechanisms Supported 00:23:49.861 Weighted Round Robin: Not Supported 00:23:49.861 Vendor Specific: Not Supported 00:23:49.861 Reset Timeout: 15000 ms 00:23:49.861 Doorbell Stride: 4 bytes 00:23:49.861 NVM Subsystem Reset: Not Supported 00:23:49.861 Command Sets Supported 00:23:49.861 NVM Command Set: Supported 00:23:49.861 Boot Partition: Not Supported 00:23:49.861 Memory Page Size Minimum: 4096 bytes 00:23:49.861 Memory Page Size Maximum: 4096 bytes 00:23:49.861 Persistent Memory Region: Not Supported 00:23:49.861 Optional Asynchronous Events Supported 00:23:49.861 Namespace Attribute Notices: Supported 00:23:49.861 Firmware Activation Notices: Not Supported 00:23:49.861 ANA Change Notices: Not Supported 00:23:49.861 PLE Aggregate Log Change Notices: Not Supported 00:23:49.861 LBA Status Info Alert Notices: Not Supported 00:23:49.861 EGE Aggregate Log Change Notices: Not Supported 00:23:49.861 Normal NVM Subsystem Shutdown event: Not Supported 00:23:49.861 Zone Descriptor Change Notices: Not Supported 00:23:49.861 Discovery Log Change Notices: Not Supported 00:23:49.861 Controller Attributes 00:23:49.861 128-bit Host Identifier: Supported 00:23:49.861 Non-Operational Permissive Mode: Not Supported 00:23:49.861 NVM Sets: Not Supported 00:23:49.861 Read Recovery Levels: Not Supported 00:23:49.861 Endurance Groups: Not Supported 00:23:49.861 Predictable Latency Mode: Not Supported 00:23:49.861 Traffic Based Keep Alive: Not Supported 00:23:49.861 Namespace Granularity: Not Supported 00:23:49.861 SQ Associations: Not Supported 00:23:49.861 UUID List: Not Supported 00:23:49.861 Multi-Domain Subsystem: Not Supported 00:23:49.861 Fixed Capacity Management: Not Supported 00:23:49.861 Variable Capacity Management: Not Supported 00:23:49.861 Delete Endurance Group: Not Supported 00:23:49.861 Delete NVM Set: Not Supported 00:23:49.861 Extended LBA Formats Supported: Not Supported 00:23:49.861 Flexible Data Placement Supported: Not Supported 00:23:49.861 00:23:49.861 Controller Memory Buffer Support 00:23:49.861 ================================ 00:23:49.861 Supported: No 00:23:49.861 00:23:49.861 Persistent Memory Region Support 00:23:49.861 ================================ 00:23:49.861 Supported: No 00:23:49.861 00:23:49.861 Admin Command Set Attributes 00:23:49.861 
============================ 00:23:49.861 Security Send/Receive: Not Supported 00:23:49.861 Format NVM: Not Supported 00:23:49.861 Firmware Activate/Download: Not Supported 00:23:49.861 Namespace Management: Not Supported 00:23:49.861 Device Self-Test: Not Supported 00:23:49.861 Directives: Not Supported 00:23:49.861 NVMe-MI: Not Supported 00:23:49.861 Virtualization Management: Not Supported 00:23:49.861 Doorbell Buffer Config: Not Supported 00:23:49.861 Get LBA Status Capability: Not Supported 00:23:49.861 Command & Feature Lockdown Capability: Not Supported 00:23:49.861 Abort Command Limit: 4 00:23:49.861 Async Event Request Limit: 4 00:23:49.861 Number of Firmware Slots: N/A 00:23:49.861 Firmware Slot 1 Read-Only: N/A 00:23:49.861 Firmware Activation Without Reset: N/A 00:23:49.861 Multiple Update Detection Support: N/A 00:23:49.861 Firmware Update Granularity: No Information Provided 00:23:49.861 Per-Namespace SMART Log: No 00:23:49.861 Asymmetric Namespace Access Log Page: Not Supported 00:23:49.861 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:23:49.861 Command Effects Log Page: Supported 00:23:49.861 Get Log Page Extended Data: Supported 00:23:49.861 Telemetry Log Pages: Not Supported 00:23:49.861 Persistent Event Log Pages: Not Supported 00:23:49.861 Supported Log Pages Log Page: May Support 00:23:49.861 Commands Supported & Effects Log Page: Not Supported 00:23:49.861 Feature Identifiers & Effects Log Page: May Support 00:23:49.861 NVMe-MI Commands & Effects Log Page: May Support 00:23:49.861 Data Area 4 for Telemetry Log: Not Supported 00:23:49.861 Error Log Page Entries Supported: 128 00:23:49.861 Keep Alive: Supported 00:23:49.861 Keep Alive Granularity: 10000 ms 00:23:49.861 00:23:49.861 NVM Command Set Attributes 00:23:49.861 ========================== 00:23:49.861 Submission Queue Entry Size 00:23:49.861 Max: 64 00:23:49.861 Min: 64 00:23:49.861 Completion Queue Entry Size 00:23:49.861 Max: 16 00:23:49.861 Min: 16 00:23:49.861 Number of Namespaces: 32 00:23:49.861 Compare Command: Supported 00:23:49.861 Write Uncorrectable Command: Not Supported 00:23:49.861 Dataset Management Command: Supported 00:23:49.861 Write Zeroes Command: Supported 00:23:49.861 Set Features Save Field: Not Supported 00:23:49.861 Reservations: Supported 00:23:49.861 Timestamp: Not Supported 00:23:49.861 Copy: Supported 00:23:49.861 Volatile Write Cache: Present 00:23:49.861 Atomic Write Unit (Normal): 1 00:23:49.861 Atomic Write Unit (PFail): 1 00:23:49.861 Atomic Compare & Write Unit: 1 00:23:49.861 Fused Compare & Write: Supported 00:23:49.861 Scatter-Gather List 00:23:49.861 SGL Command Set: Supported 00:23:49.861 SGL Keyed: Supported 00:23:49.861 SGL Bit Bucket Descriptor: Not Supported 00:23:49.861 SGL Metadata Pointer: Not Supported 00:23:49.861 Oversized SGL: Not Supported 00:23:49.861 SGL Metadata Address: Not Supported 00:23:49.861 SGL Offset: Supported 00:23:49.861 Transport SGL Data Block: Not Supported 00:23:49.861 Replay Protected Memory Block: Not Supported 00:23:49.861 00:23:49.861 Firmware Slot Information 00:23:49.861 ========================= 00:23:49.861 Active slot: 1 00:23:49.861 Slot 1 Firmware Revision: 24.09 00:23:49.861 00:23:49.861 00:23:49.861 Commands Supported and Effects 00:23:49.861 ============================== 00:23:49.861 Admin Commands 00:23:49.861 -------------- 00:23:49.861 Get Log Page (02h): Supported 00:23:49.861 Identify (06h): Supported 00:23:49.861 Abort (08h): Supported 00:23:49.861 Set Features (09h): Supported 00:23:49.861 Get Features (0Ah): Supported 
00:23:49.861 Asynchronous Event Request (0Ch): Supported 00:23:49.861 Keep Alive (18h): Supported 00:23:49.861 I/O Commands 00:23:49.861 ------------ 00:23:49.861 Flush (00h): Supported LBA-Change 00:23:49.861 Write (01h): Supported LBA-Change 00:23:49.861 Read (02h): Supported 00:23:49.861 Compare (05h): Supported 00:23:49.861 Write Zeroes (08h): Supported LBA-Change 00:23:49.861 Dataset Management (09h): Supported LBA-Change 00:23:49.861 Copy (19h): Supported LBA-Change 00:23:49.861 00:23:49.861 Error Log 00:23:49.861 ========= 00:23:49.861 00:23:49.861 Arbitration 00:23:49.861 =========== 00:23:49.861 Arbitration Burst: 1 00:23:49.861 00:23:49.861 Power Management 00:23:49.861 ================ 00:23:49.861 Number of Power States: 1 00:23:49.861 Current Power State: Power State #0 00:23:49.861 Power State #0: 00:23:49.861 Max Power: 0.00 W 00:23:49.861 Non-Operational State: Operational 00:23:49.861 Entry Latency: Not Reported 00:23:49.861 Exit Latency: Not Reported 00:23:49.861 Relative Read Throughput: 0 00:23:49.862 Relative Read Latency: 0 00:23:49.862 Relative Write Throughput: 0 00:23:49.862 Relative Write Latency: 0 00:23:49.862 Idle Power: Not Reported 00:23:49.862 Active Power: Not Reported 00:23:49.862 Non-Operational Permissive Mode: Not Supported 00:23:49.862 00:23:49.862 Health Information 00:23:49.862 ================== 00:23:49.862 Critical Warnings: 00:23:49.862 Available Spare Space: OK 00:23:49.862 Temperature: OK 00:23:49.862 Device Reliability: OK 00:23:49.862 Read Only: No 00:23:49.862 Volatile Memory Backup: OK 00:23:49.862 Current Temperature: 0 Kelvin (-273 Celsius) 00:23:49.862 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:23:49.862 Available Spare: 0% 00:23:49.862 Available Spare Threshold: 0% 00:23:49.862 Life Percentage Used:[2024-07-15 16:05:18.631099] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:49.862 [2024-07-15 16:05:18.631103] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x1ec1ec0) 00:23:49.862 [2024-07-15 16:05:18.631110] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.862 [2024-07-15 16:05:18.631124] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f458c0, cid 7, qid 0 00:23:49.862 [2024-07-15 16:05:18.631282] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:49.862 [2024-07-15 16:05:18.631289] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:49.862 [2024-07-15 16:05:18.631292] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:49.862 [2024-07-15 16:05:18.631295] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f458c0) on tqpair=0x1ec1ec0 00:23:49.862 [2024-07-15 16:05:18.631324] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:23:49.862 [2024-07-15 16:05:18.631333] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f44e40) on tqpair=0x1ec1ec0 00:23:49.862 [2024-07-15 16:05:18.631339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:49.862 [2024-07-15 16:05:18.631343] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f44fc0) on tqpair=0x1ec1ec0 00:23:49.862 [2024-07-15 16:05:18.631348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0
00:23:49.862 [2024-07-15 16:05:18.631352] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f45140) on tqpair=0x1ec1ec0
00:23:49.862 [2024-07-15 16:05:18.631356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:49.862 [2024-07-15 16:05:18.631360] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f452c0) on tqpair=0x1ec1ec0
00:23:49.862 [2024-07-15 16:05:18.631364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:49.862 [2024-07-15 16:05:18.631384] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:49.862 [2024-07-15 16:05:18.631500] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:49.862 [2024-07-15 16:05:18.631620] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us
00:23:49.862 [2024-07-15 16:05:18.631624] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms
00:23:49.862 [16:05:18.631631-16:05:18.637280: ~12 near-identical FABRIC PROPERTY GET polling cycles on tqpair=0x1ec1ec0, each nvme_tcp_build_contig_request -> nvme_tcp_qpair_capsule_cmd_send (capsule_cmd cid=3) -> nvme_tcp_pdu_ch_handle pdu type = 5 -> nvme_tcp_capsule_resp_hdr_handle -> nvme_tcp_req_complete(0x1f452c0)]
00:23:49.863 [2024-07-15 16:05:18.637443] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 5 milliseconds
00:23:49.863 0%
00:23:49.863 Data Units Read: 0
00:23:49.863 Data Units Written: 0
00:23:49.863 Host Read Commands: 0
00:23:49.863 Host Write Commands: 0
00:23:49.863 Controller Busy Time: 0 minutes
00:23:49.863 Power Cycles: 0
00:23:49.863 Power On Hours: 0 hours
00:23:49.863 Unsafe Shutdowns: 0
00:23:49.863 Unrecoverable Media Errors: 0
00:23:49.863 Lifetime Error Log Entries: 0
00:23:49.863 Warning Temperature Time: 0 minutes
00:23:49.863 Critical Temperature Time: 0 minutes
00:23:49.863
00:23:49.863 Number of Queues
00:23:49.863 ================
00:23:49.863 Number of I/O Submission Queues: 127
00:23:49.863 Number of I/O Completion Queues: 127
00:23:49.863
00:23:49.863 Active Namespaces
00:23:49.863 =================
00:23:49.863 Namespace ID:1
00:23:49.863 Error Recovery Timeout: Unlimited
00:23:49.863 Command Set Identifier: NVM (00h)
00:23:49.863 Deallocate: Supported
00:23:49.863 Deallocated/Unwritten Error: Not Supported
00:23:49.863 Deallocated Read Value: Unknown
00:23:49.863 Deallocate in Write Zeroes: Not Supported
00:23:49.863 Deallocated Guard Field: 0xFFFF
00:23:49.864 Flush: Supported
00:23:49.864 Reservation: Supported
00:23:49.864 Namespace Sharing Capabilities: Multiple Controllers
00:23:49.864 Size (in LBAs): 131072 (0GiB)
00:23:49.864 Capacity (in LBAs): 131072 (0GiB)
00:23:49.864 Utilization (in LBAs): 131072 (0GiB)
00:23:49.864 NGUID: ABCDEF0123456789ABCDEF0123456789
00:23:49.864 EUI64: ABCDEF0123456789
00:23:49.864 UUID: 80861deb-522d-44fe-b2a3-c9db774e9393
00:23:49.864 Thin Provisioning: Not Supported
00:23:49.864 Per-NS Atomic Units: Yes
00:23:49.864 Atomic Boundary Size (Normal): 0
00:23:49.864 Atomic Boundary Size (PFail): 0
00:23:49.864 Atomic Boundary Offset: 0
00:23:49.864 Maximum Single Source Range Length: 65535
00:23:49.864 Maximum Copy Length: 65535
00:23:49.864 Maximum Source Range Count: 1
00:23:49.864 NGUID/EUI64 Never Reused: No
00:23:49.864 Namespace Write Protected: No
00:23:49.864 Number of LBA Formats: 1
00:23:49.864 Current LBA Format: LBA Format #00
00:23:49.864 LBA Format #00: Data Size: 512 Metadata Size: 0
00:23:49.864
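The identify output above is printed by SPDK's host library from identify.sh. For a cross-check from a plain Linux initiator, roughly the same controller and namespace fields can be read back with nvme-cli; a minimal sketch, assuming nvme-cli is installed, the target from this run is still listening on 10.0.0.2:4420, and the attached controller enumerates as /dev/nvme0:

    # Load the kernel NVMe/TCP initiator and attach to the subsystem under test.
    modprobe nvme-tcp
    nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1
    # Identify Controller and Identify Namespace (NGUID, EUI64, LBA formats, ...).
    nvme id-ctrl /dev/nvme0
    nvme id-ns /dev/nvme0n1
    # Detach again so the teardown below can delete the subsystem cleanly.
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1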
00:23:49.864 16:05:18 nvmf_tcp.nvmf_identify -- host/identify.sh@51 -- # sync
00:23:49.864 16:05:18 nvmf_tcp.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:23:49.864 16:05:18 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable
00:23:49.864 16:05:18 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x
00:23:49.864 16:05:18 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:23:49.864 16:05:18 nvmf_tcp.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT
00:23:49.864 16:05:18 nvmf_tcp.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini
00:23:49.864 16:05:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@488 -- # nvmfcleanup
00:23:49.864 16:05:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@117 -- # sync
00:23:49.864 16:05:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:23:49.864 16:05:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@120 -- # set +e
00:23:49.864 16:05:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@121 -- # for i in {1..20}
00:23:49.864 16:05:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:23:50.123 rmmod nvme_tcp
00:23:50.123 rmmod nvme_fabrics
00:23:50.123 rmmod nvme_keyring
00:23:50.123 16:05:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:23:50.123 16:05:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@124 -- # set -e
00:23:50.123 16:05:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@125 -- # return 0
00:23:50.123 16:05:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@489 -- # '[' -n 3846362 ']'
00:23:50.123 16:05:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@490 -- # killprocess 3846362
00:23:50.123 16:05:18 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@948 -- # '[' -z 3846362 ']'
00:23:50.123 16:05:18 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@952 -- # kill -0 3846362
00:23:50.123 16:05:18 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@953 -- # uname
00:23:50.123 16:05:18 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:23:50.123 16:05:18 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3846362
00:23:50.123 16:05:18 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:23:50.123 16:05:18 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:23:50.123 16:05:18 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3846362'
00:23:50.123 killing process with pid 3846362
00:23:50.123 16:05:18 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@967 -- # kill 3846362
00:23:50.123 16:05:18 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@972 -- # wait 3846362
00:23:50.123 16:05:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:23:50.123 16:05:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:23:50.123 16:05:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:23:50.123 16:05:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:23:50.123 16:05:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@278 -- # remove_spdk_ns
00:23:50.123 16:05:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:23:50.123 16:05:18 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:23:50.123 16:05:18 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:23:52.679 16:05:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:23:52.679
00:23:52.679 real 0m9.034s
00:23:52.679 user 0m7.240s
00:23:52.679 sys 0m4.317s
00:23:52.679 16:05:21 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@1124 -- # xtrace_disable
00:23:52.679 16:05:21 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x
00:23:52.679 ************************************
00:23:52.679 END TEST nvmf_identify
00:23:52.679 ************************************
00:23:52.679 16:05:21 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0
00:23:52.679 16:05:21 nvmf_tcp -- nvmf/nvmf.sh@98 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp
00:23:52.679 16:05:21 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']'
00:23:52.679 16:05:21 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable
00:23:52.679 16:05:21 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:23:52.679 ************************************
00:23:52.679 START TEST nvmf_perf
00:23:52.679 ************************************
00:23:52.679 16:05:21 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp
00:23:52.679 * Looking for test storage...
00:23:52.679 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host
00:23:52.679 16:05:21 nvmf_tcp.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:23:52.679 16:05:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@7 -- # uname -s
00:23:52.679 16:05:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:23:52.679 16:05:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:23:52.679 16:05:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:23:52.679 16:05:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:23:52.679 16:05:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:23:52.679 16:05:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:23:52.679 16:05:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:23:52.679 16:05:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:23:52.679 16:05:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:23:52.679 16:05:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:23:52.679 16:05:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562
00:23:52.679 16:05:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562
00:23:52.679 16:05:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:23:52.679 16:05:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:23:52.679 16:05:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:23:52.679 16:05:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:23:52.679 16:05:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:23:52.679 16:05:21 nvmf_tcp.nvmf_perf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]]
00:23:52.679 16:05:21 nvmf_tcp.nvmf_perf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:23:52.679 16:05:21 nvmf_tcp.nvmf_perf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:23:52.679 [paths/export.sh@2-@4, @6: repeated PATH values for the Go/protoc/golangci toolchain prefixes elided]
00:23:52.679 16:05:21 nvmf_tcp.nvmf_perf -- paths/export.sh@5 -- # export PATH
00:23:52.679 16:05:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@47 -- # : 0
00:23:52.679 16:05:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID
00:23:52.679 16:05:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@49 -- # build_nvmf_app_args
00:23:52.679 16:05:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:23:52.679 16:05:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:23:52.679 16:05:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:23:52.679 16:05:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@33 -- # '[' -n '' ']'
00:23:52.679 16:05:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']'
00:23:52.679 16:05:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@51 -- # have_pci_nics=0
00:23:52.679 16:05:21 nvmf_tcp.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64
00:23:52.679 16:05:21 nvmf_tcp.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512
00:23:52.679 16:05:21 nvmf_tcp.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:23:52.679 16:05:21 nvmf_tcp.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit
00:23:52.679 16:05:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@441 -- # '[' -z tcp ']'
00:23:52.679 16:05:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:23:52.679 16:05:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@448 -- # prepare_net_devs
00:23:52.679 16:05:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@410 -- # local -g is_hw=no
00:23:52.679 16:05:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@412 -- # remove_spdk_ns
00:23:52.679 16:05:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:23:52.679 16:05:21 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:23:52.679 16:05:21 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:23:52.680 16:05:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # [[ phy != virt ]]
00:23:52.680 16:05:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs
00:23:52.680 16:05:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@285 -- # xtrace_disable
00:23:52.680 16:05:21 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x
00:23:57.948 16:05:26 nvmf_tcp.nvmf_perf -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:23:57.948 [nvmf/common.sh@291-@298: empty pci_devs/pci_net_devs/pci_drivers/net_devs/e810/x722/mlx array declarations elided]
00:23:57.949 16:05:26 nvmf_tcp.nvmf_perf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:23:57.949 16:05:26 nvmf_tcp.nvmf_perf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:23:57.949 16:05:26 nvmf_tcp.nvmf_perf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:23:57.949 [nvmf/common.sh@306-@318: Mellanox device IDs 0xa2dc 0x1021 0xa2d6 0x101d 0x1017 0x1019 0x1015 0x1013 appended to mlx]
00:23:57.949 16:05:26 nvmf_tcp.nvmf_perf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}")
00:23:57.949 16:05:26 nvmf_tcp.nvmf_perf -- nvmf/common.sh@321 -- # [[ tcp == rdma ]]
00:23:57.949 16:05:26 nvmf_tcp.nvmf_perf -- nvmf/common.sh@329 -- # [[ e810 == e810 ]]
00:23:57.949 16:05:26 nvmf_tcp.nvmf_perf -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}")
00:23:57.949 16:05:26 nvmf_tcp.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)'
00:23:57.949 Found 0000:86:00.0 (0x8086 - 0x159b)
00:23:57.949 16:05:26 nvmf_tcp.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)'
00:23:57.949 Found 0000:86:00.1 (0x8086 - 0x159b)
00:23:57.949 [per-device driver and device-ID checks (nvmf/common.sh@340-@352, @366-@399) elided]
00:23:57.949 16:05:26 nvmf_tcp.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0'
00:23:57.949 Found net devices under 0000:86:00.0: cvl_0_0
00:23:57.949 16:05:26 nvmf_tcp.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1'
00:23:57.949 Found net devices under 0000:86:00.1: cvl_0_1
00:23:57.949 16:05:26 nvmf_tcp.nvmf_perf -- nvmf/common.sh@404 -- # (( 2 == 0 ))
00:23:57.949 16:05:26 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # is_hw=yes
00:23:57.949 16:05:26 nvmf_tcp.nvmf_perf -- nvmf/common.sh@416 -- # [[ yes == yes ]]
00:23:57.949 16:05:26 nvmf_tcp.nvmf_perf -- nvmf/common.sh@417 -- # [[ tcp == tcp ]]
00:23:57.949 16:05:26 nvmf_tcp.nvmf_perf -- nvmf/common.sh@418 -- # nvmf_tcp_init
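gather_supported_nvmf_pci_devs selects NIC ports purely by PCI vendor:device ID (here Intel E810, 0x1592/0x159b). A rough standalone equivalent of that match, assuming lspci is available; this loop is an illustration, not SPDK's helper:

    # List Intel E810-XXV functions (8086:159b) and the net devices behind them.
    for pci in $(lspci -Dn -d 8086:159b | awk '{print $1}'); do
        echo "NVMe-oF capable port: $pci -> $(ls /sys/bus/pci/devices/$pci/net/)"
    done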
00:23:57.949 16:05:26 nvmf_tcp.nvmf_perf -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1
00:23:57.949 16:05:26 nvmf_tcp.nvmf_perf -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:23:57.949 16:05:26 nvmf_tcp.nvmf_perf -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:23:57.949 16:05:26 nvmf_tcp.nvmf_perf -- nvmf/common.sh@234 -- # (( 2 > 1 ))
00:23:57.949 16:05:26 nvmf_tcp.nvmf_perf -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:23:57.949 16:05:26 nvmf_tcp.nvmf_perf -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:23:57.949 16:05:26 nvmf_tcp.nvmf_perf -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP=
00:23:57.949 16:05:26 nvmf_tcp.nvmf_perf -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:23:57.949 16:05:26 nvmf_tcp.nvmf_perf -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:23:57.949 16:05:26 nvmf_tcp.nvmf_perf -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0
00:23:57.949 16:05:26 nvmf_tcp.nvmf_perf -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1
00:23:57.949 16:05:26 nvmf_tcp.nvmf_perf -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk
00:23:57.949 16:05:26 nvmf_tcp.nvmf_perf -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:23:57.949 16:05:26 nvmf_tcp.nvmf_perf -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:23:57.949 16:05:26 nvmf_tcp.nvmf_perf -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:23:57.949 16:05:26 nvmf_tcp.nvmf_perf -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up
00:23:57.949 16:05:26 nvmf_tcp.nvmf_perf -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:23:57.949 16:05:26 nvmf_tcp.nvmf_perf -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:23:57.949 16:05:26 nvmf_tcp.nvmf_perf -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:23:57.949 16:05:26 nvmf_tcp.nvmf_perf -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:23:57.949 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:23:57.949 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.161 ms
00:23:57.949
00:23:57.949 --- 10.0.0.2 ping statistics ---
00:23:57.949 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:23:57.949 rtt min/avg/max/mdev = 0.161/0.161/0.161/0.000 ms
00:23:57.949 16:05:26 nvmf_tcp.nvmf_perf -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:23:57.949 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:23:57.949 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.160 ms
00:23:57.949
00:23:57.949 --- 10.0.0.1 ping statistics ---
00:23:57.949 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:23:57.949 rtt min/avg/max/mdev = 0.160/0.160/0.160/0.000 ms
00:23:57.949 16:05:26 nvmf_tcp.nvmf_perf -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:23:57.949 16:05:26 nvmf_tcp.nvmf_perf -- nvmf/common.sh@422 -- # return 0
00:23:57.949 16:05:26 nvmf_tcp.nvmf_perf -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:23:57.949 16:05:26 nvmf_tcp.nvmf_perf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:23:57.949 16:05:26 nvmf_tcp.nvmf_perf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:23:57.949 16:05:26 nvmf_tcp.nvmf_perf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:23:57.949 16:05:26 nvmf_tcp.nvmf_perf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:23:57.949 16:05:26 nvmf_tcp.nvmf_perf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:23:57.949 16:05:26 nvmf_tcp.nvmf_perf -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:23:57.949 16:05:26 nvmf_tcp.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF
00:23:57.949 16:05:26 nvmf_tcp.nvmf_perf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:23:57.949 16:05:26 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@722 -- # xtrace_disable
00:23:57.949 16:05:26 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x
00:23:57.949 16:05:26 nvmf_tcp.nvmf_perf -- nvmf/common.sh@481 -- # nvmfpid=3850088
00:23:57.949 16:05:26 nvmf_tcp.nvmf_perf -- nvmf/common.sh@482 -- # waitforlisten 3850088
00:23:57.949 16:05:26 nvmf_tcp.nvmf_perf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
00:23:57.949 16:05:26 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@829 -- # '[' -z 3850088 ']'
00:23:57.949 16:05:26 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:23:57.949 16:05:26 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@834 -- # local max_retries=100
00:23:57.949 16:05:26 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:23:57.949 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:23:57.949 16:05:26 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@838 -- # xtrace_disable
00:23:57.949 16:05:26 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x
00:23:57.949 [2024-07-15 16:05:26.651612] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization...
00:23:57.949 [2024-07-15 16:05:26.651652] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:23:57.949 EAL: No free 2048 kB hugepages reported on node 1
00:23:57.949 [2024-07-15 16:05:26.708087] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4
00:23:57.949 [2024-07-15 16:05:26.780939] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:23:57.949 [2024-07-15 16:05:26.780981] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:23:57.949 [2024-07-15 16:05:26.780988] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only SPDK application currently running.
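The nvmf_tcp_init sequence above moves one physical port into a network namespace so that target (10.0.0.2 on cvl_0_0) and initiator (10.0.0.1 on cvl_0_1) traffic actually crosses the wire between two ports of the same NIC. A condensed sketch of that topology, assuming two connected ports named if0 and if1 (placeholder names):

    # Isolate the target port in its own namespace and address both ends.
    ip netns add nvmf_tgt_ns
    ip link set if0 netns nvmf_tgt_ns
    ip addr add 10.0.0.1/24 dev if1
    ip netns exec nvmf_tgt_ns ip addr add 10.0.0.2/24 dev if0
    ip link set if1 up
    ip netns exec nvmf_tgt_ns ip link set if0 up
    ip netns exec nvmf_tgt_ns ip link set lo up
    # Open the NVMe/TCP port and verify reachability in both directions.
    iptables -I INPUT 1 -i if1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2
    ip netns exec nvmf_tgt_ns ping -c 1 10.0.0.1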
00:23:57.949 [2024-07-15 16:05:26.780994] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:23:57.950 [2024-07-15 16:05:26.781063] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:23:57.950 [2024-07-15 16:05:26.781177] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2
00:23:57.950 [2024-07-15 16:05:26.781204] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:23:57.950 [2024-07-15 16:05:26.781203] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3
00:23:58.885 16:05:27 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:23:58.885 16:05:27 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@862 -- # return 0
00:23:58.885 16:05:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt
00:23:58.885 16:05:27 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@728 -- # xtrace_disable
00:23:58.885 16:05:27 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x
00:23:58.885 16:05:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:23:58.885 16:05:27 nvmf_tcp.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh
00:23:58.885 16:05:27 nvmf_tcp.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config
00:24:02.171 16:05:30 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev
00:24:02.172 16:05:30 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr'
00:24:02.172 16:05:30 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:5e:00.0
00:24:02.172 16:05:30 nvmf_tcp.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512
00:24:02.172 16:05:30 nvmf_tcp.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0'
00:24:02.172 16:05:30 nvmf_tcp.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:5e:00.0 ']'
00:24:02.172 16:05:30 nvmf_tcp.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1'
00:24:02.172 16:05:30 nvmf_tcp.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']'
00:24:02.172 16:05:30 nvmf_tcp.nvmf_perf -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o
00:24:02.172 [2024-07-15 16:05:31.058344] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:24:02.430 16:05:31 nvmf_tcp.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:24:02.430 16:05:31 nvmf_tcp.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs
00:24:02.430 16:05:31 nvmf_tcp.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:24:02.689 16:05:31 nvmf_tcp.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs
00:24:02.689 16:05:31 nvmf_tcp.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1
00:24:02.948 16:05:31 nvmf_tcp.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:24:02.948 [2024-07-15 16:05:31.805112] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:24:03.206 16:05:31 nvmf_tcp.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
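Stripped of the test-harness plumbing, the RPC sequence above is the usual way to stand up an SPDK NVMe/TCP target: create the transport, back a subsystem with bdevs, then expose a listener. A minimal recap against an already-running nvmf_tgt (rpc.py path, NQN, and address taken from this run; rpc.py talks to /var/tmp/spdk.sock by default):

    rpc.py nvmf_create_transport -t tcp -o
    rpc.py bdev_malloc_create 64 512
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420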
00:24:03.206 16:05:32 nvmf_tcp.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:5e:00.0 ']'
00:24:03.206 16:05:32 nvmf_tcp.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:5e:00.0'
00:24:03.206 16:05:32 nvmf_tcp.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']'
00:24:03.206 16:05:32 nvmf_tcp.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:5e:00.0'
00:24:04.580 Initializing NVMe Controllers
00:24:04.580 Attached to NVMe Controller at 0000:5e:00.0 [8086:0a54]
00:24:04.580 Associating PCIE (0000:5e:00.0) NSID 1 with lcore 0
00:24:04.580 Initialization complete. Launching workers.
00:24:04.580 ========================================================
00:24:04.580                                          Latency(us)
00:24:04.580 Device Information                     :      IOPS    MiB/s  Average      min      max
00:24:04.580 PCIE (0000:5e:00.0) NSID 1 from core 0 :  98096.08   383.19   325.86    39.84  4278.35
00:24:04.580 ========================================================
00:24:04.580 Total                                  :  98096.08   383.19   325.86    39.84  4278.35
00:24:04.580
00:24:04.580 16:05:33 nvmf_tcp.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
00:24:04.580 EAL: No free 2048 kB hugepages reported on node 1
00:24:05.955 Initializing NVMe Controllers
00:24:05.955 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:24:05.955 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:24:05.955 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:24:05.955 Initialization complete. Launching workers.
00:24:05.955 ========================================================
00:24:05.955                                                                          Latency(us)
00:24:05.955 Device Information                                                     :    IOPS  MiB/s   Average       min       max
00:24:05.955 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0:  113.00   0.44   9138.65    134.46  44723.29
00:24:05.955 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0:   64.00   0.25  16084.94   6011.37  47884.68
00:24:05.955 ========================================================
00:24:05.955 Total                                                                  :  177.00   0.69  11650.30    134.46  47884.68
00:24:05.955
00:24:05.955 16:05:34 nvmf_tcp.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
00:24:05.955 EAL: No free 2048 kB hugepages reported on node 1
00:24:07.331 Initializing NVMe Controllers
00:24:07.331 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:24:07.331 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:24:07.331 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:24:07.331 Initialization complete. Launching workers.
00:24:07.331 ========================================================
00:24:07.331                                                                          Latency(us)
00:24:07.331 Device Information                                                     :     IOPS  MiB/s  Average      min       max
00:24:07.331 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 10648.42  41.60  3006.56   330.92   6475.65
00:24:07.331 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0:  3847.61  15.03  8330.01  5363.49  16027.53
00:24:07.331 ========================================================
00:24:07.331 Total                                                                  : 14496.03  56.63  4419.54   330.92  16027.53
00:24:07.331
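The knobs that matter in these spdk_nvme_perf runs are queue depth (-q), IO size (-o), workload mix (-w/-M), and duration (-t); compare how -q 1 trades IOPS for latency against -q 32 in the two tables above. A sketch of a standalone run against the same listener (binary path from this build tree; numbers will obviously differ per system):

    # 4 KiB random 50/50 read/write at queue depth 32 for 10 seconds.
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf \
        -q 32 -o 4096 -w randrw -M 50 -t 10 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'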
00:24:07.331 16:05:36 nvmf_tcp.nvmf_perf -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]]
00:24:07.331 16:05:36 nvmf_tcp.nvmf_perf -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]]
00:24:07.331 16:05:36 nvmf_tcp.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
00:24:07.331 EAL: No free 2048 kB hugepages reported on node 1
00:24:09.866 Initializing NVMe Controllers
00:24:09.866 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:24:09.866 Controller IO queue size 128, less than required.
00:24:09.866 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:24:09.866 Controller IO queue size 128, less than required.
00:24:09.866 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:24:09.866 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:24:09.866 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:24:09.866 Initialization complete. Launching workers.
00:24:09.866 ========================================================
00:24:09.866                                                                          Latency(us)
00:24:09.866 Device Information                                                     :    IOPS   MiB/s    Average        min        max
00:24:09.866 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1291.47  322.87  102120.78   62059.74  141615.61
00:24:09.866 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0:  623.99  156.00  212195.98   85728.94  311665.88
00:24:09.866 ========================================================
00:24:09.866 Total                                                                  : 1915.46  478.87  137979.26   62059.74  311665.88
00:24:09.866
00:24:09.866 16:05:38 nvmf_tcp.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4
00:24:09.866 EAL: No free 2048 kB hugepages reported on node 1
00:24:10.194 No valid NVMe controllers or AIO or URING devices found
00:24:10.194 Initializing NVMe Controllers
00:24:10.194 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:24:10.194 Controller IO queue size 128, less than required.
00:24:10.194 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:24:10.194 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test
00:24:10.194 Controller IO queue size 128, less than required.
00:24:10.194 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:24:10.194 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. Removing this ns from test
00:24:10.194 WARNING: Some requested NVMe devices were skipped
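The skipped-namespace warnings are expected here: 36964 is deliberately not a multiple of the 512-byte sector size, so spdk_nvme_perf drops both namespaces and reports that no valid devices remain. A one-line guard of the kind a wrapper script could use before launching perf (variable names are illustrative):

    io_size=36964; blk_size=512
    # Reject IO sizes that are not sector-aligned.
    (( io_size % blk_size == 0 )) || { echo "IO size $io_size is not a multiple of $blk_size" >&2; exit 1; }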
00:24:10.194 16:05:38 nvmf_tcp.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat
00:24:10.194 EAL: No free 2048 kB hugepages reported on node 1
00:24:12.728 Initializing NVMe Controllers
00:24:12.728 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:24:12.728 Controller IO queue size 128, less than required.
00:24:12.728 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:24:12.728 Controller IO queue size 128, less than required.
00:24:12.728 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:24:12.728 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:24:12.728 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:24:12.728 Initialization complete. Launching workers.
00:24:12.728
00:24:12.728 ====================
00:24:12.728 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics:
00:24:12.728 TCP transport:
00:24:12.728     polls: 25472
00:24:12.728     idle_polls: 9426
00:24:12.728     sock_completions: 16046
00:24:12.728     nvme_completions: 5547
00:24:12.728     submitted_requests: 8318
00:24:12.728     queued_requests: 1
00:24:12.728
00:24:12.728 ====================
00:24:12.728 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics:
00:24:12.728 TCP transport:
00:24:12.728     polls: 29887
00:24:12.728     idle_polls: 13671
00:24:12.728     sock_completions: 16216
00:24:12.728     nvme_completions: 5547
00:24:12.728     submitted_requests: 8268
00:24:12.728     queued_requests: 1
00:24:12.728 ========================================================
00:24:12.728                                                                          Latency(us)
00:24:12.728 Device Information                                                     :    IOPS   MiB/s   Average       min        max
00:24:12.728 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1385.75  346.44  94090.11  54296.52  144493.84
00:24:12.728 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1385.75  346.44  93853.68  35529.96  137307.25
00:24:12.728 ========================================================
00:24:12.728 Total                                                                  : 2771.50  692.88  93971.89  35529.96  144493.84
00:24:12.728
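--transport-stat makes the pollgroup efficiency visible: for NSID 1 above, 9426 of 25472 polls found no work. A throwaway calculation of that idle ratio, with the numbers copied from the block above:

    # Idle-poll percentage for the NSID 1 pollgroup: 100 * idle_polls / polls.
    awk 'BEGIN { printf "idle polls: %.1f%%\n", 100 * 9426 / 25472 }'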
00:24:12.987 16:05:41 nvmf_tcp.nvmf_perf -- host/perf.sh@66 -- # sync
00:24:12.987 16:05:41 nvmf_tcp.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:24:12.987 16:05:41 nvmf_tcp.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']'
00:24:12.987 16:05:41 nvmf_tcp.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT
00:24:12.987 16:05:41 nvmf_tcp.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini
00:24:12.987 16:05:41 nvmf_tcp.nvmf_perf -- nvmf/common.sh@488 -- # nvmfcleanup
00:24:12.987 16:05:41 nvmf_tcp.nvmf_perf -- nvmf/common.sh@117 -- # sync
00:24:12.987 16:05:41 nvmf_tcp.nvmf_perf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:24:12.987 16:05:41 nvmf_tcp.nvmf_perf -- nvmf/common.sh@120 -- # set +e
00:24:12.987 16:05:41 nvmf_tcp.nvmf_perf -- nvmf/common.sh@121 -- # for i in {1..20}
00:24:12.987 16:05:41 nvmf_tcp.nvmf_perf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:24:12.987 rmmod nvme_tcp
00:24:12.987 rmmod nvme_fabrics
00:24:12.987 rmmod nvme_keyring
00:24:12.987 16:05:41 nvmf_tcp.nvmf_perf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:24:12.987 16:05:41 nvmf_tcp.nvmf_perf -- nvmf/common.sh@124 -- # set -e
00:24:12.987 16:05:41 nvmf_tcp.nvmf_perf -- nvmf/common.sh@125 -- # return 0
00:24:12.987 16:05:41 nvmf_tcp.nvmf_perf -- nvmf/common.sh@489 -- # '[' -n 3850088 ']'
00:24:12.987 16:05:41 nvmf_tcp.nvmf_perf -- nvmf/common.sh@490 -- # killprocess 3850088
00:24:12.987 16:05:41 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@948 -- # '[' -z 3850088 ']'
00:24:12.987 16:05:41 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@952 -- # kill -0 3850088
00:24:12.987 16:05:41 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@953 -- # uname
00:24:12.987 16:05:41 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:24:12.987 16:05:41 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3850088
00:24:12.987 16:05:41 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:24:12.987 16:05:41 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:24:12.987 16:05:41 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3850088'
00:24:12.987 killing process with pid 3850088
00:24:12.987 16:05:41 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@967 -- # kill 3850088
00:24:12.987 16:05:41 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@972 -- # wait 3850088
00:24:14.894 16:05:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:24:14.894 16:05:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:24:14.894 16:05:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:24:14.894 16:05:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:24:14.894 16:05:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@278 -- # remove_spdk_ns
00:24:14.894 16:05:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:24:14.894 16:05:43 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:24:14.894 16:05:43 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:24:16.794 16:05:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:24:16.794
00:24:16.794 real 0m24.297s
00:24:16.794 user 1m6.214s
00:24:16.794 sys 0m7.183s
00:24:16.794 16:05:45 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1124 -- # xtrace_disable
00:24:16.794 16:05:45 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x
00:24:16.794 ************************************
00:24:16.794 END TEST nvmf_perf
00:24:16.794 ************************************
00:24:16.794 16:05:45 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0
00:24:16.794 16:05:45 nvmf_tcp -- nvmf/nvmf.sh@99 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp
00:24:16.794 16:05:45 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']'
00:24:16.794 16:05:45 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable
00:24:16.794 16:05:45 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:24:16.794 ************************************
00:24:16.794 START TEST nvmf_fio_host
00:24:16.794 ************************************
00:24:16.794 16:05:45 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp
00:24:16.794 * Looking for test storage...
00:24:16.794 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:16.794 16:05:45 nvmf_tcp.nvmf_fio_host -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:16.794 16:05:45 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:16.794 16:05:45 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:16.794 16:05:45 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:16.794 16:05:45 nvmf_tcp.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:16.794 16:05:45 nvmf_tcp.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:16.794 16:05:45 nvmf_tcp.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:16.794 16:05:45 nvmf_tcp.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:24:16.794 16:05:45 nvmf_tcp.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:16.794 16:05:45 nvmf_tcp.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:16.794 16:05:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:24:16.794 16:05:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:16.794 16:05:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:16.794 16:05:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:24:16.794 16:05:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:16.794 16:05:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:16.794 16:05:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:16.794 16:05:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:16.794 16:05:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:16.794 16:05:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:16.794 16:05:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:16.794 16:05:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:24:16.794 16:05:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:24:16.794 16:05:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:16.794 16:05:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:16.794 16:05:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:16.794 16:05:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:16.794 16:05:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:16.794 16:05:45 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:16.794 16:05:45 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:16.794 16:05:45 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:16.794 16:05:45 nvmf_tcp.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:16.794 16:05:45 nvmf_tcp.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:16.794 16:05:45 nvmf_tcp.nvmf_fio_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:16.794 16:05:45 nvmf_tcp.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:24:16.794 16:05:45 nvmf_tcp.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:16.794 16:05:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@47 -- # : 0 00:24:16.794 16:05:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:16.794 16:05:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:16.794 16:05:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:16.794 16:05:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:16.795 16:05:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:16.795 16:05:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:16.795 16:05:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:16.795 16:05:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:16.795 16:05:45 nvmf_tcp.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:24:16.795 16:05:45 nvmf_tcp.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:24:16.795 16:05:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:16.795 16:05:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:16.795 16:05:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:16.795 16:05:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:16.795 16:05:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:16.795 16:05:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:16.795 16:05:45 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:16.795 16:05:45 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:16.795 16:05:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:16.795 16:05:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:16.795 16:05:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@285 -- # xtrace_disable 00:24:16.795 16:05:45 nvmf_tcp.nvmf_fio_host -- 
common/autotest_common.sh@10 -- # set +x 00:24:22.062 16:05:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:22.062 16:05:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@291 -- # pci_devs=() 00:24:22.062 16:05:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:22.062 16:05:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:22.062 16:05:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:22.062 16:05:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:22.062 16:05:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:22.062 16:05:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@295 -- # net_devs=() 00:24:22.062 16:05:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:22.062 16:05:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@296 -- # e810=() 00:24:22.062 16:05:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@296 -- # local -ga e810 00:24:22.062 16:05:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@297 -- # x722=() 00:24:22.062 16:05:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@297 -- # local -ga x722 00:24:22.062 16:05:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@298 -- # mlx=() 00:24:22.062 16:05:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@298 -- # local -ga mlx 00:24:22.062 16:05:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:22.062 16:05:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:22.062 16:05:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:22.062 16:05:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:22.062 16:05:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:22.062 16:05:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:22.062 16:05:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:22.062 16:05:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:22.062 16:05:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:22.062 16:05:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:22.062 16:05:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:22.062 16:05:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:22.062 16:05:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:22.062 16:05:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:22.062 16:05:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:22.062 16:05:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:22.062 16:05:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:22.062 16:05:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:22.062 16:05:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:24:22.062 Found 0000:86:00.0 (0x8086 - 0x159b) 00:24:22.062 16:05:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 
00:24:22.062 16:05:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:22.062 16:05:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:22.062 16:05:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:22.062 16:05:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:22.062 16:05:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:22.062 16:05:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:24:22.062 Found 0000:86:00.1 (0x8086 - 0x159b) 00:24:22.062 16:05:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:22.062 16:05:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:22.062 16:05:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:22.062 16:05:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:22.062 16:05:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:22.062 16:05:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:22.062 16:05:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:22.062 16:05:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:22.062 16:05:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:22.062 16:05:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:22.062 16:05:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:22.062 16:05:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:22.062 16:05:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:22.062 16:05:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:22.062 16:05:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:22.062 16:05:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:24:22.062 Found net devices under 0000:86:00.0: cvl_0_0 00:24:22.062 16:05:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:22.062 16:05:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:22.062 16:05:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:22.062 16:05:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:22.062 16:05:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:22.062 16:05:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:22.062 16:05:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:22.062 16:05:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:22.062 16:05:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:24:22.062 Found net devices under 0000:86:00.1: cvl_0_1 00:24:22.062 16:05:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:22.062 16:05:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:22.062 16:05:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # is_hw=yes 
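The two ice ports discovered above (cvl_0_0 and cvl_0_1) are wired next into a point-to-point test topology: the target port is moved into a private network namespace so the SPDK target and the kernel initiator can share one machine without short-circuiting through the local stack. A minimal sketch of that bring-up, assuming the interface names and 10.0.0.x addresses this log uses:

    # target side lives in its own namespace
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    # initiator keeps 10.0.0.1, target answers on 10.0.0.2
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # admit NVMe/TCP traffic on the default port, then verify reachability
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2

The nvmf_tcp_init trace that follows performs essentially these steps (plus flushing any stale addresses first) before nvmf_tgt is started inside the namespace.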
00:24:22.062 16:05:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:22.062 16:05:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:22.062 16:05:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:22.062 16:05:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:22.062 16:05:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:22.062 16:05:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:22.062 16:05:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:22.062 16:05:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:22.062 16:05:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:22.062 16:05:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:22.062 16:05:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:22.062 16:05:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:22.062 16:05:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:22.062 16:05:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:22.062 16:05:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:22.062 16:05:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:22.062 16:05:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:22.062 16:05:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:22.062 16:05:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:22.062 16:05:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:22.062 16:05:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:22.062 16:05:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:22.062 16:05:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:22.062 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:22.062 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.179 ms 00:24:22.062 00:24:22.062 --- 10.0.0.2 ping statistics --- 00:24:22.062 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:22.062 rtt min/avg/max/mdev = 0.179/0.179/0.179/0.000 ms 00:24:22.062 16:05:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:22.062 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:22.062 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.235 ms 00:24:22.062 00:24:22.062 --- 10.0.0.1 ping statistics --- 00:24:22.062 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:22.062 rtt min/avg/max/mdev = 0.235/0.235/0.235/0.000 ms 00:24:22.062 16:05:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:22.062 16:05:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@422 -- # return 0 00:24:22.062 16:05:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:22.062 16:05:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:22.062 16:05:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:22.062 16:05:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:22.062 16:05:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:22.062 16:05:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:22.062 16:05:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:22.322 16:05:51 nvmf_tcp.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:24:22.322 16:05:51 nvmf_tcp.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:24:22.322 16:05:51 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:22.322 16:05:51 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:24:22.322 16:05:51 nvmf_tcp.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=3856202 00:24:22.322 16:05:51 nvmf_tcp.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:24:22.322 16:05:51 nvmf_tcp.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:22.322 16:05:51 nvmf_tcp.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 3856202 00:24:22.322 16:05:51 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@829 -- # '[' -z 3856202 ']' 00:24:22.322 16:05:51 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:22.322 16:05:51 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:22.322 16:05:51 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:22.322 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:22.322 16:05:51 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:22.322 16:05:51 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:24:22.322 [2024-07-15 16:05:51.065844] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 00:24:22.322 [2024-07-15 16:05:51.065888] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:22.322 EAL: No free 2048 kB hugepages reported on node 1 00:24:22.322 [2024-07-15 16:05:51.123229] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:22.322 [2024-07-15 16:05:51.197097] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:24:22.322 [2024-07-15 16:05:51.197140] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:22.322 [2024-07-15 16:05:51.197148] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:22.322 [2024-07-15 16:05:51.197154] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:22.322 [2024-07-15 16:05:51.197160] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:22.322 [2024-07-15 16:05:51.197208] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:22.322 [2024-07-15 16:05:51.197310] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:24:22.322 [2024-07-15 16:05:51.197333] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:24:22.322 [2024-07-15 16:05:51.197334] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:23.258 16:05:51 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:23.258 16:05:51 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@862 -- # return 0 00:24:23.258 16:05:51 nvmf_tcp.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:24:23.258 [2024-07-15 16:05:52.023704] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:23.258 16:05:52 nvmf_tcp.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:24:23.258 16:05:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:23.258 16:05:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:24:23.258 16:05:52 nvmf_tcp.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:24:23.517 Malloc1 00:24:23.517 16:05:52 nvmf_tcp.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:23.775 16:05:52 nvmf_tcp.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:24:23.775 16:05:52 nvmf_tcp.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:24.033 [2024-07-15 16:05:52.797958] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:24.033 16:05:52 nvmf_tcp.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:24:24.290 16:05:53 nvmf_tcp.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:24:24.290 16:05:53 nvmf_tcp.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:24:24.290 16:05:53 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 
trsvcid=4420 ns=1' --bs=4096 00:24:24.290 16:05:53 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:24:24.290 16:05:53 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:24:24.290 16:05:53 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:24:24.290 16:05:53 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:24.290 16:05:53 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:24:24.290 16:05:53 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:24:24.290 16:05:53 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:24:24.290 16:05:53 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:24.290 16:05:53 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:24:24.290 16:05:53 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:24:24.290 16:05:53 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:24:24.290 16:05:53 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:24:24.290 16:05:53 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:24:24.290 16:05:53 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:24.290 16:05:53 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:24:24.290 16:05:53 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:24:24.290 16:05:53 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:24:24.290 16:05:53 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:24:24.290 16:05:53 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:24:24.290 16:05:53 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:24:24.547 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:24:24.547 fio-3.35 00:24:24.547 Starting 1 thread 00:24:24.547 EAL: No free 2048 kB hugepages reported on node 1 00:24:27.077 00:24:27.077 test: (groupid=0, jobs=1): err= 0: pid=3856799: Mon Jul 15 16:05:55 2024 00:24:27.077 read: IOPS=11.7k, BW=45.9MiB/s (48.1MB/s)(92.0MiB/2005msec) 00:24:27.077 slat (nsec): min=1604, max=252992, avg=1757.49, stdev=2317.27 00:24:27.077 clat (usec): min=3672, max=10097, avg=6030.45, stdev=454.93 00:24:27.077 lat (usec): min=3705, max=10098, avg=6032.20, stdev=454.89 00:24:27.077 clat percentiles (usec): 00:24:27.077 | 1.00th=[ 4883], 5.00th=[ 5276], 10.00th=[ 5473], 20.00th=[ 5669], 00:24:27.077 | 30.00th=[ 5800], 40.00th=[ 5932], 50.00th=[ 6063], 60.00th=[ 6128], 00:24:27.077 | 70.00th=[ 6259], 80.00th=[ 6390], 90.00th=[ 6587], 95.00th=[ 6718], 00:24:27.077 | 99.00th=[ 7046], 99.50th=[ 7111], 99.90th=[ 8455], 99.95th=[ 9503], 00:24:27.077 | 99.99th=[10028] 00:24:27.077 bw ( KiB/s): 
min=46184, max=47592, per=100.00%, avg=46994.00, stdev=612.66, samples=4 00:24:27.077 iops : min=11546, max=11898, avg=11748.50, stdev=153.17, samples=4 00:24:27.077 write: IOPS=11.7k, BW=45.6MiB/s (47.8MB/s)(91.5MiB/2005msec); 0 zone resets 00:24:27.077 slat (nsec): min=1664, max=232708, avg=1847.96, stdev=1697.39 00:24:27.077 clat (usec): min=2486, max=9660, avg=4862.93, stdev=390.27 00:24:27.077 lat (usec): min=2501, max=9662, avg=4864.78, stdev=390.27 00:24:27.077 clat percentiles (usec): 00:24:27.077 | 1.00th=[ 3949], 5.00th=[ 4293], 10.00th=[ 4424], 20.00th=[ 4555], 00:24:27.077 | 30.00th=[ 4686], 40.00th=[ 4752], 50.00th=[ 4883], 60.00th=[ 4948], 00:24:27.077 | 70.00th=[ 5014], 80.00th=[ 5145], 90.00th=[ 5276], 95.00th=[ 5407], 00:24:27.077 | 99.00th=[ 5735], 99.50th=[ 5800], 99.90th=[ 7898], 99.95th=[ 9372], 00:24:27.077 | 99.99th=[ 9634] 00:24:27.077 bw ( KiB/s): min=46272, max=47136, per=99.95%, avg=46690.00, stdev=354.92, samples=4 00:24:27.077 iops : min=11568, max=11784, avg=11672.50, stdev=88.73, samples=4 00:24:27.077 lat (msec) : 4=0.69%, 10=99.31%, 20=0.01% 00:24:27.077 cpu : usr=71.26%, sys=25.90%, ctx=78, majf=0, minf=6 00:24:27.077 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:24:27.077 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:27.077 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:24:27.077 issued rwts: total=23555,23414,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:27.077 latency : target=0, window=0, percentile=100.00%, depth=128 00:24:27.077 00:24:27.077 Run status group 0 (all jobs): 00:24:27.077 READ: bw=45.9MiB/s (48.1MB/s), 45.9MiB/s-45.9MiB/s (48.1MB/s-48.1MB/s), io=92.0MiB (96.5MB), run=2005-2005msec 00:24:27.077 WRITE: bw=45.6MiB/s (47.8MB/s), 45.6MiB/s-45.6MiB/s (47.8MB/s-47.8MB/s), io=91.5MiB (95.9MB), run=2005-2005msec 00:24:27.077 16:05:55 nvmf_tcp.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:24:27.077 16:05:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:24:27.077 16:05:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:24:27.077 16:05:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:24:27.077 16:05:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:24:27.077 16:05:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:27.077 16:05:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:24:27.077 16:05:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:24:27.077 16:05:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:24:27.077 16:05:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:27.077 16:05:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:24:27.077 16:05:55 nvmf_tcp.nvmf_fio_host -- 
common/autotest_common.sh@1345 -- # awk '{print $3}' 00:24:27.077 16:05:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:24:27.077 16:05:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:24:27.077 16:05:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:24:27.077 16:05:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:27.077 16:05:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:24:27.077 16:05:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:24:27.077 16:05:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:24:27.077 16:05:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:24:27.077 16:05:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:24:27.077 16:05:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:24:27.077 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:24:27.077 fio-3.35 00:24:27.077 Starting 1 thread 00:24:27.077 EAL: No free 2048 kB hugepages reported on node 1 00:24:29.606 00:24:29.606 test: (groupid=0, jobs=1): err= 0: pid=3857376: Mon Jul 15 16:05:58 2024 00:24:29.606 read: IOPS=10.6k, BW=166MiB/s (174MB/s)(334MiB/2009msec) 00:24:29.606 slat (nsec): min=2575, max=84224, avg=2859.50, stdev=1255.74 00:24:29.606 clat (usec): min=2330, max=51197, avg=7174.24, stdev=3550.00 00:24:29.606 lat (usec): min=2333, max=51199, avg=7177.10, stdev=3550.05 00:24:29.606 clat percentiles (usec): 00:24:29.606 | 1.00th=[ 3621], 5.00th=[ 4293], 10.00th=[ 4752], 20.00th=[ 5407], 00:24:29.606 | 30.00th=[ 5932], 40.00th=[ 6390], 50.00th=[ 6849], 60.00th=[ 7373], 00:24:29.606 | 70.00th=[ 7898], 80.00th=[ 8356], 90.00th=[ 9110], 95.00th=[ 9896], 00:24:29.606 | 99.00th=[12649], 99.50th=[44303], 99.90th=[50070], 99.95th=[50594], 00:24:29.606 | 99.99th=[51119] 00:24:29.606 bw ( KiB/s): min=81440, max=91616, per=50.71%, avg=86280.00, stdev=4239.76, samples=4 00:24:29.606 iops : min= 5090, max= 5726, avg=5392.50, stdev=264.98, samples=4 00:24:29.606 write: IOPS=6303, BW=98.5MiB/s (103MB/s)(176MiB/1787msec); 0 zone resets 00:24:29.606 slat (usec): min=29, max=387, avg=32.01, stdev= 7.35 00:24:29.606 clat (usec): min=3216, max=14496, avg=8546.68, stdev=1573.94 00:24:29.606 lat (usec): min=3247, max=14613, avg=8578.70, stdev=1575.33 00:24:29.606 clat percentiles (usec): 00:24:29.606 | 1.00th=[ 5538], 5.00th=[ 6259], 10.00th=[ 6718], 20.00th=[ 7242], 00:24:29.606 | 30.00th=[ 7635], 40.00th=[ 7963], 50.00th=[ 8356], 60.00th=[ 8717], 00:24:29.606 | 70.00th=[ 9110], 80.00th=[ 9765], 90.00th=[10945], 95.00th=[11600], 00:24:29.606 | 99.00th=[12649], 99.50th=[13042], 99.90th=[13829], 99.95th=[14091], 00:24:29.606 | 99.99th=[14353] 00:24:29.606 bw ( KiB/s): min=85568, max=94752, per=88.99%, avg=89752.00, stdev=3900.53, samples=4 00:24:29.606 iops : min= 5348, max= 5922, avg=5609.50, stdev=243.78, samples=4 00:24:29.606 lat (msec) : 4=1.60%, 10=89.25%, 20=8.76%, 50=0.33%, 100=0.06% 00:24:29.606 cpu : usr=86.35%, sys=12.35%, 
ctx=52, majf=0, minf=3 00:24:29.606 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.6% 00:24:29.606 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:29.606 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:24:29.606 issued rwts: total=21363,11264,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:29.606 latency : target=0, window=0, percentile=100.00%, depth=128 00:24:29.606 00:24:29.606 Run status group 0 (all jobs): 00:24:29.606 READ: bw=166MiB/s (174MB/s), 166MiB/s-166MiB/s (174MB/s-174MB/s), io=334MiB (350MB), run=2009-2009msec 00:24:29.606 WRITE: bw=98.5MiB/s (103MB/s), 98.5MiB/s-98.5MiB/s (103MB/s-103MB/s), io=176MiB (185MB), run=1787-1787msec 00:24:29.606 16:05:58 nvmf_tcp.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:29.864 16:05:58 nvmf_tcp.nvmf_fio_host -- host/fio.sh@49 -- # '[' 0 -eq 1 ']' 00:24:29.864 16:05:58 nvmf_tcp.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:24:29.864 16:05:58 nvmf_tcp.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:24:29.864 16:05:58 nvmf_tcp.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:24:29.864 16:05:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:29.865 16:05:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@117 -- # sync 00:24:29.865 16:05:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:29.865 16:05:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@120 -- # set +e 00:24:29.865 16:05:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:29.865 16:05:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:29.865 rmmod nvme_tcp 00:24:29.865 rmmod nvme_fabrics 00:24:29.865 rmmod nvme_keyring 00:24:29.865 16:05:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:29.865 16:05:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@124 -- # set -e 00:24:29.865 16:05:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@125 -- # return 0 00:24:29.865 16:05:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@489 -- # '[' -n 3856202 ']' 00:24:29.865 16:05:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@490 -- # killprocess 3856202 00:24:29.865 16:05:58 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@948 -- # '[' -z 3856202 ']' 00:24:29.865 16:05:58 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@952 -- # kill -0 3856202 00:24:29.865 16:05:58 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@953 -- # uname 00:24:29.865 16:05:58 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:29.865 16:05:58 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3856202 00:24:29.865 16:05:58 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:24:29.865 16:05:58 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:24:29.865 16:05:58 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3856202' 00:24:29.865 killing process with pid 3856202 00:24:29.865 16:05:58 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@967 -- # kill 3856202 00:24:29.865 16:05:58 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@972 -- # wait 3856202 00:24:30.123 16:05:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:30.123 16:05:58 nvmf_tcp.nvmf_fio_host -- 
nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:30.123 16:05:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:30.123 16:05:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:30.123 16:05:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:30.123 16:05:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:30.123 16:05:58 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:30.123 16:05:58 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:32.074 16:06:00 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:32.074 00:24:32.074 real 0m15.482s 00:24:32.074 user 0m47.550s 00:24:32.074 sys 0m5.988s 00:24:32.074 16:06:00 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:32.074 16:06:00 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:24:32.074 ************************************ 00:24:32.074 END TEST nvmf_fio_host 00:24:32.074 ************************************ 00:24:32.334 16:06:01 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:24:32.334 16:06:01 nvmf_tcp -- nvmf/nvmf.sh@100 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:24:32.334 16:06:01 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:24:32.334 16:06:01 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:32.334 16:06:01 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:32.334 ************************************ 00:24:32.334 START TEST nvmf_failover 00:24:32.334 ************************************ 00:24:32.334 16:06:01 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:24:32.334 * Looking for test storage... 
00:24:32.334 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:32.334 16:06:01 nvmf_tcp.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:32.334 16:06:01 nvmf_tcp.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:24:32.334 16:06:01 nvmf_tcp.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:32.334 16:06:01 nvmf_tcp.nvmf_failover -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:32.334 16:06:01 nvmf_tcp.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:32.334 16:06:01 nvmf_tcp.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:32.334 16:06:01 nvmf_tcp.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:32.334 16:06:01 nvmf_tcp.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:32.334 16:06:01 nvmf_tcp.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:32.334 16:06:01 nvmf_tcp.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:32.334 16:06:01 nvmf_tcp.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:32.334 16:06:01 nvmf_tcp.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:32.334 16:06:01 nvmf_tcp.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:24:32.334 16:06:01 nvmf_tcp.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:24:32.334 16:06:01 nvmf_tcp.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:32.334 16:06:01 nvmf_tcp.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:32.334 16:06:01 nvmf_tcp.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:32.334 16:06:01 nvmf_tcp.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:32.334 16:06:01 nvmf_tcp.nvmf_failover -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:32.334 16:06:01 nvmf_tcp.nvmf_failover -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:32.334 16:06:01 nvmf_tcp.nvmf_failover -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:32.334 16:06:01 nvmf_tcp.nvmf_failover -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:32.334 16:06:01 nvmf_tcp.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:32.334 16:06:01 nvmf_tcp.nvmf_failover -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:32.334 16:06:01 nvmf_tcp.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:32.334 16:06:01 nvmf_tcp.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:24:32.334 16:06:01 nvmf_tcp.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:32.334 16:06:01 nvmf_tcp.nvmf_failover -- nvmf/common.sh@47 -- # : 0 00:24:32.334 16:06:01 nvmf_tcp.nvmf_failover -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:32.334 16:06:01 nvmf_tcp.nvmf_failover -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:32.334 16:06:01 nvmf_tcp.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:32.334 16:06:01 nvmf_tcp.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:32.334 16:06:01 nvmf_tcp.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:32.334 16:06:01 nvmf_tcp.nvmf_failover -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:32.334 16:06:01 nvmf_tcp.nvmf_failover -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:32.334 16:06:01 nvmf_tcp.nvmf_failover -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:32.334 16:06:01 nvmf_tcp.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:24:32.334 16:06:01 nvmf_tcp.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:24:32.334 16:06:01 nvmf_tcp.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:24:32.334 16:06:01 nvmf_tcp.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:32.334 16:06:01 nvmf_tcp.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:24:32.334 16:06:01 nvmf_tcp.nvmf_failover -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:32.334 16:06:01 nvmf_tcp.nvmf_failover -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:32.334 16:06:01 nvmf_tcp.nvmf_failover -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:32.334 16:06:01 nvmf_tcp.nvmf_failover -- nvmf/common.sh@410 -- # local -g 
is_hw=no 00:24:32.334 16:06:01 nvmf_tcp.nvmf_failover -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:32.334 16:06:01 nvmf_tcp.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:32.334 16:06:01 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:32.334 16:06:01 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:32.334 16:06:01 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:32.334 16:06:01 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:32.334 16:06:01 nvmf_tcp.nvmf_failover -- nvmf/common.sh@285 -- # xtrace_disable 00:24:32.334 16:06:01 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:24:37.649 16:06:06 nvmf_tcp.nvmf_failover -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:37.649 16:06:06 nvmf_tcp.nvmf_failover -- nvmf/common.sh@291 -- # pci_devs=() 00:24:37.649 16:06:06 nvmf_tcp.nvmf_failover -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:37.649 16:06:06 nvmf_tcp.nvmf_failover -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:37.649 16:06:06 nvmf_tcp.nvmf_failover -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:37.649 16:06:06 nvmf_tcp.nvmf_failover -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:37.649 16:06:06 nvmf_tcp.nvmf_failover -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:37.649 16:06:06 nvmf_tcp.nvmf_failover -- nvmf/common.sh@295 -- # net_devs=() 00:24:37.649 16:06:06 nvmf_tcp.nvmf_failover -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:37.649 16:06:06 nvmf_tcp.nvmf_failover -- nvmf/common.sh@296 -- # e810=() 00:24:37.649 16:06:06 nvmf_tcp.nvmf_failover -- nvmf/common.sh@296 -- # local -ga e810 00:24:37.649 16:06:06 nvmf_tcp.nvmf_failover -- nvmf/common.sh@297 -- # x722=() 00:24:37.649 16:06:06 nvmf_tcp.nvmf_failover -- nvmf/common.sh@297 -- # local -ga x722 00:24:37.649 16:06:06 nvmf_tcp.nvmf_failover -- nvmf/common.sh@298 -- # mlx=() 00:24:37.649 16:06:06 nvmf_tcp.nvmf_failover -- nvmf/common.sh@298 -- # local -ga mlx 00:24:37.649 16:06:06 nvmf_tcp.nvmf_failover -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:37.649 16:06:06 nvmf_tcp.nvmf_failover -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:37.649 16:06:06 nvmf_tcp.nvmf_failover -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:37.649 16:06:06 nvmf_tcp.nvmf_failover -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:37.649 16:06:06 nvmf_tcp.nvmf_failover -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:37.649 16:06:06 nvmf_tcp.nvmf_failover -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:37.649 16:06:06 nvmf_tcp.nvmf_failover -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:37.649 16:06:06 nvmf_tcp.nvmf_failover -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:37.649 16:06:06 nvmf_tcp.nvmf_failover -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:37.649 16:06:06 nvmf_tcp.nvmf_failover -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:37.649 16:06:06 nvmf_tcp.nvmf_failover -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:37.649 16:06:06 nvmf_tcp.nvmf_failover -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:37.649 16:06:06 nvmf_tcp.nvmf_failover -- 
nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:37.649 16:06:06 nvmf_tcp.nvmf_failover -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:37.649 16:06:06 nvmf_tcp.nvmf_failover -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:37.649 16:06:06 nvmf_tcp.nvmf_failover -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:37.649 16:06:06 nvmf_tcp.nvmf_failover -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:37.649 16:06:06 nvmf_tcp.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:37.649 16:06:06 nvmf_tcp.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:24:37.649 Found 0000:86:00.0 (0x8086 - 0x159b) 00:24:37.649 16:06:06 nvmf_tcp.nvmf_failover -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:37.649 16:06:06 nvmf_tcp.nvmf_failover -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:37.649 16:06:06 nvmf_tcp.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:37.649 16:06:06 nvmf_tcp.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:37.649 16:06:06 nvmf_tcp.nvmf_failover -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:37.649 16:06:06 nvmf_tcp.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:37.649 16:06:06 nvmf_tcp.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:24:37.649 Found 0000:86:00.1 (0x8086 - 0x159b) 00:24:37.649 16:06:06 nvmf_tcp.nvmf_failover -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:37.649 16:06:06 nvmf_tcp.nvmf_failover -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:37.649 16:06:06 nvmf_tcp.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:37.649 16:06:06 nvmf_tcp.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:37.649 16:06:06 nvmf_tcp.nvmf_failover -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:37.649 16:06:06 nvmf_tcp.nvmf_failover -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:37.649 16:06:06 nvmf_tcp.nvmf_failover -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:37.649 16:06:06 nvmf_tcp.nvmf_failover -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:37.649 16:06:06 nvmf_tcp.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:37.649 16:06:06 nvmf_tcp.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:37.649 16:06:06 nvmf_tcp.nvmf_failover -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:37.649 16:06:06 nvmf_tcp.nvmf_failover -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:37.649 16:06:06 nvmf_tcp.nvmf_failover -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:37.649 16:06:06 nvmf_tcp.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:37.649 16:06:06 nvmf_tcp.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:37.649 16:06:06 nvmf_tcp.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:24:37.649 Found net devices under 0000:86:00.0: cvl_0_0 00:24:37.649 16:06:06 nvmf_tcp.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:37.649 16:06:06 nvmf_tcp.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:37.649 16:06:06 nvmf_tcp.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:37.649 16:06:06 nvmf_tcp.nvmf_failover -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:37.649 16:06:06 nvmf_tcp.nvmf_failover -- nvmf/common.sh@389 -- # for net_dev in 
"${!pci_net_devs[@]}" 00:24:37.649 16:06:06 nvmf_tcp.nvmf_failover -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:37.649 16:06:06 nvmf_tcp.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:37.649 16:06:06 nvmf_tcp.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:37.649 16:06:06 nvmf_tcp.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:24:37.649 Found net devices under 0000:86:00.1: cvl_0_1 00:24:37.649 16:06:06 nvmf_tcp.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:37.649 16:06:06 nvmf_tcp.nvmf_failover -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:37.649 16:06:06 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # is_hw=yes 00:24:37.649 16:06:06 nvmf_tcp.nvmf_failover -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:37.649 16:06:06 nvmf_tcp.nvmf_failover -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:37.649 16:06:06 nvmf_tcp.nvmf_failover -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:37.649 16:06:06 nvmf_tcp.nvmf_failover -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:37.649 16:06:06 nvmf_tcp.nvmf_failover -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:37.649 16:06:06 nvmf_tcp.nvmf_failover -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:37.649 16:06:06 nvmf_tcp.nvmf_failover -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:37.649 16:06:06 nvmf_tcp.nvmf_failover -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:37.649 16:06:06 nvmf_tcp.nvmf_failover -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:37.649 16:06:06 nvmf_tcp.nvmf_failover -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:37.649 16:06:06 nvmf_tcp.nvmf_failover -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:37.649 16:06:06 nvmf_tcp.nvmf_failover -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:37.649 16:06:06 nvmf_tcp.nvmf_failover -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:37.649 16:06:06 nvmf_tcp.nvmf_failover -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:37.649 16:06:06 nvmf_tcp.nvmf_failover -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:37.649 16:06:06 nvmf_tcp.nvmf_failover -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:37.649 16:06:06 nvmf_tcp.nvmf_failover -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:37.649 16:06:06 nvmf_tcp.nvmf_failover -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:37.649 16:06:06 nvmf_tcp.nvmf_failover -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:37.649 16:06:06 nvmf_tcp.nvmf_failover -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:37.649 16:06:06 nvmf_tcp.nvmf_failover -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:37.650 16:06:06 nvmf_tcp.nvmf_failover -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:37.650 16:06:06 nvmf_tcp.nvmf_failover -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:37.650 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:24:37.650 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.176 ms 00:24:37.650 00:24:37.650 --- 10.0.0.2 ping statistics --- 00:24:37.650 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:37.650 rtt min/avg/max/mdev = 0.176/0.176/0.176/0.000 ms 00:24:37.650 16:06:06 nvmf_tcp.nvmf_failover -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:37.650 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:37.650 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.238 ms 00:24:37.650 00:24:37.650 --- 10.0.0.1 ping statistics --- 00:24:37.650 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:37.650 rtt min/avg/max/mdev = 0.238/0.238/0.238/0.000 ms 00:24:37.650 16:06:06 nvmf_tcp.nvmf_failover -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:37.650 16:06:06 nvmf_tcp.nvmf_failover -- nvmf/common.sh@422 -- # return 0 00:24:37.650 16:06:06 nvmf_tcp.nvmf_failover -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:37.650 16:06:06 nvmf_tcp.nvmf_failover -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:37.650 16:06:06 nvmf_tcp.nvmf_failover -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:37.650 16:06:06 nvmf_tcp.nvmf_failover -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:37.650 16:06:06 nvmf_tcp.nvmf_failover -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:37.650 16:06:06 nvmf_tcp.nvmf_failover -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:37.650 16:06:06 nvmf_tcp.nvmf_failover -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:37.650 16:06:06 nvmf_tcp.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:24:37.650 16:06:06 nvmf_tcp.nvmf_failover -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:37.650 16:06:06 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:37.650 16:06:06 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:24:37.650 16:06:06 nvmf_tcp.nvmf_failover -- nvmf/common.sh@481 -- # nvmfpid=3861250 00:24:37.650 16:06:06 nvmf_tcp.nvmf_failover -- nvmf/common.sh@482 -- # waitforlisten 3861250 00:24:37.650 16:06:06 nvmf_tcp.nvmf_failover -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:24:37.650 16:06:06 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 3861250 ']' 00:24:37.650 16:06:06 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:37.650 16:06:06 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:37.650 16:06:06 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:37.650 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:37.650 16:06:06 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:37.650 16:06:06 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:24:37.650 [2024-07-15 16:06:06.568158] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 
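The nvmf_tcp_init sequence traced above is effectively a recipe: the two ports of the detected NIC (cvl_0_0 and cvl_0_1, both under 0000:86:00.x, and evidently wired to each other -- the cross-namespace pings only succeed if they are) are split so that the target-side port lives in a private network namespace, with both sides addressed on 10.0.0.0/24. That lets the SPDK target (10.0.0.2, inside the namespace) and the initiator (10.0.0.1, default namespace) exchange real TCP traffic on a single host; nvmf_tgt itself is then launched under ip netns exec cvl_0_0_ns_spdk, as the nvmfappstart line above shows. A condensed sketch of the same wiring, using only commands that appear in this trace:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                      # target port moves into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                            # initiator side stays in the default namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP traffic in
  ping -c 1 10.0.0.2                                             # sanity check before starting the target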
00:24:37.650 [2024-07-15 16:06:06.568198] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:37.909 EAL: No free 2048 kB hugepages reported on node 1 00:24:37.909 [2024-07-15 16:06:06.626490] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:24:37.909 [2024-07-15 16:06:06.705987] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:37.909 [2024-07-15 16:06:06.706021] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:37.909 [2024-07-15 16:06:06.706028] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:37.909 [2024-07-15 16:06:06.706034] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:37.909 [2024-07-15 16:06:06.706039] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:37.909 [2024-07-15 16:06:06.706153] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:24:37.909 [2024-07-15 16:06:06.706245] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:24:37.909 [2024-07-15 16:06:06.706247] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:38.477 16:06:07 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:38.477 16:06:07 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@862 -- # return 0 00:24:38.477 16:06:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:38.477 16:06:07 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:38.477 16:06:07 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:24:38.736 16:06:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:38.736 16:06:07 nvmf_tcp.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:24:38.736 [2024-07-15 16:06:07.578394] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:38.736 16:06:07 nvmf_tcp.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:24:38.994 Malloc0 00:24:38.994 16:06:07 nvmf_tcp.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:39.253 16:06:07 nvmf_tcp.nvmf_failover -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:39.253 16:06:08 nvmf_tcp.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:39.512 [2024-07-15 16:06:08.327095] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:39.512 16:06:08 nvmf_tcp.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:24:39.770 [2024-07-15 
16:06:08.527692] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:24:39.770 16:06:08 nvmf_tcp.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:24:40.030 [2024-07-15 16:06:08.720324] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:24:40.030 16:06:08 nvmf_tcp.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=3861885 00:24:40.030 16:06:08 nvmf_tcp.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:40.030 16:06:08 nvmf_tcp.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 3861885 /var/tmp/bdevperf.sock 00:24:40.030 16:06:08 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 3861885 ']' 00:24:40.030 16:06:08 nvmf_tcp.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:24:40.030 16:06:08 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:40.030 16:06:08 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:40.030 16:06:08 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:40.030 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:40.030 16:06:08 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:40.030 16:06:08 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:24:40.967 16:06:09 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:40.967 16:06:09 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@862 -- # return 0 00:24:40.967 16:06:09 nvmf_tcp.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:41.226 NVMe0n1 00:24:41.226 16:06:10 nvmf_tcp.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:41.793 00:24:41.793 16:06:10 nvmf_tcp.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=3862350 00:24:41.793 16:06:10 nvmf_tcp.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:24:41.793 16:06:10 nvmf_tcp.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:42.728 16:06:11 nvmf_tcp.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:42.728 [2024-07-15 16:06:11.615491] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20d9080 is same with the state(5) to be set 00:24:42.728 [2024-07-15 16:06:11.615541] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x20d9080 is same with the state(5) to be set 00:24:42.728 [... tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20d9080 is same with the state(5) to be set -- this record repeats some 40 more times (2024-07-15 16:06:11.615549 through 16:06:11.615797) while the 4420 listener is being removed; the identical repetitions are collapsed here ...] 00:24:42.729 [2024-07-15 16:06:11.615803] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20d9080 is same with the
state(5) to be set 00:24:42.729 [2024-07-15 16:06:11.615809] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20d9080 is same with the state(5) to be set 00:24:42.729 [2024-07-15 16:06:11.615815] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20d9080 is same with the state(5) to be set 00:24:42.729 [2024-07-15 16:06:11.615821] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20d9080 is same with the state(5) to be set 00:24:42.729 [2024-07-15 16:06:11.615827] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20d9080 is same with the state(5) to be set 00:24:42.729 16:06:11 nvmf_tcp.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:24:46.015 16:06:14 nvmf_tcp.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:46.273 00:24:46.273 16:06:15 nvmf_tcp.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:24:46.532 16:06:15 nvmf_tcp.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:24:49.821 16:06:18 nvmf_tcp.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:49.821 [2024-07-15 16:06:18.451449] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:49.821 16:06:18 nvmf_tcp.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:24:50.756 16:06:19 nvmf_tcp.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:24:50.756 16:06:19 nvmf_tcp.nvmf_failover -- host/failover.sh@59 -- # wait 3862350 00:24:57.412 0 00:24:57.412 16:06:25 nvmf_tcp.nvmf_failover -- host/failover.sh@61 -- # killprocess 3861885 00:24:57.412 16:06:25 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 3861885 ']' 00:24:57.412 16:06:25 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 3861885 00:24:57.412 16:06:25 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # uname 00:24:57.412 16:06:25 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:57.412 16:06:25 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3861885 00:24:57.412 16:06:25 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:24:57.412 16:06:25 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:24:57.412 16:06:25 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3861885' 00:24:57.412 killing process with pid 3861885 00:24:57.412 16:06:25 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@967 -- # kill 3861885 00:24:57.412 16:06:25 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@972 -- # wait 3861885 00:24:57.413 16:06:25 nvmf_tcp.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:24:57.413 [2024-07-15 16:06:08.794687] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 
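Before the try.txt dump that follows (the bdevperf-side log replayed by the cat above), it is worth condensing what host/failover.sh just drove: one subsystem is exported through three TCP portals, bdevperf holds two paths into it, and the test rotates listeners out from under the active path while verify I/O runs. The RPC sequence, with every address, port, and NQN exactly as traced (rpc.py is scripts/rpc.py from the SPDK tree; the -s /var/tmp/bdevperf.sock calls go to the bdevperf instance, the rest to the target):

  # target: transport, backing bdev, subsystem, namespace, three portals
  rpc.py nvmf_create_transport -t tcp -o -u 8192
  rpc.py bdev_malloc_create 64 512 -b Malloc0
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420   # likewise 4421 and 4422
  # host: two paths into the subsystem, then 15 s of verify I/O via bdevperf
  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  # while I/O runs: pull the active portal, add a third path, keep rotating
  rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422

The bare 0 printed after wait 3862350 above is the I/O run's exit status: the verify workload survived every portal change.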
00:24:57.413 [2024-07-15 16:06:08.794742] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3861885 ] 00:24:57.413 EAL: No free 2048 kB hugepages reported on node 1 00:24:57.413 [2024-07-15 16:06:08.849183] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:57.413 [2024-07-15 16:06:08.924942] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:57.413 Running I/O for 15 seconds... 00:24:57.413 [2024-07-15 16:06:11.616540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:95272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.413 [2024-07-15 16:06:11.616578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.413 [2024-07-15 16:06:11.616595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:95280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.413 [2024-07-15 16:06:11.616603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.413 [2024-07-15 16:06:11.616613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:95288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.413 [2024-07-15 16:06:11.616620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.413 [2024-07-15 16:06:11.616628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:95296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.413 [2024-07-15 16:06:11.616635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.413 [2024-07-15 16:06:11.616643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:95304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.413 [2024-07-15 16:06:11.616650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.413 [2024-07-15 16:06:11.616658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:95312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.413 [2024-07-15 16:06:11.616664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.413 [2024-07-15 16:06:11.616672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:95320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.413 [2024-07-15 16:06:11.616679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.413 [2024-07-15 16:06:11.616687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:95328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.413 [2024-07-15 16:06:11.616694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.413 [2024-07-15 16:06:11.616702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 
lba:95336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.413 [2024-07-15 16:06:11.616708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.413 [... nvme_qpair.c print_command/print_completion pairs collapsed: READ commands (lba 95344 through 95672) and WRITE commands (lba 95680 through 96056) on sqid:1 are each reported and completed ABORTED - SQ DELETION (00/08), then from lba 96064 through 96152 the remaining queued WRITEs are flagged by 579:nvme_qpair_abort_queued_reqs as "aborting queued i/o" and completed manually with the same status; roughly 100 near-identical record pairs (2024-07-15 16:06:11.616716 through 16:06:11.618345) are collapsed here ...] 00:24:57.415 [2024-07-15 16:06:11.618352] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:57.415 [2024-07-15 16:06:11.618356] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:57.415 [2024-07-15 16:06:11.618362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1
cid:0 nsid:1 lba:96160 len:8 PRP1 0x0 PRP2 0x0 00:24:57.415 [2024-07-15 16:06:11.618368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.415 [2024-07-15 16:06:11.618374] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:57.415 [2024-07-15 16:06:11.618379] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:57.415 [2024-07-15 16:06:11.618384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96168 len:8 PRP1 0x0 PRP2 0x0 00:24:57.415 [2024-07-15 16:06:11.618390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.415 [2024-07-15 16:06:11.618397] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:57.415 [2024-07-15 16:06:11.618401] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:57.415 [2024-07-15 16:06:11.618407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96176 len:8 PRP1 0x0 PRP2 0x0 00:24:57.415 [2024-07-15 16:06:11.618413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.415 [2024-07-15 16:06:11.618423] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:57.416 [2024-07-15 16:06:11.618428] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:57.416 [2024-07-15 16:06:11.618433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96184 len:8 PRP1 0x0 PRP2 0x0 00:24:57.416 [2024-07-15 16:06:11.618441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.416 [2024-07-15 16:06:11.618447] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:57.416 [2024-07-15 16:06:11.618452] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:57.416 [2024-07-15 16:06:11.618457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96192 len:8 PRP1 0x0 PRP2 0x0 00:24:57.416 [2024-07-15 16:06:11.618464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.416 [2024-07-15 16:06:11.618471] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:57.416 [2024-07-15 16:06:11.618475] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:57.416 [2024-07-15 16:06:11.618480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96200 len:8 PRP1 0x0 PRP2 0x0 00:24:57.416 [2024-07-15 16:06:11.618488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.416 [2024-07-15 16:06:11.618495] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:57.416 [2024-07-15 16:06:11.618499] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:57.416 [2024-07-15 16:06:11.618505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96208 len:8 PRP1 0x0 PRP2 0x0 
00:24:57.416 [2024-07-15 16:06:11.618511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.416 [2024-07-15 16:06:11.618518] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:57.416 [2024-07-15 16:06:11.618522] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:57.416 [2024-07-15 16:06:11.618528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96216 len:8 PRP1 0x0 PRP2 0x0 00:24:57.416 [2024-07-15 16:06:11.618534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.416 [2024-07-15 16:06:11.618540] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:57.416 [2024-07-15 16:06:11.618545] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:57.416 [2024-07-15 16:06:11.629287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96224 len:8 PRP1 0x0 PRP2 0x0 00:24:57.416 [2024-07-15 16:06:11.629302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.416 [2024-07-15 16:06:11.629313] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:57.416 [2024-07-15 16:06:11.629320] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:57.416 [2024-07-15 16:06:11.629327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96232 len:8 PRP1 0x0 PRP2 0x0 00:24:57.416 [2024-07-15 16:06:11.629335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.416 [2024-07-15 16:06:11.629344] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:57.416 [2024-07-15 16:06:11.629350] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:57.416 [2024-07-15 16:06:11.629357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96240 len:8 PRP1 0x0 PRP2 0x0 00:24:57.416 [2024-07-15 16:06:11.629366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.416 [2024-07-15 16:06:11.629377] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:57.416 [2024-07-15 16:06:11.629384] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:57.416 [2024-07-15 16:06:11.629394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96248 len:8 PRP1 0x0 PRP2 0x0 00:24:57.416 [2024-07-15 16:06:11.629402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.416 [2024-07-15 16:06:11.629412] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:57.416 [2024-07-15 16:06:11.629419] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:57.416 [2024-07-15 16:06:11.629426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96256 len:8 PRP1 0x0 PRP2 0x0 00:24:57.416 [2024-07-15 16:06:11.629435] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.416 [2024-07-15 16:06:11.629443] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:57.416 [2024-07-15 16:06:11.629449] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:57.416 [2024-07-15 16:06:11.629457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96264 len:8 PRP1 0x0 PRP2 0x0 00:24:57.416 [2024-07-15 16:06:11.629468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.416 [2024-07-15 16:06:11.629477] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:57.416 [2024-07-15 16:06:11.629483] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:57.416 [2024-07-15 16:06:11.629491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96272 len:8 PRP1 0x0 PRP2 0x0 00:24:57.416 [2024-07-15 16:06:11.629499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.416 [2024-07-15 16:06:11.629509] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:57.416 [2024-07-15 16:06:11.629516] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:57.416 [2024-07-15 16:06:11.629524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96280 len:8 PRP1 0x0 PRP2 0x0 00:24:57.416 [2024-07-15 16:06:11.629532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.416 [2024-07-15 16:06:11.629540] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:57.416 [2024-07-15 16:06:11.629547] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:57.416 [2024-07-15 16:06:11.629554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96288 len:8 PRP1 0x0 PRP2 0x0 00:24:57.416 [2024-07-15 16:06:11.629563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.416 [2024-07-15 16:06:11.629608] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1740300 was disconnected and freed. reset controller. 
00:24:57.416 [2024-07-15 16:06:11.629620] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421
00:24:57.416 [2024-07-15 16:06:11.629646] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:24:57.416 [2024-07-15 16:06:11.629656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:57.416 [2024-07-15 16:06:11.629666] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:24:57.416 [2024-07-15 16:06:11.629675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:57.416 [2024-07-15 16:06:11.629684] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:24:57.416 [2024-07-15 16:06:11.629698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:57.416 [2024-07-15 16:06:11.629709] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:24:57.416 [2024-07-15 16:06:11.629717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:57.416 [2024-07-15 16:06:11.629726] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.416 [2024-07-15 16:06:11.629769] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1722540 (9): Bad file descriptor
00:24:57.416 [2024-07-15 16:06:11.634413] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.416 [2024-07-15 16:06:11.662913] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:24:57.416 [2024-07-15 16:06:15.247019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:21280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.416 [2024-07-15 16:06:15.247062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.416 [2024-07-15 16:06:15.247076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:21288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.416 [2024-07-15 16:06:15.247084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.416 [2024-07-15 16:06:15.247092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:21296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.416 [2024-07-15 16:06:15.247099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.416 [2024-07-15 16:06:15.247108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:21304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.416 [2024-07-15 16:06:15.247114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.416 [2024-07-15 16:06:15.247122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:21312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.416 [2024-07-15 16:06:15.247130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.416 [2024-07-15 16:06:15.247139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:21320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.416 [2024-07-15 16:06:15.247145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.416 [2024-07-15 16:06:15.247153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:21328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.416 [2024-07-15 16:06:15.247160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.416 [2024-07-15 16:06:15.247168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:21336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.416 [2024-07-15 16:06:15.247174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.416 [2024-07-15 16:06:15.247182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:21344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.416 [2024-07-15 16:06:15.247189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.416 [2024-07-15 16:06:15.247197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:20456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.416 [2024-07-15 16:06:15.247209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.416 [2024-07-15 16:06:15.247218] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:20464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.416 [2024-07-15 16:06:15.247230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.416 [2024-07-15 16:06:15.247239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.416 [2024-07-15 16:06:15.247245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.416 [2024-07-15 16:06:15.247253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.416 [2024-07-15 16:06:15.247260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.417 [2024-07-15 16:06:15.247269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:20488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.417 [2024-07-15 16:06:15.247275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.417 [2024-07-15 16:06:15.247283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:20496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.417 [2024-07-15 16:06:15.247292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.417 [2024-07-15 16:06:15.247300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:20504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.417 [2024-07-15 16:06:15.247308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.417 [2024-07-15 16:06:15.247316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:21352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.417 [2024-07-15 16:06:15.247323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.417 [2024-07-15 16:06:15.247331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:20512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.417 [2024-07-15 16:06:15.247337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.417 [2024-07-15 16:06:15.247345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:20520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.417 [2024-07-15 16:06:15.247352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.417 [2024-07-15 16:06:15.247361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:20528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.417 [2024-07-15 16:06:15.247370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.417 [2024-07-15 16:06:15.247378] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:107 nsid:1 lba:20536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.417 [2024-07-15 16:06:15.247386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.417 [2024-07-15 16:06:15.247394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:20544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.417 [2024-07-15 16:06:15.247401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.417 [2024-07-15 16:06:15.247409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:20552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.417 [2024-07-15 16:06:15.247419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.417 [2024-07-15 16:06:15.247427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:20560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.417 [2024-07-15 16:06:15.247435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.417 [2024-07-15 16:06:15.247442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:20568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.417 [2024-07-15 16:06:15.247450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.417 [2024-07-15 16:06:15.247458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:20576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.417 [2024-07-15 16:06:15.247466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.417 [2024-07-15 16:06:15.247474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.417 [2024-07-15 16:06:15.247481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.417 [2024-07-15 16:06:15.247489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:20592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.417 [2024-07-15 16:06:15.247495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.417 [2024-07-15 16:06:15.247503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:20600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.417 [2024-07-15 16:06:15.247510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.417 [2024-07-15 16:06:15.247518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:20608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.417 [2024-07-15 16:06:15.247524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.417 [2024-07-15 16:06:15.247532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 
lba:20616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.417 [2024-07-15 16:06:15.247538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.417 [2024-07-15 16:06:15.247546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.417 [2024-07-15 16:06:15.247553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.417 [2024-07-15 16:06:15.247561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:20632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.417 [2024-07-15 16:06:15.247567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.417 [2024-07-15 16:06:15.247576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:21360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.417 [2024-07-15 16:06:15.247583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.417 [2024-07-15 16:06:15.247590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:21368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.417 [2024-07-15 16:06:15.247596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.417 [2024-07-15 16:06:15.247606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:21376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.417 [2024-07-15 16:06:15.247612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.417 [2024-07-15 16:06:15.247620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:21384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.417 [2024-07-15 16:06:15.247626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.417 [2024-07-15 16:06:15.247634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:21392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.417 [2024-07-15 16:06:15.247640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.417 [2024-07-15 16:06:15.247648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:21400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.417 [2024-07-15 16:06:15.247654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.417 [2024-07-15 16:06:15.247662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:21408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.417 [2024-07-15 16:06:15.247668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.417 [2024-07-15 16:06:15.247676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:21416 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:24:57.417 [2024-07-15 16:06:15.247683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.417 [2024-07-15 16:06:15.247691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:20640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.417 [2024-07-15 16:06:15.247697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.417 [2024-07-15 16:06:15.247705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:20648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.417 [2024-07-15 16:06:15.247713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.417 [2024-07-15 16:06:15.247722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:20656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.417 [2024-07-15 16:06:15.247728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.417 [2024-07-15 16:06:15.247736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:20664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.417 [2024-07-15 16:06:15.247743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.417 [2024-07-15 16:06:15.247751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:20672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.417 [2024-07-15 16:06:15.247757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.417 [2024-07-15 16:06:15.247766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:20680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.417 [2024-07-15 16:06:15.247772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.417 [2024-07-15 16:06:15.247780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:20688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.418 [2024-07-15 16:06:15.247788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.418 [2024-07-15 16:06:15.247796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:20696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.418 [2024-07-15 16:06:15.247803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.418 [2024-07-15 16:06:15.247812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:20704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.418 [2024-07-15 16:06:15.247819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.418 [2024-07-15 16:06:15.247826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:21424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.418 [2024-07-15 
16:06:15.247833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.418 [2024-07-15 16:06:15.247840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:21432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.418 [2024-07-15 16:06:15.247847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.418 [2024-07-15 16:06:15.247855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:21440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.418 [2024-07-15 16:06:15.247862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.418 [2024-07-15 16:06:15.247870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:21448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.418 [2024-07-15 16:06:15.247875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.418 [2024-07-15 16:06:15.247883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:21456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.418 [2024-07-15 16:06:15.247889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.418 [2024-07-15 16:06:15.247897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:21464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.418 [2024-07-15 16:06:15.247903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.418 [2024-07-15 16:06:15.247911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.418 [2024-07-15 16:06:15.247917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.418 [2024-07-15 16:06:15.247925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:20720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.418 [2024-07-15 16:06:15.247932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.418 [2024-07-15 16:06:15.247940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:20728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.418 [2024-07-15 16:06:15.247947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.418 [2024-07-15 16:06:15.247954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.418 [2024-07-15 16:06:15.247961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.418 [2024-07-15 16:06:15.247970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:20744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.418 [2024-07-15 16:06:15.247978] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.418 [2024-07-15 16:06:15.247985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:20752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.418 [2024-07-15 16:06:15.247992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.418 [2024-07-15 16:06:15.247999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.418 [2024-07-15 16:06:15.248005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.418 [2024-07-15 16:06:15.248013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.418 [2024-07-15 16:06:15.248019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.418 [2024-07-15 16:06:15.248027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:20776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.418 [2024-07-15 16:06:15.248034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.418 [2024-07-15 16:06:15.248042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:20784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.418 [2024-07-15 16:06:15.248048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.418 [2024-07-15 16:06:15.248056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:20792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.418 [2024-07-15 16:06:15.248062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.418 [2024-07-15 16:06:15.248070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.418 [2024-07-15 16:06:15.248077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.418 [2024-07-15 16:06:15.248085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:20808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.418 [2024-07-15 16:06:15.248091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.418 [2024-07-15 16:06:15.248099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:20816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.418 [2024-07-15 16:06:15.248105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.418 [2024-07-15 16:06:15.248113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:20824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.418 [2024-07-15 16:06:15.248119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.418 [2024-07-15 16:06:15.248128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:21472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:57.418 [2024-07-15 16:06:15.248135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.418 [2024-07-15 16:06:15.248143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:20832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.418 [2024-07-15 16:06:15.248150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.418 [2024-07-15 16:06:15.248158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:20840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.418 [2024-07-15 16:06:15.248165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.418 [2024-07-15 16:06:15.248172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:20848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.418 [2024-07-15 16:06:15.248179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.418 [2024-07-15 16:06:15.248188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:20856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.418 [2024-07-15 16:06:15.248194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.418 [2024-07-15 16:06:15.248202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:20864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.418 [2024-07-15 16:06:15.248208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.418 [2024-07-15 16:06:15.248215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:20872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.418 [2024-07-15 16:06:15.248222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.418 [2024-07-15 16:06:15.248233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:20880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.418 [2024-07-15 16:06:15.248240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.418 [2024-07-15 16:06:15.248249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:20888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.418 [2024-07-15 16:06:15.248255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.418 [2024-07-15 16:06:15.248263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:20896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.418 [2024-07-15 16:06:15.248270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.418 [2024-07-15 16:06:15.248278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:20904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.418 [2024-07-15 16:06:15.248285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.418 [2024-07-15 16:06:15.248293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:20912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.418 [2024-07-15 16:06:15.248299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.418 [2024-07-15 16:06:15.248308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.418 [2024-07-15 16:06:15.248314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.418 [2024-07-15 16:06:15.248322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:20928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.418 [2024-07-15 16:06:15.248328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.418 [2024-07-15 16:06:15.248343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:20936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.418 [2024-07-15 16:06:15.248352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.418 [2024-07-15 16:06:15.248360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:20944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.418 [2024-07-15 16:06:15.248367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.418 [2024-07-15 16:06:15.248375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:20952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.418 [2024-07-15 16:06:15.248381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.418 [2024-07-15 16:06:15.248389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:20960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.418 [2024-07-15 16:06:15.248395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.418 [2024-07-15 16:06:15.248405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:20968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.418 [2024-07-15 16:06:15.248412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.419 [2024-07-15 16:06:15.248420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:20976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.419 [2024-07-15 16:06:15.248426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:24:57.419 [2024-07-15 16:06:15.248434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:20984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.419 [2024-07-15 16:06:15.248440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.419 [2024-07-15 16:06:15.248448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:20992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.419 [2024-07-15 16:06:15.248455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.419 [2024-07-15 16:06:15.248463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:21000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.419 [2024-07-15 16:06:15.248470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.419 [2024-07-15 16:06:15.248477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:21008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.419 [2024-07-15 16:06:15.248484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.419 [2024-07-15 16:06:15.248491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:21016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.419 [2024-07-15 16:06:15.248497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.419 [2024-07-15 16:06:15.248505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:21024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.419 [2024-07-15 16:06:15.248512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.419 [2024-07-15 16:06:15.248520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.419 [2024-07-15 16:06:15.248527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.419 [2024-07-15 16:06:15.248538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.419 [2024-07-15 16:06:15.248545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.419 [2024-07-15 16:06:15.248553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:21048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.419 [2024-07-15 16:06:15.248559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.419 [2024-07-15 16:06:15.248568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:21056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.419 [2024-07-15 16:06:15.248574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.419 [2024-07-15 16:06:15.248582] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:21064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:57.419 [2024-07-15 16:06:15.248589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the same READ command/completion pair repeats for lba:21072 through lba:21264, every queued command aborted with SQ DELETION (00/08) ...]
00:24:57.419 [2024-07-15 16:06:15.248961] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18ed380 is same with the state(5) to be set
00:24:57.419 [2024-07-15 16:06:15.248968] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:24:57.419 [2024-07-15 16:06:15.248973] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:24:57.419 [2024-07-15 16:06:15.248979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21272 len:8 PRP1 0x0 PRP2 0x0
00:24:57.419 [2024-07-15 16:06:15.248988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:57.419 [2024-07-15 16:06:15.249029] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x18ed380 was disconnected and freed. reset controller.
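The drain above ends with bdev_nvme's disconnected-qpair callback freeing the TCP qpair and scheduling a controller reset. For readers following along in the SPDK API, a minimal sketch of the poll-group hook that delivers this notification follows; on_disconnected_qpair, poll_group_tick, and the needs_reset flag are illustrative names, while spdk_nvme_poll_group_process_completions and its callback signature are the real driver interface.

#include <stdbool.h>
#include "spdk/nvme.h"

/* Illustrative analogue of bdev_nvme_disconnected_qpair_cb from the log:
 * the poll group reports qpairs whose transport connection has died
 * (cf. the "Bad file descriptor" flush error just below). Queued I/O on
 * such a qpair has already been completed back as
 * ABORTED - SQ DELETION (00/08). */
static void
on_disconnected_qpair(struct spdk_nvme_qpair *qpair, void *poll_group_ctx)
{
	bool *needs_reset = poll_group_ctx;

	(void)qpair;
	/* Defer qpair teardown and the controller reset to the main loop. */
	*needs_reset = true;
}

/* Reactor-loop tick: process completions on all qpairs in the group;
 * disconnected qpairs are reported through the callback above.
 * A limit of 0 means "no per-qpair completion cap". */
static int64_t
poll_group_tick(struct spdk_nvme_poll_group *group)
{
	return spdk_nvme_poll_group_process_completions(group, 0,
							on_disconnected_qpair);
}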
00:24:57.419 [2024-07-15 16:06:15.249039] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422
00:24:57.419 [2024-07-15 16:06:15.249060] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:24:57.419 [2024-07-15 16:06:15.249068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:57.419 [2024-07-15 16:06:15.249075] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:24:57.419 [2024-07-15 16:06:15.249081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:57.419 [2024-07-15 16:06:15.249088] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:24:57.419 [2024-07-15 16:06:15.249095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:57.420 [2024-07-15 16:06:15.249102] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:24:57.420 [2024-07-15 16:06:15.249108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:57.420 [2024-07-15 16:06:15.249114] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.420 [2024-07-15 16:06:15.252460] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.420 [2024-07-15 16:06:15.252490] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1722540 (9): Bad file descriptor
00:24:57.420 [2024-07-15 16:06:15.286357] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
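Each ABORTED - SQ DELETION (00/08) pair in this log is the driver completing a queued command back to its submitter while the submission queue is deleted, rather than silently dropping it; note dnr:0 (Do Not Retry clear) on every completion, so the I/O may legally be resubmitted once the reset finishes. A sketch of how such a completion looks from an I/O callback follows, assuming a caller-defined io_ctx; the spdk_nvme_cpl fields, accessor, and status codes are from SPDK's public headers.

#include <stdbool.h>
#include "spdk/nvme.h"

/* Caller-defined per-I/O context (illustrative, not part of SPDK). */
struct io_ctx {
	bool retry;
};

/* Completion callback of the kind passed to spdk_nvme_ns_cmd_read()/
 * _write(). The spdk_nvme_print_completion lines above are the driver
 * printing exactly these cpl fields. */
static void
io_complete(void *arg, const struct spdk_nvme_cpl *cpl)
{
	struct io_ctx *ctx = arg;

	if (spdk_nvme_cpl_is_error(cpl) &&
	    cpl->status.sct == SPDK_NVME_SCT_GENERIC &&
	    cpl->status.sc == SPDK_NVME_SC_ABORTED_SQ_DELETION &&
	    !cpl->status.dnr) {
		/* (00/08) with DNR clear: the qpair went away mid-flight
		 * and the media was never touched; resubmit after reset. */
		ctx->retry = true;
		return;
	}
	ctx->retry = false;
}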
00:24:57.420 [2024-07-15 16:06:19.656018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:16816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:57.420 [2024-07-15 16:06:19.656066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the same READ command/completion pair repeats for lba:16824 through lba:16936, every queued command aborted with SQ DELETION (00/08) ...]
00:24:57.420 [2024-07-15 16:06:19.656317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:16952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:57.420 [2024-07-15 16:06:19.656323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the same WRITE command/completion pair repeats for lba:16960 through lba:17328, every queued command aborted with SQ DELETION (00/08) ...]
00:24:57.421 [2024-07-15 16:06:19.657044] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:24:57.421 [2024-07-15 16:06:19.657051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17336 len:8 PRP1 0x0 PRP2 0x0
00:24:57.421 [2024-07-15 16:06:19.657058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the sequence "aborting queued i/o" / "Command completed manually:" / WRITE sqid:1 cid:0 PRP1 0x0 PRP2 0x0 / "ABORTED - SQ DELETION (00/08)" repeats for lba:17344 through lba:17800 ...]
00:24:57.424 [2024-07-15 16:06:19.668924] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*:
aborting queued i/o 00:24:57.424 [2024-07-15 16:06:19.668932] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:57.424 [2024-07-15 16:06:19.668939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17808 len:8 PRP1 0x0 PRP2 0x0 00:24:57.424 [2024-07-15 16:06:19.668947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.424 [2024-07-15 16:06:19.668955] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:57.424 [2024-07-15 16:06:19.668961] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:57.424 [2024-07-15 16:06:19.668969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17816 len:8 PRP1 0x0 PRP2 0x0 00:24:57.424 [2024-07-15 16:06:19.668978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.424 [2024-07-15 16:06:19.668985] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:57.424 [2024-07-15 16:06:19.668991] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:57.424 [2024-07-15 16:06:19.668999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17824 len:8 PRP1 0x0 PRP2 0x0 00:24:57.424 [2024-07-15 16:06:19.669008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.424 [2024-07-15 16:06:19.669016] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:57.424 [2024-07-15 16:06:19.669022] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:57.424 [2024-07-15 16:06:19.669030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17832 len:8 PRP1 0x0 PRP2 0x0 00:24:57.424 [2024-07-15 16:06:19.669038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.424 [2024-07-15 16:06:19.669046] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:57.424 [2024-07-15 16:06:19.669053] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:57.424 [2024-07-15 16:06:19.669060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16944 len:8 PRP1 0x0 PRP2 0x0 00:24:57.424 [2024-07-15 16:06:19.669067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.424 [2024-07-15 16:06:19.669113] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x18ed170 was disconnected and freed. reset controller. 
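The collapsed span above is the expected failure signature of a TCP failover: deleting the submission queue on the failing path causes every queued WRITE to be completed manually with ABORTED - SQ DELETION before the qpair is freed and the controller is reset on the surviving path. To quantify such a storm from the captured bdevperf output (try.txt, whose full path appears in the trace further down), a one-line sketch:

    grep -c 'ABORTED - SQ DELETION' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt   # number of aborted commands in this run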
00:24:57.424 [2024-07-15 16:06:19.669124] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:24:57.424 [2024-07-15 16:06:19.669148] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:57.424 [2024-07-15 16:06:19.669158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.424 [2024-07-15 16:06:19.669167] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:57.424 [2024-07-15 16:06:19.669175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.424 [2024-07-15 16:06:19.669184] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:57.424 [2024-07-15 16:06:19.669192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.424 [2024-07-15 16:06:19.669201] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:57.424 [2024-07-15 16:06:19.669209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.424 [2024-07-15 16:06:19.669217] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:57.424 [2024-07-15 16:06:19.669256] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1722540 (9): Bad file descriptor 00:24:57.424 [2024-07-15 16:06:19.673057] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:57.424 [2024-07-15 16:06:19.707078] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
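This failover (10.0.0.2:4422 back to 10.0.0.2:4420) is possible because the test registered three listeners for cnode1 and attached all three as paths of the same NVMe0 controller; the exact rpc.py calls appear in the trace below. Condensed into a sketch (rpc.py abbreviates the full scripts/rpc.py path; the loop is a restructuring, not the verbatim failover.sh):

    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
    # every attach with the same -b NVMe0 but a new trid adds a failover path
    for port in 4420 4421 4422; do
      rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
        -t tcp -a 10.0.0.2 -s "$port" -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    done
    # dropping the active path forces bdev_nvme to fail over to the next one
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 \
      -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1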
00:24:57.424
00:24:57.424 Latency(us)
00:24:57.424 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:24:57.424 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:24:57.424 Verification LBA range: start 0x0 length 0x4000
00:24:57.424 NVMe0n1 : 15.00 10806.69 42.21 266.93 0.00 11536.61 598.37 21199.47
00:24:57.424 ===================================================================================================================
00:24:57.424 Total : 10806.69 42.21 266.93 0.00 11536.61 598.37 21199.47
00:24:57.424 Received shutdown signal, test time was about 15.000000 seconds
00:24:57.424
00:24:57.424 Latency(us)
00:24:57.424 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:24:57.424 ===================================================================================================================
00:24:57.424 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
16:06:25 nvmf_tcp.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful'
00:24:57.424 16:06:25 nvmf_tcp.nvmf_failover -- host/failover.sh@65 -- # count=3
00:24:57.424 16:06:25 nvmf_tcp.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 ))
00:24:57.424 16:06:25 nvmf_tcp.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=3864879
00:24:57.424 16:06:25 nvmf_tcp.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f
00:24:57.424 16:06:25 nvmf_tcp.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 3864879 /var/tmp/bdevperf.sock
00:24:57.424 16:06:25 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 3864879 ']'
00:24:57.424 16:06:25 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:24:57.424 16:06:25 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100
00:24:57.424 16:06:25 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
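The @65/@67 lines above are the test's pass criterion: the bdevperf output captured in try.txt must contain exactly three 'Resetting controller successful' messages, one per detach/reset cycle of the 15-second run. A minimal sketch of that check (path shortened to the repo-relative form):

    count=$(grep -c 'Resetting controller successful' test/nvmf/host/try.txt)
    (( count == 3 )) || exit 1   # any other count means a failover leg did not complete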
00:24:57.424 16:06:25 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:57.424 16:06:25 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:24:57.992 16:06:26 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:57.992 16:06:26 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@862 -- # return 0 00:24:57.992 16:06:26 nvmf_tcp.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:24:57.992 [2024-07-15 16:06:26.860248] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:24:57.992 16:06:26 nvmf_tcp.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:24:58.252 [2024-07-15 16:06:27.052812] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:24:58.252 16:06:27 nvmf_tcp.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:58.820 NVMe0n1 00:24:58.820 16:06:27 nvmf_tcp.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:58.820 00:24:58.820 16:06:27 nvmf_tcp.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:59.079 00:24:59.079 16:06:27 nvmf_tcp.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:24:59.079 16:06:27 nvmf_tcp.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:24:59.338 16:06:28 nvmf_tcp.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:59.597 16:06:28 nvmf_tcp.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:25:02.889 16:06:31 nvmf_tcp.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:25:02.889 16:06:31 nvmf_tcp.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:25:02.889 16:06:31 nvmf_tcp.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=3865803 00:25:02.889 16:06:31 nvmf_tcp.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:25:02.889 16:06:31 nvmf_tcp.nvmf_failover -- host/failover.sh@92 -- # wait 3865803 00:25:03.824 0 00:25:03.824 16:06:32 nvmf_tcp.nvmf_failover -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:25:03.824 [2024-07-15 16:06:25.880796] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 
00:25:03.824 [2024-07-15 16:06:25.880844] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3864879 ] 00:25:03.824 EAL: No free 2048 kB hugepages reported on node 1 00:25:03.824 [2024-07-15 16:06:25.935475] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:03.824 [2024-07-15 16:06:26.004580] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:03.824 [2024-07-15 16:06:28.332154] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:25:03.824 [2024-07-15 16:06:28.332199] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:03.824 [2024-07-15 16:06:28.332211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.824 [2024-07-15 16:06:28.332219] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:03.824 [2024-07-15 16:06:28.332230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.824 [2024-07-15 16:06:28.332238] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:03.824 [2024-07-15 16:06:28.332245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.824 [2024-07-15 16:06:28.332252] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:03.824 [2024-07-15 16:06:28.332259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:03.824 [2024-07-15 16:06:28.332271] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:03.824 [2024-07-15 16:06:28.332296] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:03.824 [2024-07-15 16:06:28.332310] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e5e540 (9): Bad file descriptor 00:25:03.824 [2024-07-15 16:06:28.337490] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:25:03.824 Running I/O for 1 seconds... 
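The run above also shows the driving pattern for bdevperf's wait-for-RPC mode: '-z' keeps the app idle on /var/tmp/bdevperf.sock until a controller is attached over that socket and bdevperf.py kicks off the workload. Condensed from the trace into a sketch (paths shortened to the repo-relative form):

    build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f &
    bdevperf_pid=$!
    scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
      -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests   # the bare '0' in the trace is its result
    wait "$bdevperf_pid"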
00:25:03.824
00:25:03.824 Latency(us)
00:25:03.824 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:25:03.824 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:25:03.824 Verification LBA range: start 0x0 length 0x4000
00:25:03.824 NVMe0n1 : 1.01 10915.27 42.64 0.00 0.00 11680.16 2535.96 9346.00
00:25:03.824 ===================================================================================================================
00:25:03.824 Total : 10915.27 42.64 0.00 0.00 11680.16 2535.96 9346.00
00:25:03.824 16:06:32 nvmf_tcp.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
16:06:32 nvmf_tcp.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0
00:25:04.083 16:06:32 nvmf_tcp.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:25:04.342 16:06:33 nvmf_tcp.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
16:06:33 nvmf_tcp.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0
00:25:04.343 16:06:33 nvmf_tcp.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:25:04.602 16:06:33 nvmf_tcp.nvmf_failover -- host/failover.sh@101 -- # sleep 3
00:25:07.889 16:06:36 nvmf_tcp.nvmf_failover -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
16:06:36 nvmf_tcp.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0
00:25:07.889 16:06:36 nvmf_tcp.nvmf_failover -- host/failover.sh@108 -- # killprocess 3864879
00:25:07.889 16:06:36 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 3864879 ']'
00:25:07.889 16:06:36 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 3864879
00:25:07.889 16:06:36 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # uname
00:25:07.889 16:06:36 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:25:07.889 16:06:36 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3864879
00:25:07.889 16:06:36 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:25:07.889 16:06:36 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:25:07.889 16:06:36 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3864879'
killing process with pid 3864879
00:25:07.889 16:06:36 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@967 -- # kill 3864879
00:25:07.889 16:06:36 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@972 -- # wait 3864879
00:25:08.148 16:06:36 nvmf_tcp.nvmf_failover -- host/failover.sh@110 -- # sync
00:25:08.148 16:06:36 nvmf_tcp.nvmf_failover -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:25:08.148 16:06:37 nvmf_tcp.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT
00:25:08.148
16:06:37 nvmf_tcp.nvmf_failover -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:25:08.148 16:06:37 nvmf_tcp.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:25:08.148 16:06:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@488 -- # nvmfcleanup 00:25:08.148 16:06:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@117 -- # sync 00:25:08.148 16:06:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:08.148 16:06:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@120 -- # set +e 00:25:08.148 16:06:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:08.148 16:06:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:08.148 rmmod nvme_tcp 00:25:08.148 rmmod nvme_fabrics 00:25:08.148 rmmod nvme_keyring 00:25:08.407 16:06:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:08.407 16:06:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@124 -- # set -e 00:25:08.407 16:06:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@125 -- # return 0 00:25:08.407 16:06:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@489 -- # '[' -n 3861250 ']' 00:25:08.407 16:06:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@490 -- # killprocess 3861250 00:25:08.407 16:06:37 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 3861250 ']' 00:25:08.407 16:06:37 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 3861250 00:25:08.407 16:06:37 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # uname 00:25:08.407 16:06:37 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:08.407 16:06:37 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3861250 00:25:08.407 16:06:37 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:25:08.407 16:06:37 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:25:08.407 16:06:37 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3861250' 00:25:08.407 killing process with pid 3861250 00:25:08.407 16:06:37 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@967 -- # kill 3861250 00:25:08.407 16:06:37 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@972 -- # wait 3861250 00:25:08.665 16:06:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:25:08.665 16:06:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:25:08.665 16:06:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:25:08.665 16:06:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:08.665 16:06:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:08.665 16:06:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:08.665 16:06:37 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:08.665 16:06:37 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:10.669 16:06:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:25:10.669 00:25:10.669 real 0m38.352s 00:25:10.669 user 2m4.249s 00:25:10.669 sys 0m7.370s 00:25:10.669 16:06:39 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@1124 -- # xtrace_disable 00:25:10.669 16:06:39 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 
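killprocess, used above for both the bdevperf app (pid 3864879) and the nvmf target (pid 3861250), is visible step by step in the @948 to @972 trace lines. Reconstructed from those traced commands as a sketch (not the verbatim autotest_common.sh helper; the sudo branch in particular is an assumption):

    killprocess() {
      local pid=$1 process_name
      [ -n "$pid" ] || return 1                          # @948: require a pid argument
      kill -0 "$pid" || return 1                         # @952: bail out if it already exited
      if [ "$(uname)" = Linux ]; then                    # @953
        process_name=$(ps --no-headers -o comm= "$pid")  # @954: reactor_0 / reactor_1 in the runs above
      fi
      [ "$process_name" = sudo ] && return 1             # @958: assumed guard; the real helper treats sudo wrappers specially
      echo "killing process with pid $pid"               # @966
      kill "$pid"                                        # @967
      wait "$pid"                                        # @972: reap it and propagate the exit status
    }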
00:25:10.669 ************************************ 00:25:10.669 END TEST nvmf_failover 00:25:10.669 ************************************ 00:25:10.669 16:06:39 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:25:10.669 16:06:39 nvmf_tcp -- nvmf/nvmf.sh@101 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:25:10.669 16:06:39 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:25:10.669 16:06:39 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:10.669 16:06:39 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:10.669 ************************************ 00:25:10.669 START TEST nvmf_host_discovery 00:25:10.669 ************************************ 00:25:10.669 16:06:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:25:10.669 * Looking for test storage... 00:25:10.669 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:10.669 16:06:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:10.669 16:06:39 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:25:10.669 16:06:39 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:10.669 16:06:39 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:10.669 16:06:39 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:10.669 16:06:39 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:10.669 16:06:39 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:10.669 16:06:39 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:10.669 16:06:39 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:10.669 16:06:39 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:10.669 16:06:39 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:10.669 16:06:39 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:10.669 16:06:39 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:25:10.669 16:06:39 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:25:10.669 16:06:39 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:10.669 16:06:39 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:10.669 16:06:39 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:10.669 16:06:39 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:10.669 16:06:39 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:10.669 16:06:39 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:10.669 16:06:39 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:10.669 16:06:39 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@517 
-- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:25:10.670 16:06:39 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:[same toolchain triplet repeated several more times; duplicates elided]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:25:10.670 16:06:39 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:[same value re-prepended; elided]
00:25:10.670 16:06:39 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:[same value re-prepended; elided]
00:25:10.670 16:06:39 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH
00:25:10.670 16:06:39 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@6 -- # echo [the PATH value above; elided]
00:25:10.670 16:06:39 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@47 -- # : 0
00:25:10.670 16:06:39 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID
00:25:10.670 16:06:39 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args
00:25:10.670 16:06:39 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:25:10.670 16:06:39 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:25:10.670 16:06:39 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:25:10.670 16:06:39 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']'
00:25:10.670 16:06:39 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']'
00:25:10.670 16:06:39 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0
00:25:10.670 16:06:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']'
00:25:10.670 16:06:39
nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:25:10.670 16:06:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:25:10.670 16:06:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:25:10.670 16:06:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:25:10.670 16:06:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:25:10.670 16:06:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:25:10.670 16:06:39 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:25:10.670 16:06:39 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:10.670 16:06:39 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@448 -- # prepare_net_devs 00:25:10.670 16:06:39 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:25:10.670 16:06:39 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:25:10.670 16:06:39 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:10.670 16:06:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:10.670 16:06:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:10.670 16:06:39 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:25:10.670 16:06:39 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:25:10.670 16:06:39 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@285 -- # xtrace_disable 00:25:10.670 16:06:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:15.944 16:06:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:15.944 16:06:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@291 -- # pci_devs=() 00:25:15.944 16:06:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:15.944 16:06:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:15.944 16:06:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:15.944 16:06:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:15.944 16:06:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:15.944 16:06:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@295 -- # net_devs=() 00:25:15.944 16:06:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:15.944 16:06:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@296 -- # e810=() 00:25:15.944 16:06:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@296 -- # local -ga e810 00:25:15.944 16:06:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@297 -- # x722=() 00:25:15.944 16:06:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@297 -- # local -ga x722 00:25:15.944 16:06:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@298 -- # mlx=() 00:25:15.944 16:06:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@298 -- # local -ga mlx 00:25:15.944 16:06:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:15.944 16:06:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:15.944 16:06:44 
nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:15.944 16:06:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:15.944 16:06:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:15.944 16:06:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:15.944 16:06:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:15.944 16:06:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:15.944 16:06:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:15.944 16:06:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:15.944 16:06:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:15.944 16:06:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:15.944 16:06:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:25:15.944 16:06:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:25:15.944 16:06:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:25:15.944 16:06:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:25:15.944 16:06:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:15.944 16:06:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:15.944 16:06:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:25:15.944 Found 0000:86:00.0 (0x8086 - 0x159b) 00:25:15.945 16:06:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:15.945 16:06:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:15.945 16:06:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:15.945 16:06:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:15.945 16:06:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:15.945 16:06:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:15.945 16:06:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:25:15.945 Found 0000:86:00.1 (0x8086 - 0x159b) 00:25:15.945 16:06:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:15.945 16:06:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:15.945 16:06:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:15.945 16:06:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:15.945 16:06:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:15.945 16:06:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:15.945 16:06:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:25:15.945 16:06:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:25:15.945 16:06:44 
nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:15.945 16:06:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:15.945 16:06:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:15.945 16:06:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:15.945 16:06:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:15.945 16:06:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:15.945 16:06:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:15.945 16:06:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:25:15.945 Found net devices under 0000:86:00.0: cvl_0_0 00:25:15.945 16:06:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:15.945 16:06:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:15.945 16:06:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:15.945 16:06:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:15.945 16:06:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:15.945 16:06:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:15.945 16:06:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:15.945 16:06:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:15.945 16:06:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:25:15.945 Found net devices under 0000:86:00.1: cvl_0_1 00:25:15.945 16:06:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:15.945 16:06:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:25:15.945 16:06:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # is_hw=yes 00:25:15.945 16:06:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:25:15.945 16:06:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:25:15.945 16:06:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:25:15.945 16:06:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:15.945 16:06:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:15.945 16:06:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:15.945 16:06:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:25:15.945 16:06:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:15.945 16:06:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:15.945 16:06:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:25:15.945 16:06:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:15.945 16:06:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:15.945 16:06:44 
nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:25:15.945 16:06:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:25:15.945 16:06:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:25:15.945 16:06:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:15.945 16:06:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:15.945 16:06:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:15.945 16:06:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:25:15.945 16:06:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:15.945 16:06:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:15.945 16:06:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:15.945 16:06:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:25:15.945 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:15.945 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.177 ms 00:25:15.945 00:25:15.945 --- 10.0.0.2 ping statistics --- 00:25:15.945 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:15.945 rtt min/avg/max/mdev = 0.177/0.177/0.177/0.000 ms 00:25:15.945 16:06:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:15.945 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:15.945 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.101 ms 00:25:15.945 00:25:15.945 --- 10.0.0.1 ping statistics --- 00:25:15.945 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:15.945 rtt min/avg/max/mdev = 0.101/0.101/0.101/0.000 ms 00:25:15.945 16:06:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:15.945 16:06:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@422 -- # return 0 00:25:15.945 16:06:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:25:15.945 16:06:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:15.945 16:06:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:25:15.945 16:06:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:25:15.945 16:06:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:15.945 16:06:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:25:15.945 16:06:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:25:15.945 16:06:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:25:15.945 16:06:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:25:15.945 16:06:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:15.945 16:06:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:15.945 16:06:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@481 -- # nvmfpid=3870019 00:25:15.945 16:06:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@482 -- # waitforlisten 
3870019 00:25:15.945 16:06:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@829 -- # '[' -z 3870019 ']' 00:25:15.945 16:06:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:15.945 16:06:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:15.945 16:06:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:15.945 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:15.945 16:06:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:15.945 16:06:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:15.945 16:06:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:25:15.946 [2024-07-15 16:06:44.501988] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 00:25:15.946 [2024-07-15 16:06:44.502032] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:15.946 EAL: No free 2048 kB hugepages reported on node 1 00:25:15.946 [2024-07-15 16:06:44.559543] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:15.946 [2024-07-15 16:06:44.638157] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:15.946 [2024-07-15 16:06:44.638190] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:15.946 [2024-07-15 16:06:44.638197] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:15.946 [2024-07-15 16:06:44.638204] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:15.946 [2024-07-15 16:06:44.638209] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
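The @244 to @268 lines above build the test topology, and @480 launches the target inside it: the first e810 port (cvl_0_0, 10.0.0.2) is moved into a private network namespace to act as the NVMe-oF target, while the second (cvl_0_1, 10.0.0.1) stays in the root namespace as the initiator. Condensed into a sketch (nvmf_tgt path shortened; the harness then waits for the RPC socket at /var/tmp/spdk.sock):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP traffic back in
    ping -c 1 10.0.0.2                                             # root ns -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1               # target ns -> initiator
    ip netns exec cvl_0_0_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &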
00:25:15.946 [2024-07-15 16:06:44.638231] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:25:16.515 16:06:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:16.515 16:06:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@862 -- # return 0 00:25:16.515 16:06:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:25:16.515 16:06:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:16.515 16:06:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:16.515 16:06:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:16.515 16:06:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:16.515 16:06:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:16.515 16:06:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:16.515 [2024-07-15 16:06:45.337538] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:16.515 16:06:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:16.515 16:06:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:25:16.515 16:06:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:16.515 16:06:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:16.515 [2024-07-15 16:06:45.345649] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:25:16.515 16:06:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:16.515 16:06:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:25:16.515 16:06:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:16.515 16:06:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:16.515 null0 00:25:16.515 16:06:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:16.515 16:06:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:25:16.515 16:06:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:16.515 16:06:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:16.515 null1 00:25:16.515 16:06:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:16.515 16:06:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:25:16.515 16:06:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:16.515 16:06:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:16.515 16:06:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:16.515 16:06:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=3870253 00:25:16.515 16:06:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 3870253 /tmp/host.sock 00:25:16.515 16:06:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@829 -- # '[' -z 3870253 ']' 00:25:16.515 16:06:45 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/tmp/host.sock 00:25:16.515 16:06:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:16.515 16:06:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:25:16.515 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:25:16.515 16:06:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:25:16.515 16:06:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:16.515 16:06:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:16.515 [2024-07-15 16:06:45.419252] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 00:25:16.515 [2024-07-15 16:06:45.419296] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3870253 ] 00:25:16.515 EAL: No free 2048 kB hugepages reported on node 1 00:25:16.774 [2024-07-15 16:06:45.472821] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:16.774 [2024-07-15 16:06:45.552691] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:17.343 16:06:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:17.343 16:06:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@862 -- # return 0 00:25:17.343 16:06:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:17.343 16:06:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:25:17.343 16:06:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:17.343 16:06:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:17.343 16:06:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:17.343 16:06:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:25:17.343 16:06:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:17.343 16:06:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:17.343 16:06:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:17.343 16:06:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:25:17.343 16:06:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:25:17.343 16:06:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:17.343 16:06:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:17.343 16:06:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:17.343 16:06:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:17.343 16:06:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # 
set +x 00:25:17.343 16:06:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:17.343 16:06:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:17.602 16:06:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:25:17.602 16:06:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:25:17.602 16:06:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:17.602 16:06:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:17.602 16:06:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:17.602 16:06:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:17.602 16:06:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:17.602 16:06:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:17.602 16:06:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:17.602 16:06:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:25:17.602 16:06:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:25:17.602 16:06:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:17.602 16:06:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:17.602 16:06:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:17.602 16:06:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:25:17.602 16:06:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:17.602 16:06:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:17.602 16:06:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:17.602 16:06:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:17.602 16:06:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:17.602 16:06:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:17.602 16:06:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:17.602 16:06:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:25:17.602 16:06:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:25:17.602 16:06:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:17.602 16:06:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:17.602 16:06:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:17.602 16:06:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:17.602 16:06:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:17.602 16:06:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:17.602 16:06:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:17.602 16:06:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:25:17.602 16:06:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:25:17.602 16:06:46 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:17.602 16:06:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:17.602 16:06:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:17.602 16:06:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:25:17.602 16:06:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:17.602 16:06:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:17.602 16:06:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:17.602 16:06:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:17.602 16:06:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:17.602 16:06:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:17.602 16:06:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:17.602 16:06:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:25:17.602 16:06:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:25:17.602 16:06:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:17.602 16:06:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:17.602 16:06:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:17.602 16:06:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:17.602 16:06:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:17.602 16:06:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:17.602 16:06:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:17.860 16:06:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:25:17.860 16:06:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:25:17.861 16:06:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:17.861 16:06:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:17.861 [2024-07-15 16:06:46.560849] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:17.861 16:06:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:17.861 16:06:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:25:17.861 16:06:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:17.861 16:06:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:17.861 16:06:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:17.861 16:06:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:17.861 16:06:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:17.861 16:06:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:17.861 16:06:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:17.861 16:06:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' 
]] 00:25:17.861 16:06:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:25:17.861 16:06:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:17.861 16:06:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:17.861 16:06:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:17.861 16:06:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:17.861 16:06:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:17.861 16:06:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:17.861 16:06:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:17.861 16:06:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:25:17.861 16:06:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:25:17.861 16:06:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:25:17.861 16:06:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:25:17.861 16:06:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:25:17.861 16:06:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:25:17.861 16:06:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:25:17.861 16:06:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:25:17.861 16:06:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:25:17.861 16:06:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:25:17.861 16:06:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:25:17.861 16:06:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:17.861 16:06:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:17.861 16:06:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:17.861 16:06:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:25:17.861 16:06:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:25:17.861 16:06:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:25:17.861 16:06:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:25:17.861 16:06:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:25:17.861 16:06:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:17.861 16:06:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:17.861 16:06:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:17.861 16:06:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:25:17.861 16:06:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:25:17.861 16:06:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:25:17.861 16:06:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:25:17.861 16:06:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:25:17.861 16:06:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:25:17.861 16:06:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:17.861 16:06:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:17.861 16:06:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:17.861 16:06:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:17.861 16:06:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:17.861 16:06:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:17.861 16:06:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:17.861 16:06:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == \n\v\m\e\0 ]] 00:25:17.861 16:06:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@918 -- # sleep 1 00:25:18.428 [2024-07-15 16:06:47.286685] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:25:18.428 [2024-07-15 16:06:47.286705] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:25:18.428 [2024-07-15 16:06:47.286716] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:25:18.687 [2024-07-15 16:06:47.413113] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:25:18.946 [2024-07-15 16:06:47.637423] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: 
Discovery[10.0.0.2:8009] attach nvme0 done 00:25:18.946 [2024-07-15 16:06:47.637443] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:25:18.946 16:06:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:25:18.946 16:06:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:25:18.946 16:06:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:25:18.946 16:06:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:18.946 16:06:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:18.946 16:06:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:18.946 16:06:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:18.946 16:06:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:18.946 16:06:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:18.946 16:06:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:18.946 16:06:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:18.946 16:06:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:25:18.946 16:06:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:25:18.946 16:06:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:25:18.946 16:06:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:25:18.946 16:06:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:25:18.947 16:06:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:25:18.947 16:06:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:25:18.947 16:06:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:18.947 16:06:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:18.947 16:06:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:18.947 16:06:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:18.947 16:06:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:18.947 16:06:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:18.947 16:06:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:18.947 16:06:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:25:18.947 16:06:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:25:18.947 16:06:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:25:18.947 16:06:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:25:18.947 16:06:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:25:18.947 16:06:47 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:25:18.947 16:06:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:25:18.947 16:06:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:25:18.947 16:06:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:25:18.947 16:06:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:25:18.947 16:06:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:25:18.947 16:06:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:25:18.947 16:06:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:18.947 16:06:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:19.207 16:06:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:19.207 16:06:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4420 == \4\4\2\0 ]] 00:25:19.207 16:06:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:25:19.207 16:06:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:25:19.207 16:06:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:25:19.207 16:06:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:25:19.207 16:06:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:25:19.207 16:06:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:25:19.207 16:06:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:25:19.207 16:06:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:25:19.207 16:06:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:25:19.207 16:06:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:25:19.207 16:06:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:25:19.207 16:06:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:19.207 16:06:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:19.207 16:06:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:19.207 16:06:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:25:19.207 16:06:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:25:19.207 16:06:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:25:19.207 16:06:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:25:19.207 16:06:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:25:19.207 16:06:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:19.207 16:06:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:19.207 16:06:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:19.207 16:06:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:25:19.207 16:06:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:25:19.207 16:06:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:25:19.207 16:06:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:25:19.207 16:06:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:25:19.207 16:06:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:25:19.207 16:06:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:19.207 16:06:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:19.207 16:06:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:19.207 16:06:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:19.207 16:06:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:19.207 16:06:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:19.207 16:06:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:19.207 16:06:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:25:19.207 16:06:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:25:19.207 16:06:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:25:19.207 16:06:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:25:19.207 16:06:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:25:19.207 16:06:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:25:19.207 16:06:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:25:19.207 16:06:48 nvmf_tcp.nvmf_host_discovery 
-- common/autotest_common.sh@914 -- # (( max-- )) 00:25:19.207 16:06:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:25:19.207 16:06:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:25:19.207 16:06:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:25:19.207 16:06:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:25:19.207 16:06:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:19.207 16:06:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:19.207 16:06:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:19.207 16:06:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:25:19.207 16:06:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:25:19.207 16:06:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:25:19.207 16:06:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:25:19.207 16:06:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:25:19.207 16:06:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:19.207 16:06:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:19.207 [2024-07-15 16:06:48.069102] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:25:19.207 [2024-07-15 16:06:48.070184] bdev_nvme.c:6965:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:25:19.207 [2024-07-15 16:06:48.070206] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:25:19.207 16:06:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:19.207 16:06:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:25:19.207 16:06:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:25:19.207 16:06:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:25:19.207 16:06:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:25:19.207 16:06:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:25:19.207 16:06:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:25:19.207 16:06:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:19.207 16:06:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:19.207 16:06:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:19.207 16:06:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:19.207 16:06:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:19.207 16:06:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:19.207 16:06:48 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:19.207 16:06:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:19.207 16:06:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:25:19.207 16:06:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:25:19.207 16:06:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:25:19.207 16:06:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:25:19.207 16:06:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:25:19.207 16:06:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:25:19.207 16:06:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:25:19.207 16:06:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:19.207 16:06:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:19.207 16:06:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:19.207 16:06:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:19.207 16:06:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:19.207 16:06:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:19.467 16:06:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:19.467 16:06:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:25:19.467 16:06:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:25:19.467 16:06:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:25:19.467 16:06:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:25:19.467 16:06:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:25:19.467 16:06:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:25:19.467 16:06:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:25:19.467 16:06:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:25:19.467 16:06:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:25:19.467 16:06:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:25:19.467 16:06:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:25:19.467 16:06:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:19.467 16:06:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:25:19.467 16:06:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:19.467 16:06:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:19.467 [2024-07-15 16:06:48.197588] 
bdev_nvme.c:6907:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:25:19.467 16:06:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:25:19.467 16:06:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@918 -- # sleep 1 00:25:19.467 [2024-07-15 16:06:48.301134] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:25:19.467 [2024-07-15 16:06:48.301150] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:25:19.467 [2024-07-15 16:06:48.301155] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:25:20.404 16:06:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:25:20.404 16:06:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:25:20.404 16:06:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:25:20.404 16:06:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:25:20.404 16:06:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:25:20.404 16:06:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:25:20.404 16:06:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:20.404 16:06:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:20.404 16:06:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:25:20.404 16:06:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:20.404 16:06:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:25:20.404 16:06:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:25:20.404 16:06:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:25:20.404 16:06:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:25:20.404 16:06:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:25:20.404 16:06:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:25:20.404 16:06:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:25:20.404 16:06:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:25:20.404 16:06:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:25:20.404 16:06:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:25:20.404 16:06:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:25:20.404 16:06:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:25:20.404 16:06:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:20.404 16:06:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:20.404 16:06:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:20.404 16:06:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:25:20.404 16:06:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:25:20.404 16:06:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:25:20.404 16:06:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:25:20.404 16:06:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:25:20.404 16:06:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:20.404 16:06:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:20.404 [2024-07-15 16:06:49.333307] bdev_nvme.c:6965:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:25:20.404 [2024-07-15 16:06:49.333329] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:25:20.404 16:06:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:20.404 16:06:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:25:20.404 16:06:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:25:20.404 16:06:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:25:20.404 16:06:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:25:20.404 16:06:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:25:20.665 [2024-07-15 16:06:49.339453] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:20.665 [2024-07-15 16:06:49.339472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.665 [2024-07-15 16:06:49.339481] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:20.665 [2024-07-15 16:06:49.339488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.665 [2024-07-15 16:06:49.339495] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:20.665 [2024-07-15 16:06:49.339504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.665 [2024-07-15 16:06:49.339511] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:20.665 [2024-07-15 16:06:49.339518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:20.665 [2024-07-15 16:06:49.339524] nvme_tcp.c: 
327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd7bf10 is same with the state(5) to be set 00:25:20.665 16:06:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:25:20.665 16:06:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:20.665 16:06:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:20.665 16:06:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:20.665 16:06:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:20.665 16:06:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:20.665 16:06:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:20.665 [2024-07-15 16:06:49.349466] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd7bf10 (9): Bad file descriptor 00:25:20.665 16:06:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:20.665 [2024-07-15 16:06:49.359506] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:25:20.665 [2024-07-15 16:06:49.359810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.665 [2024-07-15 16:06:49.359825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd7bf10 with addr=10.0.0.2, port=4420 00:25:20.665 [2024-07-15 16:06:49.359833] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd7bf10 is same with the state(5) to be set 00:25:20.665 [2024-07-15 16:06:49.359846] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd7bf10 (9): Bad file descriptor 00:25:20.665 [2024-07-15 16:06:49.359856] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:25:20.665 [2024-07-15 16:06:49.359862] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:25:20.665 [2024-07-15 16:06:49.359869] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:25:20.665 [2024-07-15 16:06:49.359880] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
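The ERROR lines above and below are the expected fallout of this step, not a test failure: host/discovery.sh@127 has just removed the 4420 listener from nqn.2016-06.io.spdk:cnode0 while the host-side bdev_nvme controller still holds a path to that port, so each reconnect attempt fails with connect() errno 111 (ECONNREFUSED) until the discovery poller prunes the stale path. As a hedged sketch, the same step could be driven by hand with scripts/rpc.py (rpc_cmd in the trace is the suite's wrapper around it); the NQN, address, and socket path below are the ones visible in the trace:

    # target side: drop the first data path (mirrors host/discovery.sh@127 above)
    scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 \
        -t tcp -a 10.0.0.2 -s 4420
    # host side: list which trsvcids the multipath controller still reports
    scripts/rpc.py -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 \
        | jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs
    # once the discovery poller catches up, this prints only: 4421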
00:25:20.665 [2024-07-15 16:06:49.369563] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:25:20.665 [2024-07-15 16:06:49.369782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.665 [2024-07-15 16:06:49.369795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd7bf10 with addr=10.0.0.2, port=4420 00:25:20.665 [2024-07-15 16:06:49.369803] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd7bf10 is same with the state(5) to be set 00:25:20.665 [2024-07-15 16:06:49.369813] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd7bf10 (9): Bad file descriptor 00:25:20.665 [2024-07-15 16:06:49.369823] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:25:20.665 [2024-07-15 16:06:49.369830] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:25:20.665 [2024-07-15 16:06:49.369837] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:25:20.665 [2024-07-15 16:06:49.369846] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:20.665 [2024-07-15 16:06:49.379613] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:25:20.665 [2024-07-15 16:06:49.379764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.665 [2024-07-15 16:06:49.379778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd7bf10 with addr=10.0.0.2, port=4420 00:25:20.665 [2024-07-15 16:06:49.379785] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd7bf10 is same with the state(5) to be set 00:25:20.665 [2024-07-15 16:06:49.379796] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd7bf10 (9): Bad file descriptor 00:25:20.665 [2024-07-15 16:06:49.379805] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:25:20.665 [2024-07-15 16:06:49.379811] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:25:20.665 [2024-07-15 16:06:49.379818] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:25:20.665 [2024-07-15 16:06:49.379828] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
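The @912..@918 lines threaded through this trace are expansions of the suite's generic polling helper, waitforcondition, invoked just below with the condition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]'. Only its xtrace surfaces in the log; the sketch below is reconstructed from those traced lines, not copied from common/autotest_common.sh:

    waitforcondition() {
        local cond=$1   # shell expression, e.g. '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]'
        local max=10    # 'local max=10' at @913
        while ((max--)); do            # '(( max-- ))' at @914
            if eval "$cond"; then      # 'eval ...' at @915
                return 0               # 'return 0' at @916, condition met
            fi
            sleep 1                    # 'sleep 1' at @918
        done
        return 1   # give up after ~10s; this branch never fires in this run
    }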
00:25:20.665 16:06:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:20.665 16:06:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:25:20.665 [2024-07-15 16:06:49.389666] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:25:20.665 [2024-07-15 16:06:49.389932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.665 [2024-07-15 16:06:49.389945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd7bf10 with addr=10.0.0.2, port=4420 00:25:20.665 [2024-07-15 16:06:49.389956] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd7bf10 is same with the state(5) to be set 00:25:20.665 [2024-07-15 16:06:49.389967] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd7bf10 (9): Bad file descriptor 00:25:20.665 16:06:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:25:20.665 [2024-07-15 16:06:49.389977] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:25:20.665 [2024-07-15 16:06:49.389984] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:25:20.665 [2024-07-15 16:06:49.389990] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:25:20.665 [2024-07-15 16:06:49.390000] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:20.665 16:06:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:25:20.665 16:06:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:25:20.665 16:06:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:25:20.665 16:06:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:25:20.665 16:06:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:25:20.665 16:06:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:20.665 16:06:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:20.665 16:06:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:20.665 16:06:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:20.665 16:06:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:20.665 16:06:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:20.665 [2024-07-15 16:06:49.399715] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:25:20.665 [2024-07-15 16:06:49.399853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.665 [2024-07-15 16:06:49.399867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd7bf10 with addr=10.0.0.2, port=4420 00:25:20.665 [2024-07-15 16:06:49.399875] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd7bf10 is same with the state(5) to be set 00:25:20.665 [2024-07-15 16:06:49.399886] 
nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd7bf10 (9): Bad file descriptor 00:25:20.665 [2024-07-15 16:06:49.399895] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:25:20.665 [2024-07-15 16:06:49.399902] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:25:20.665 [2024-07-15 16:06:49.399908] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:25:20.665 [2024-07-15 16:06:49.399918] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:20.666 [2024-07-15 16:06:49.409773] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:25:20.666 [2024-07-15 16:06:49.409949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.666 [2024-07-15 16:06:49.409961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd7bf10 with addr=10.0.0.2, port=4420 00:25:20.666 [2024-07-15 16:06:49.409968] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd7bf10 is same with the state(5) to be set 00:25:20.666 [2024-07-15 16:06:49.409978] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd7bf10 (9): Bad file descriptor 00:25:20.666 [2024-07-15 16:06:49.409989] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:25:20.666 [2024-07-15 16:06:49.409998] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:25:20.666 [2024-07-15 16:06:49.410005] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:25:20.666 [2024-07-15 16:06:49.410014] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:20.666 [2024-07-15 16:06:49.419822] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:25:20.666 [2024-07-15 16:06:49.419963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.666 [2024-07-15 16:06:49.419975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd7bf10 with addr=10.0.0.2, port=4420 00:25:20.666 [2024-07-15 16:06:49.419982] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd7bf10 is same with the state(5) to be set 00:25:20.666 [2024-07-15 16:06:49.419992] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd7bf10 (9): Bad file descriptor 00:25:20.666 [2024-07-15 16:06:49.420001] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:25:20.666 [2024-07-15 16:06:49.420007] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:25:20.666 [2024-07-15 16:06:49.420014] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:25:20.666 [2024-07-15 16:06:49.420023] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
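The entries that follow show the other half of the teardown: on the next discovery-log poll, nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 is reported not found and detached, while the 4421 path is found again and kept. The test then checks, below, that get_subsystem_paths nvme0 yields only 4421 and that the notification stream has not grown (removing a listener tears down a path but no bdev the host had surfaced, so presumably no new bdev notifications are emitted). A sketch of that second check, reusing the socket, notify id, and jq filter from the trace:

    # host side: count notifications newer than the last consumed id (2 at this point)
    scripts/rpc.py -s /tmp/host.sock notify_get_notifications -i 2 | jq '. | length'
    # the trace records notification_count=0, so notify_id stays at 2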
00:25:20.666 [2024-07-15 16:06:49.420176] bdev_nvme.c:6770:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:25:20.666 [2024-07-15 16:06:49.420190] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:25:20.666 16:06:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:20.666 16:06:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:25:20.666 16:06:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:25:20.666 16:06:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:25:20.666 16:06:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:25:20.666 16:06:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:25:20.666 16:06:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:25:20.666 16:06:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:25:20.666 16:06:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:25:20.666 16:06:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:25:20.666 16:06:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:25:20.666 16:06:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:20.666 16:06:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:25:20.666 16:06:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:20.666 16:06:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:25:20.666 16:06:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:20.666 16:06:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4421 == \4\4\2\1 ]] 00:25:20.666 16:06:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:25:20.666 16:06:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:25:20.666 16:06:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:25:20.666 16:06:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:25:20.666 16:06:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:25:20.666 16:06:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:25:20.666 16:06:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:25:20.666 16:06:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:25:20.666 16:06:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:25:20.666 16:06:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:25:20.666 16:06:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:25:20.666 16:06:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:20.666 16:06:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:20.666 16:06:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:20.666 16:06:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:25:20.666 16:06:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:25:20.666 16:06:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:25:20.666 16:06:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:25:20.666 16:06:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:25:20.666 16:06:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:20.666 16:06:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:20.666 16:06:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:20.666 16:06:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:25:20.666 16:06:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:25:20.666 16:06:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:25:20.666 16:06:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:25:20.666 16:06:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:25:20.666 16:06:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:25:20.666 16:06:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:20.666 16:06:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:20.666 16:06:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:20.666 16:06:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:20.666 16:06:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:20.666 16:06:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:20.666 16:06:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:20.666 16:06:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == '' ]] 00:25:20.666 16:06:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:25:20.666 16:06:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:25:20.666 16:06:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:25:20.666 16:06:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:25:20.666 16:06:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:25:20.666 16:06:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:25:20.666 
16:06:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:25:20.666 16:06:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:20.666 16:06:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:20.666 16:06:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:20.666 16:06:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:20.666 16:06:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:20.666 16:06:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:20.666 16:06:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:20.926 16:06:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == '' ]] 00:25:20.926 16:06:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:25:20.926 16:06:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:25:20.926 16:06:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:25:20.926 16:06:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:25:20.926 16:06:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:25:20.926 16:06:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:25:20.926 16:06:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:25:20.926 16:06:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:25:20.926 16:06:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:25:20.926 16:06:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:25:20.926 16:06:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length'
00:25:20.926 16:06:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable
00:25:20.926 16:06:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:25:20.926 16:06:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:25:20.926 16:06:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2
00:25:20.926 16:06:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4
00:25:20.926 16:06:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count ))
00:25:20.926 16:06:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0
00:25:20.926 16:06:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w
00:25:20.926 16:06:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable
00:25:20.926 16:06:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:25:21.862 [2024-07-15 16:06:50.744800] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached
00:25:21.862 [2024-07-15 16:06:50.744824] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected
00:25:21.862 [2024-07-15 16:06:50.744835] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command
00:25:22.122 [2024-07-15 16:06:50.871223] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0
00:25:22.122 [2024-07-15 16:06:50.931612] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done
00:25:22.122 [2024-07-15 16:06:50.931640] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again
00:25:22.122 16:06:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:25:22.122 16:06:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w
00:25:22.122 16:06:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0
00:25:22.122 16:06:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w
00:25:22.122 16:06:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd
00:25:22.122 16:06:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:25:22.122 16:06:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd
00:25:22.122 16:06:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:25:22.122 16:06:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w
00:25:22.122 16:06:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable
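
For readers following the trace: the waitforcondition polling pattern above (common/autotest_common.sh@912-916) can be reconstructed roughly as the helper below. This is a sketch inferred from the xtrace, not a verbatim copy of autotest_common.sh; in particular the pause between retries is an assumption, since the trace only shows the immediate-success path.

  # rough reconstruction of the polling helper traced at @912-916 (sketch)
  waitforcondition() {
      local cond=$1      # e.g. '[[ "$(get_bdev_list)" == "" ]]'
      local max=10       # bounded retries, per the 'local max=10' in the trace
      while (( max-- )); do
          eval "$cond" && return 0   # re-evaluate the condition each pass
          sleep 1                    # assumed delay between attempts
      done
      return 1                       # exhausting the retries fails the test
  }
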
00:25:22.122 16:06:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:25:22.122 request:
00:25:22.122 {
00:25:22.122 "name": "nvme",
00:25:22.122 "trtype": "tcp",
00:25:22.122 "traddr": "10.0.0.2",
00:25:22.122 "adrfam": "ipv4",
00:25:22.122 "trsvcid": "8009",
00:25:22.122 "hostnqn": "nqn.2021-12.io.spdk:test",
00:25:22.122 "wait_for_attach": true,
00:25:22.122 "method": "bdev_nvme_start_discovery",
00:25:22.122 "req_id": 1
00:25:22.122 }
00:25:22.122 Got JSON-RPC error response
response:
00:25:22.122 {
00:25:22.122 "code": -17,
00:25:22.122 "message": "File exists"
00:25:22.122 }
00:25:22.122 16:06:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]]
00:25:22.122 16:06:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1
00:25:22.122 16:06:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 ))
00:25:22.122 16:06:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]]
00:25:22.122 16:06:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 ))
00:25:22.122 16:06:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs
00:25:22.122 16:06:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info
00:25:22.122 16:06:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name'
00:25:22.122 16:06:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable
00:25:22.122 16:06:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort
00:25:22.122 16:06:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:25:22.122 16:06:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs
00:25:22.122 16:06:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:25:22.122 16:06:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]]
00:25:22.122 16:06:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list
00:25:22.122 16:06:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:25:22.122 16:06:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable
00:25:22.122 16:06:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:25:22.122 16:06:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name'
00:25:22.122 16:06:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort
00:25:22.122 16:06:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs
00:25:22.122 16:06:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:25:22.122 16:06:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]]
00:25:22.122 16:06:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w
00:25:22.122 16:06:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0
00:25:22.122 16:06:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w
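
The -17 (File exists) response above is the expected outcome of the negative test: host/discovery.sh@143 tries to register a second discovery service under the base name nvme that @141 already registered. rpc_cmd is the harness wrapper around scripts/rpc.py, so the same check can be reproduced directly; a sketch with the flags copied from the trace:

  # first registration succeeds; -w waits for discovered subsystems to attach
  scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_discovery \
      -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w
  # reusing the same -b name must fail with "File exists" (-17)
  scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_discovery \
      -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w \
      || echo "duplicate discovery name rejected, as the test expects"
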
00:25:22.122 16:06:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd
00:25:22.122 16:06:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:25:22.122 16:06:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd
00:25:22.122 16:06:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:25:22.122 16:06:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w
00:25:22.122 16:06:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable
00:25:22.122 16:06:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:25:22.382 request:
00:25:22.382 {
00:25:22.382 "name": "nvme_second",
00:25:22.382 "trtype": "tcp",
00:25:22.382 "traddr": "10.0.0.2",
00:25:22.382 "adrfam": "ipv4",
00:25:22.382 "trsvcid": "8009",
00:25:22.382 "hostnqn": "nqn.2021-12.io.spdk:test",
00:25:22.382 "wait_for_attach": true,
00:25:22.382 "method": "bdev_nvme_start_discovery",
00:25:22.382 "req_id": 1
00:25:22.382 }
00:25:22.382 Got JSON-RPC error response
response:
00:25:22.382 {
00:25:22.382 "code": -17,
00:25:22.382 "message": "File exists"
00:25:22.382 }
00:25:22.382 16:06:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]]
00:25:22.382 16:06:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1
00:25:22.382 16:06:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 ))
00:25:22.382 16:06:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]]
00:25:22.382 16:06:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 ))
00:25:22.382 16:06:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs
00:25:22.382 16:06:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info
00:25:22.382 16:06:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable
00:25:22.382 16:06:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:25:22.382 16:06:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name'
00:25:22.382 16:06:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort
00:25:22.382 16:06:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs
00:25:22.382 16:06:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:25:22.382 16:06:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]]
00:25:22.382 16:06:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list
00:25:22.382 16:06:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:25:22.382 16:06:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs
00:25:22.382 16:06:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name'
00:25:22.382 16:06:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable
00:25:22.382 16:06:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort
00:25:22.382 16:06:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:25:22.382 16:06:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
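
The es bookkeeping traced at common/autotest_common.sh@648-@675 is the harness's NOT wrapper, which runs a command that is supposed to fail and inverts the result. A rough reconstruction from the visible trace follows; the signal and expected-exit-status special cases evaluated at @659/@670 are elided, and the case branches are approximate:

  # sketch of NOT()/valid_exec_arg as reconstructed from the trace above
  valid_exec_arg() {
      local arg=$1
      case "$(type -t "$arg")" in      # traced at @636/@640
          builtin | function | alias | file) ;;
          *) return 1 ;;
      esac
  }
  NOT() {
      local es=0
      valid_exec_arg "$@" || return 1
      "$@" || es=$?        # the wrapped command is expected to fail
      (( !es == 0 ))       # @675: succeed only when it actually failed
  }
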
00:25:22.382 16:06:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]]
00:25:22.382 16:06:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000
00:25:22.382 16:06:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0
00:25:22.382 16:06:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000
00:25:22.382 16:06:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd
00:25:22.382 16:06:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:25:22.382 16:06:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd
00:25:22.382 16:06:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:25:22.383 16:06:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000
00:25:22.383 16:06:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable
00:25:22.383 16:06:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:25:23.321 [2024-07-15 16:06:52.175794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:23.321 [2024-07-15 16:06:52.175824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdb8a00 with addr=10.0.0.2, port=8010
00:25:23.321 [2024-07-15 16:06:52.175837] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair
00:25:23.321 [2024-07-15 16:06:52.175844] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed
00:25:23.321 [2024-07-15 16:06:52.175851] bdev_nvme.c:7045:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect
00:25:24.257 [2024-07-15 16:06:53.178294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:24.257 [2024-07-15 16:06:53.178317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdb8a00 with addr=10.0.0.2, port=8010
00:25:24.257 [2024-07-15 16:06:53.178328] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair
00:25:24.257 [2024-07-15 16:06:53.178334] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed
00:25:24.257 [2024-07-15 16:06:53.178340] bdev_nvme.c:7045:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect
00:25:25.632 [2024-07-15 16:06:54.180436] bdev_nvme.c:7026:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr
00:25:25.632 request:
00:25:25.632 {
00:25:25.632 "name": "nvme_second",
00:25:25.632 "trtype": "tcp",
00:25:25.632 "traddr": "10.0.0.2",
00:25:25.632 "adrfam": "ipv4",
00:25:25.632 "trsvcid": "8010",
00:25:25.632 "hostnqn": "nqn.2021-12.io.spdk:test",
00:25:25.632 "wait_for_attach": false,
00:25:25.632 "attach_timeout_ms": 3000,
00:25:25.632 "method": "bdev_nvme_start_discovery",
00:25:25.632 "req_id": 1
00:25:25.632 }
00:25:25.632 Got JSON-RPC error response
response:
00:25:25.632 {
00:25:25.632 "code": -110,
00:25:25.632 "message": "Connection timed out"
00:25:25.632 }
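
Unlike the 8009 cases, the request above sets wait_for_attach false and attach_timeout_ms 3000, which is what the -T flag maps to. Nothing listens on port 8010, so the discovery poller's connect() keeps failing with errno 111 until the timer expires and the RPC returns -110. A standalone reproduction, with the flags copied from the trace:

  # expected to fail: no discovery service listens on 10.0.0.2:8010
  scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_discovery \
      -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 \
      -q nqn.2021-12.io.spdk:test -T 3000 \
      || echo "gave up after 3000 ms: Connection timed out (-110)"
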
00:25:25.632 "message": "Connection timed out" 00:25:25.632 } 00:25:25.632 16:06:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:25:25.632 16:06:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:25:25.632 16:06:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:25:25.632 16:06:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:25:25.632 16:06:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:25:25.632 16:06:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:25:25.632 16:06:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:25:25.632 16:06:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:25:25.632 16:06:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:25.632 16:06:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:25:25.632 16:06:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:25.632 16:06:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:25:25.632 16:06:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:25.632 16:06:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:25:25.632 16:06:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:25:25.632 16:06:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 3870253 00:25:25.632 16:06:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:25:25.632 16:06:54 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:25:25.632 16:06:54 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@117 -- # sync 00:25:25.632 16:06:54 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:25.632 16:06:54 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@120 -- # set +e 00:25:25.632 16:06:54 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:25.632 16:06:54 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:25.632 rmmod nvme_tcp 00:25:25.632 rmmod nvme_fabrics 00:25:25.632 rmmod nvme_keyring 00:25:25.632 16:06:54 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:25.632 16:06:54 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@124 -- # set -e 00:25:25.632 16:06:54 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@125 -- # return 0 00:25:25.632 16:06:54 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@489 -- # '[' -n 3870019 ']' 00:25:25.632 16:06:54 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@490 -- # killprocess 3870019 00:25:25.632 16:06:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@948 -- # '[' -z 3870019 ']' 00:25:25.632 16:06:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@952 -- # kill -0 3870019 00:25:25.632 16:06:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@953 -- # uname 00:25:25.632 16:06:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:25.632 16:06:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3870019 00:25:25.632 16:06:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@954 -- # process_name=reactor_1 
00:25:25.632 16:06:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
00:25:25.632 16:06:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3870019'
00:25:25.632 killing process with pid 3870019
00:25:25.632 16:06:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@967 -- # kill 3870019
00:25:25.632 16:06:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@972 -- # wait 3870019
00:25:25.632 16:06:54 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:25:25.632 16:06:54 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:25:25.632 16:06:54 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:25:25.632 16:06:54 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:25:25.632 16:06:54 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns
00:25:25.632 16:06:54 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:25:25.632 16:06:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:25:25.632 16:06:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:25:28.168 16:06:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:25:28.169
00:25:28.169 real 0m17.104s
00:25:28.169 user 0m21.929s
00:25:28.169 sys 0m4.988s
00:25:28.169 16:06:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@1124 -- # xtrace_disable
00:25:28.169 16:06:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:25:28.169 ************************************
00:25:28.169 END TEST nvmf_host_discovery
00:25:28.169 ************************************
00:25:28.169 16:06:56 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0
00:25:28.169 16:06:56 nvmf_tcp -- nvmf/nvmf.sh@102 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp
00:25:28.169 16:06:56 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']'
00:25:28.169 16:06:56 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable
00:25:28.169 16:06:56 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:25:28.169 ************************************
00:25:28.169 START TEST nvmf_host_multipath_status
00:25:28.169 ************************************
00:25:28.169 16:06:56 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp
00:25:28.169 * Looking for test storage...
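
The START/END banners and the real/user/sys block come from the harness's run_test wrapper, which the nvmf.sh@102 line above invokes for the next suite. Its behavior, inferred from the output alone (the actual wrapper in autotest_common.sh also validates its arguments and manages xtrace, which is elided here):

  # sketch of run_test as inferred from the banners and timing above
  run_test() {
      local name=$1; shift
      echo "************************************"
      echo "START TEST $name"
      echo "************************************"
      time "$@"                # emits the real/user/sys summary seen in the log
      echo "************************************"
      echo "END TEST $name"
      echo "************************************"
  }
  run_test nvmf_host_multipath_status \
      /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp
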
00:25:28.169 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:28.169 16:06:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:28.169 16:06:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:25:28.169 16:06:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:28.169 16:06:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:28.169 16:06:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:28.169 16:06:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:28.169 16:06:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:28.169 16:06:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:28.169 16:06:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:28.169 16:06:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:28.169 16:06:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:28.169 16:06:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:28.169 16:06:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:25:28.169 16:06:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:25:28.169 16:06:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:28.169 16:06:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:28.169 16:06:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:28.169 16:06:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:28.169 16:06:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:28.169 16:06:56 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:28.169 16:06:56 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:28.169 16:06:56 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:28.169 16:06:56 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:28.169 16:06:56 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:28.169 16:06:56 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:28.169 16:06:56 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:25:28.169 16:06:56 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:28.169 16:06:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@47 -- # : 0 00:25:28.169 16:06:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:28.169 16:06:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:28.169 16:06:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:28.169 16:06:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:28.169 16:06:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:28.169 16:06:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:28.169 16:06:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:28.169 16:06:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:28.169 16:06:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:25:28.169 16:06:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:25:28.169 16:06:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:25:28.169 16:06:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/bpftrace.sh 00:25:28.169 16:06:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:25:28.169 16:06:56 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:25:28.169 16:06:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:25:28.169 16:06:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:25:28.169 16:06:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:28.169 16:06:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@448 -- # prepare_net_devs 00:25:28.169 16:06:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # local -g is_hw=no 00:25:28.169 16:06:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@412 -- # remove_spdk_ns 00:25:28.169 16:06:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:28.169 16:06:56 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:28.169 16:06:56 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:28.169 16:06:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:25:28.169 16:06:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:25:28.169 16:06:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@285 -- # xtrace_disable 00:25:28.169 16:06:56 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:25:33.435 16:07:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:33.435 16:07:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # pci_devs=() 00:25:33.435 16:07:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:33.435 16:07:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:33.435 16:07:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:33.435 16:07:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:33.435 16:07:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:33.435 16:07:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@295 -- # net_devs=() 00:25:33.435 16:07:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:33.435 16:07:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # e810=() 00:25:33.435 16:07:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # local -ga e810 00:25:33.435 16:07:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # x722=() 00:25:33.435 16:07:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # local -ga x722 00:25:33.436 16:07:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # mlx=() 00:25:33.436 16:07:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # local -ga mlx 00:25:33.436 16:07:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:33.436 16:07:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:33.436 16:07:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:33.436 16:07:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@306 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:33.436 16:07:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:33.436 16:07:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:33.436 16:07:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:33.436 16:07:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:33.436 16:07:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:33.436 16:07:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:33.436 16:07:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:33.436 16:07:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:33.436 16:07:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:25:33.436 16:07:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:25:33.436 16:07:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:25:33.436 16:07:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:25:33.436 16:07:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:33.436 16:07:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:33.436 16:07:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:25:33.436 Found 0000:86:00.0 (0x8086 - 0x159b) 00:25:33.436 16:07:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:33.436 16:07:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:33.436 16:07:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:33.436 16:07:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:33.436 16:07:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:33.436 16:07:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:33.436 16:07:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:25:33.436 Found 0000:86:00.1 (0x8086 - 0x159b) 00:25:33.436 16:07:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:33.436 16:07:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:33.436 16:07:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:33.436 16:07:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:33.436 16:07:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:33.436 16:07:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:33.436 16:07:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:25:33.436 16:07:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 
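
Both ice functions (0x8086 - 0x159b) found above are resolved to kernel net devices in the lines that follow by globbing sysfs. The same lookup can be done by hand; a sketch (device names are specific to this rig, and the '[[ up == up ]]' check in the trace appears to read the device's operstate):

  pci=0000:86:00.0                         # first port found above
  ls "/sys/bus/pci/devices/$pci/net/"      # prints cvl_0_0, as the probe reports
  cat "/sys/bus/pci/devices/$pci/net/"*/operstate   # must be "up" to be used
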
00:25:33.436 16:07:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:33.436 16:07:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:33.436 16:07:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:33.436 16:07:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:33.436 16:07:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:33.436 16:07:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:33.436 16:07:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:33.436 16:07:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:25:33.436 Found net devices under 0000:86:00.0: cvl_0_0 00:25:33.436 16:07:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:33.436 16:07:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:33.436 16:07:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:33.436 16:07:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:33.436 16:07:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:33.436 16:07:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:33.436 16:07:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:33.436 16:07:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:33.436 16:07:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:25:33.436 Found net devices under 0000:86:00.1: cvl_0_1 00:25:33.436 16:07:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:33.436 16:07:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:25:33.436 16:07:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # is_hw=yes 00:25:33.436 16:07:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:25:33.436 16:07:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:25:33.436 16:07:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:25:33.436 16:07:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:33.436 16:07:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:33.436 16:07:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:33.436 16:07:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:25:33.436 16:07:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:33.436 16:07:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:33.436 16:07:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:25:33.436 16:07:01 
nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:25:33.436 16:07:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:25:33.436 16:07:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0
00:25:33.436 16:07:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1
00:25:33.436 16:07:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk
00:25:33.436 16:07:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:25:33.436 16:07:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:25:33.436 16:07:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:25:33.436 16:07:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up
00:25:33.436 16:07:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:25:33.436 16:07:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:25:33.436 16:07:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:25:33.436 16:07:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:25:33.436 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:25:33.436 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.183 ms
00:25:33.436
00:25:33.436 --- 10.0.0.2 ping statistics ---
00:25:33.436 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:25:33.436 rtt min/avg/max/mdev = 0.183/0.183/0.183/0.000 ms
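
The sequence above is nvmf_tcp_init splitting one two-port NIC between the default namespace (initiator side) and a private namespace (target side), so that a single host can exercise NVMe/TCP over real hardware. Condensed from the trace, with interface names as they appear on this rig:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # target port moves into the netns
  ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator keeps the other port
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                 # initiator -> target, verified above
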
00:25:33.436 16:07:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:25:33.436 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:25:33.436 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.216 ms
00:25:33.436
00:25:33.436 --- 10.0.0.1 ping statistics ---
00:25:33.436 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:25:33.436 rtt min/avg/max/mdev = 0.216/0.216/0.216/0.000 ms
00:25:33.436 16:07:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:25:33.436 16:07:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # return 0
00:25:33.436 16:07:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:25:33.436 16:07:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:25:33.436 16:07:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:25:33.436 16:07:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:25:33.436 16:07:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:25:33.436 16:07:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:25:33.436 16:07:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:25:33.436 16:07:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3
00:25:33.436 16:07:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:25:33.436 16:07:01 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@722 -- # xtrace_disable
00:25:33.436 16:07:01 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x
00:25:33.436 16:07:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@481 -- # nvmfpid=3875115
00:25:33.436 16:07:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # waitforlisten 3875115
00:25:33.436 16:07:01 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@829 -- # '[' -z 3875115 ']'
00:25:33.436 16:07:01 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:25:33.436 16:07:01 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # local max_retries=100
00:25:33.436 16:07:01 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:25:33.436 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:25:33.436 16:07:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3
00:25:33.436 16:07:01 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # xtrace_disable
00:25:33.436 16:07:01 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x
00:25:33.436 [2024-07-15 16:07:02.006445] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization...
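
With nvmf_tgt now starting inside the namespace, the lines that follow provision the target over JSON-RPC and then attach it twice from bdevperf to form a multipath device. A condensed sketch of the RPCs seen in the log (rpc.py stands for the full scripts/rpc.py path used throughout; target-side calls use the default /var/tmp/spdk.sock):

  # target side: transport, a 64 MiB / 512 B-block malloc namespace, subsystem, two listeners
  rpc.py nvmf_create_transport -t tcp -o -u 8192
  rpc.py bdev_malloc_create 64 512 -b Malloc0
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
  # initiator side (bdevperf's RPC socket): same -b name on both ports;
  # the second attach with -x multipath merges the pair into one multipath bdev
  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1
  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp \
      -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10
  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp \
      -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10
  # per-path state is then polled the way the checks below do:
  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths \
      | jq -r '.poll_groups[].io_paths[] | select(.transport.trsvcid=="4420").current'
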
00:25:33.436 [2024-07-15 16:07:02.006490] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:33.436 EAL: No free 2048 kB hugepages reported on node 1 00:25:33.436 [2024-07-15 16:07:02.065105] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:25:33.436 [2024-07-15 16:07:02.145482] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:33.437 [2024-07-15 16:07:02.145517] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:33.437 [2024-07-15 16:07:02.145525] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:33.437 [2024-07-15 16:07:02.145534] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:33.437 [2024-07-15 16:07:02.145539] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:33.437 [2024-07-15 16:07:02.145577] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:25:33.437 [2024-07-15 16:07:02.145581] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:34.004 16:07:02 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:34.004 16:07:02 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@862 -- # return 0 00:25:34.004 16:07:02 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:25:34.004 16:07:02 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:34.004 16:07:02 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:25:34.004 16:07:02 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:34.004 16:07:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=3875115 00:25:34.004 16:07:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:25:34.264 [2024-07-15 16:07:02.994077] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:34.264 16:07:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:25:34.264 Malloc0 00:25:34.523 16:07:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:25:34.523 16:07:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:25:34.780 16:07:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:34.780 [2024-07-15 16:07:03.692130] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:34.780 16:07:03 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:25:35.039 [2024-07-15 16:07:03.868597] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:25:35.039 16:07:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=3875590 00:25:35.039 16:07:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:25:35.039 16:07:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:25:35.039 16:07:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 3875590 /var/tmp/bdevperf.sock 00:25:35.039 16:07:03 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@829 -- # '[' -z 3875590 ']' 00:25:35.039 16:07:03 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:35.039 16:07:03 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:35.039 16:07:03 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:35.039 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:25:35.039 16:07:03 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:35.039 16:07:03 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:25:35.976 16:07:04 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:35.976 16:07:04 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@862 -- # return 0 00:25:35.976 16:07:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:25:35.977 16:07:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:25:36.545 Nvme0n1 00:25:36.545 16:07:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:25:36.804 Nvme0n1 00:25:36.804 16:07:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:25:36.804 16:07:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:25:39.341 16:07:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:25:39.341 16:07:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:25:39.341 16:07:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:25:39.341 16:07:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:25:40.278 16:07:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:25:40.278 16:07:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:25:40.278 16:07:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:40.278 16:07:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:40.536 16:07:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:40.536 16:07:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:25:40.536 16:07:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:40.536 16:07:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:40.536 16:07:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:40.536 16:07:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:40.536 16:07:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:40.536 16:07:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:40.795 16:07:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:40.795 16:07:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:40.795 16:07:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:40.795 16:07:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:41.054 16:07:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:41.054 16:07:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:25:41.054 16:07:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:41.054 16:07:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:41.313 16:07:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:41.313 16:07:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:25:41.313 16:07:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:41.313 16:07:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:41.313 16:07:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:41.313 16:07:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:25:41.313 16:07:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:25:41.576 16:07:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:25:41.917 16:07:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:25:42.850 16:07:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:25:42.850 16:07:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:25:42.850 16:07:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:42.850 16:07:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:43.108 16:07:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:43.108 16:07:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:25:43.108 16:07:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:43.108 16:07:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:43.108 16:07:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:43.108 16:07:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:43.108 16:07:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:43.108 16:07:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:43.365 16:07:12 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:43.365 16:07:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:43.365 16:07:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:43.365 16:07:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:43.623 16:07:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:43.623 16:07:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:25:43.623 16:07:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:43.623 16:07:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:43.623 16:07:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:43.623 16:07:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:25:43.623 16:07:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:43.623 16:07:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:43.881 16:07:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:43.881 16:07:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:25:43.881 16:07:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:25:44.139 16:07:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:25:44.404 16:07:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:25:45.337 16:07:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:25:45.337 16:07:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:25:45.337 16:07:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:45.337 16:07:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:45.595 16:07:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:45.595 16:07:14 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:25:45.595 16:07:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:45.595 16:07:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:45.595 16:07:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:45.595 16:07:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:45.595 16:07:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:45.595 16:07:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:45.852 16:07:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:45.852 16:07:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:45.852 16:07:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:45.852 16:07:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:46.110 16:07:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:46.110 16:07:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:25:46.110 16:07:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:46.110 16:07:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:46.368 16:07:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:46.368 16:07:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:25:46.368 16:07:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:46.368 16:07:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:46.368 16:07:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:46.368 16:07:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:25:46.368 16:07:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:25:46.626 16:07:15 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:25:46.883 16:07:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:25:47.812 16:07:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:25:47.812 16:07:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:25:47.812 16:07:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:47.812 16:07:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:48.069 16:07:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:48.069 16:07:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:25:48.069 16:07:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:48.069 16:07:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:48.326 16:07:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:48.326 16:07:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:48.326 16:07:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:48.326 16:07:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:48.326 16:07:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:48.326 16:07:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:48.326 16:07:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:48.326 16:07:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:48.584 16:07:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:48.584 16:07:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:25:48.584 16:07:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:48.584 16:07:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:48.842 16:07:17 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:48.842 16:07:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:25:48.842 16:07:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:48.842 16:07:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:48.842 16:07:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:48.842 16:07:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:25:48.842 16:07:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:25:49.100 16:07:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:25:49.357 16:07:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:25:50.298 16:07:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:25:50.298 16:07:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:25:50.298 16:07:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:50.298 16:07:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:50.555 16:07:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:50.555 16:07:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:25:50.555 16:07:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:50.555 16:07:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:50.812 16:07:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:50.812 16:07:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:50.812 16:07:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:50.812 16:07:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:50.812 16:07:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:50.812 16:07:19 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:50.812 16:07:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:50.812 16:07:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:51.069 16:07:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:51.069 16:07:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:25:51.069 16:07:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:51.069 16:07:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:51.327 16:07:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:51.327 16:07:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:25:51.327 16:07:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:51.327 16:07:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:51.327 16:07:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:51.327 16:07:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:25:51.327 16:07:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:25:51.584 16:07:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:25:51.842 16:07:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:25:52.776 16:07:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:25:52.776 16:07:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:25:52.776 16:07:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:52.776 16:07:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:53.034 16:07:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:53.034 16:07:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:25:53.035 16:07:21 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:53.035 16:07:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:53.293 16:07:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:53.293 16:07:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:53.293 16:07:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:53.293 16:07:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:53.293 16:07:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:53.293 16:07:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:53.293 16:07:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:53.293 16:07:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:53.552 16:07:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:53.552 16:07:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:25:53.552 16:07:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:53.552 16:07:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:53.836 16:07:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:53.836 16:07:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:25:53.836 16:07:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:53.836 16:07:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:53.836 16:07:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:53.836 16:07:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:25:54.095 16:07:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:25:54.095 16:07:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:25:54.354 16:07:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:25:54.612 16:07:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:25:55.547 16:07:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:25:55.547 16:07:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:25:55.547 16:07:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:55.547 16:07:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:55.827 16:07:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:55.827 16:07:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:25:55.827 16:07:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:55.827 16:07:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:55.827 16:07:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:55.827 16:07:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:55.827 16:07:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:55.827 16:07:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:56.097 16:07:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:56.097 16:07:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:56.097 16:07:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:56.097 16:07:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:56.356 16:07:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:56.356 16:07:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:25:56.356 16:07:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:56.356 16:07:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r 
'.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:56.356 16:07:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:56.356 16:07:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:25:56.356 16:07:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:56.356 16:07:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:56.616 16:07:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:56.616 16:07:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:25:56.616 16:07:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:25:56.875 16:07:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:25:57.134 16:07:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:25:58.071 16:07:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:25:58.071 16:07:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:25:58.071 16:07:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:58.071 16:07:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:58.330 16:07:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:58.330 16:07:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:25:58.330 16:07:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:58.330 16:07:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:58.330 16:07:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:58.330 16:07:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:58.330 16:07:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:58.330 16:07:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:58.590 16:07:27 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:58.590 16:07:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:58.590 16:07:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:58.590 16:07:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:58.850 16:07:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:58.850 16:07:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:25:58.850 16:07:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:58.850 16:07:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:58.850 16:07:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:58.850 16:07:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:25:58.850 16:07:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:58.850 16:07:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:59.110 16:07:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:59.110 16:07:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:25:59.110 16:07:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:25:59.369 16:07:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:25:59.628 16:07:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:26:00.567 16:07:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:26:00.567 16:07:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:26:00.567 16:07:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:00.567 16:07:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:00.826 16:07:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:00.826 16:07:29 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@69 -- # port_status 4421 current true 00:26:00.826 16:07:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:00.826 16:07:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:00.826 16:07:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:00.826 16:07:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:00.826 16:07:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:00.826 16:07:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:01.085 16:07:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:01.085 16:07:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:01.086 16:07:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:01.086 16:07:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:01.344 16:07:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:01.344 16:07:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:01.344 16:07:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:01.344 16:07:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:01.344 16:07:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:01.344 16:07:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:26:01.344 16:07:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:01.344 16:07:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:01.603 16:07:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:01.603 16:07:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:26:01.603 16:07:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:26:01.862 16:07:30 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:26:02.121 16:07:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:26:03.058 16:07:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:26:03.058 16:07:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:26:03.058 16:07:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:03.058 16:07:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:03.317 16:07:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:03.317 16:07:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:26:03.317 16:07:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:03.317 16:07:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:03.576 16:07:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:03.576 16:07:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:03.576 16:07:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:03.576 16:07:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:03.576 16:07:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:03.576 16:07:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:03.576 16:07:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:03.576 16:07:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:03.835 16:07:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:03.835 16:07:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:03.835 16:07:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:03.835 16:07:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:04.094 16:07:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 
-- # [[ true == \t\r\u\e ]]
00:26:04.094 16:07:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false
00:26:04.094 16:07:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:26:04.094 16:07:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible'
00:26:04.094 16:07:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:26:04.094 16:07:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 3875590
00:26:04.094 16:07:33 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@948 -- # '[' -z 3875590 ']'
00:26:04.094 16:07:33 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # kill -0 3875590
00:26:04.094 16:07:33 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # uname
00:26:04.357 16:07:33 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:26:04.357 16:07:33 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3875590
00:26:04.357 16:07:33 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # process_name=reactor_2
00:26:04.357 16:07:33 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']'
00:26:04.357 16:07:33 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3875590'
killing process with pid 3875590
00:26:04.357 16:07:33 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@967 -- # kill 3875590
00:26:04.357 16:07:33 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # wait 3875590
00:26:04.357 Connection closed with partial response:
00:26:04.357
00:26:04.357
00:26:04.357 16:07:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 3875590
00:26:04.357 16:07:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:26:04.357 [2024-07-15 16:07:03.929684] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization...
00:26:04.357 [2024-07-15 16:07:03.929736] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3875590 ]
00:26:04.357 EAL: No free 2048 kB hugepages reported on node 1
00:26:04.357 [2024-07-15 16:07:03.979781] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:26:04.357 [2024-07-15 16:07:04.053043] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2
00:26:04.357 Running I/O for 90 seconds...
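
The xtrace above keeps repeating one pattern, which is easier to follow in script form. The sketch below is reconstructed from the trace rather than copied from test/nvmf/host/multipath_status.sh, so the variable names and error handling are assumptions; the RPC methods (nvmf_subsystem_listener_set_ana_state, bdev_nvme_get_io_paths), the target NQN/address/ports, the jq filter, and the argument order of check_status are exactly what the log shows.

    # Sketch of the helpers driving this test, reconstructed from the trace.
    # rpc_py and bdevperf_rpc_sock are assumed names matching the paths above.
    rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    bdevperf_rpc_sock=/var/tmp/bdevperf.sock

    # port_status <trsvcid> <field> <expected>: ask bdevperf for its I/O paths
    # and compare one field (current/connected/accessible) of the path on the
    # given port against the expected boolean.
    port_status() {
        [[ $($rpc_py -s $bdevperf_rpc_sock bdev_nvme_get_io_paths |
            jq -r ".poll_groups[].io_paths[] | select (.transport.trsvcid==\"$1\").$2") == "$3" ]]
    }

    # set_ANA_state <state for 4420> <state for 4421>: flip the ANA state of
    # the two listeners on the target side.
    set_ANA_state() {
        $rpc_py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
            -t tcp -a 10.0.0.2 -s 4420 -n "$1"
        $rpc_py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
            -t tcp -a 10.0.0.2 -s 4421 -n "$2"
    }

    # check_status takes six booleans: current, then connected, then accessible,
    # each for port 4420 followed by port 4421 (the order seen in the trace).
    check_status() {
        port_status 4420 current "$1" && port_status 4421 current "$2" &&
        port_status 4420 connected "$3" && port_status 4421 connected "$4" &&
        port_status 4420 accessible "$5" && port_status 4421 accessible "$6"
    }

Read this way, each check_status call above asserts, about a second after the corresponding set_ANA_state, which path bdevperf treats as current: under the default active_passive policy only one path is current at a time, and after bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active at @116 every path in the best available ANA state reports current=true, consistent with the true/true results at @121 and @131.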
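
What follows is the body of bdevperf's own log (the try.txt dumped above): per-I/O qpair traces in which each print_command record is paired with a completion carrying ASYMMETRIC ACCESS INACCESSIBLE (03/02), i.e. status code type 0x3 (path-related) with status code 0x02. Their timestamps line up with the windows above in which a listener was deliberately set to the inaccessible ANA state, so they are the expected path errors for this test rather than data errors. When triaging a log like this, a rough tally per queue can be pulled with an ordinary grep/uniq pipeline; the try.txt path below is the one printed by the cat above, so adjust it for your workspace.

    # Count path-related INACCESSIBLE completions per qpair in the bdevperf log.
    grep -o 'ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:[0-9]*' \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt |
        sort | uniq -c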
00:26:04.357 [2024-07-15 16:07:17.949901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:27304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.357 [2024-07-15 16:07:17.949941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:26:04.357 [2024-07-15 16:07:17.949976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:27312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.357 [2024-07-15 16:07:17.949985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:26:04.357 [2024-07-15 16:07:17.949998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:27320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.357 [2024-07-15 16:07:17.950005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:26:04.357 [2024-07-15 16:07:17.950017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:26544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.357 [2024-07-15 16:07:17.950024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:26:04.357 [2024-07-15 16:07:17.950037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.357 [2024-07-15 16:07:17.950044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:26:04.357 [2024-07-15 16:07:17.950056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:26560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.357 [2024-07-15 16:07:17.950063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:26:04.357 [2024-07-15 16:07:17.950076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:26568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.357 [2024-07-15 16:07:17.950083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:26:04.357 [2024-07-15 16:07:17.950095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:26576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.357 [2024-07-15 16:07:17.950102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:26:04.357 [2024-07-15 16:07:17.950114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:26584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.357 [2024-07-15 16:07:17.950121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:26:04.357 [2024-07-15 16:07:17.950135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:26592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.357 [2024-07-15 16:07:17.950143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:26:04.357 [2024-07-15 16:07:17.950155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:26600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.357 [2024-07-15 16:07:17.950169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:26:04.357 [2024-07-15 16:07:17.950182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:26608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.357 [2024-07-15 16:07:17.950188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:26:04.357 [2024-07-15 16:07:17.950201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:26616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.357 [2024-07-15 16:07:17.950209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:26:04.357 [2024-07-15 16:07:17.950222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:26624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.357 [2024-07-15 16:07:17.950233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:26:04.357 [2024-07-15 16:07:17.950245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:26632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.357 [2024-07-15 16:07:17.950268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:26:04.357 [2024-07-15 16:07:17.950280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:26640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.357 [2024-07-15 16:07:17.950289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:04.357 [2024-07-15 16:07:17.950302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:26648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.357 [2024-07-15 16:07:17.950311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:04.357 [2024-07-15 16:07:17.950372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:26656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.357 [2024-07-15 16:07:17.950382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:26:04.357 [2024-07-15 16:07:17.950397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:26664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.357 [2024-07-15 16:07:17.950404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:26:04.357 [2024-07-15 16:07:17.950418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:26672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.357 [2024-07-15 16:07:17.950425] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:26:04.357 [2024-07-15 16:07:17.950438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:26680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.357 [2024-07-15 16:07:17.950445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:26:04.357 [2024-07-15 16:07:17.950459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:26688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.357 [2024-07-15 16:07:17.950466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:26:04.357 [2024-07-15 16:07:17.950479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:26696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.357 [2024-07-15 16:07:17.950488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:26:04.357 [2024-07-15 16:07:17.950503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:26704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.357 [2024-07-15 16:07:17.950509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:26:04.357 [2024-07-15 16:07:17.950523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:26712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.357 [2024-07-15 16:07:17.950530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:26:04.357 [2024-07-15 16:07:17.950543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:26720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.357 [2024-07-15 16:07:17.950550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:26:04.357 [2024-07-15 16:07:17.950563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:26728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.357 [2024-07-15 16:07:17.950570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:26:04.357 [2024-07-15 16:07:17.950584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:26736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.357 [2024-07-15 16:07:17.950591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:26:04.357 [2024-07-15 16:07:17.950605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:26744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.357 [2024-07-15 16:07:17.950611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:26:04.357 [2024-07-15 16:07:17.950625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:26752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0
00:26:04.357 [... roughly 130 near-identical nvme_qpair command/completion pairs elided (elapsed 00:26:04.357 - 00:26:04.361): READ and WRITE I/Os on qid:1 (lba 26760-27296 and 27328-27376 at 16:07:17, then WRITE lba 60552-61448 at 16:07:30, len:8 each), every one completing with ASYMMETRIC ACCESS INACCESSIBLE (03/02), the path-related status (SCT 0x3, SC 0x02) returned while the active path was taken down during the multipath failover ...]
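These completions are expected while the multipath test takes the active path offline; each elided pair is one I/O printed by nvme_qpair.c together with its (03/02) status. A quick way to gauge how many I/Os hit the error, assuming the console output above was saved to a file (the file path below is hypothetical):

    # Count NVMe completions carrying SCT 0x3 / SC 0x02 (path-related,
    # asymmetric access inaccessible) in a saved copy of this console log.
    grep -c 'ASYMMETRIC ACCESS INACCESSIBLE (03/02)' /tmp/nvmf_multipath_console.log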
00:26:04.361 Received shutdown signal, test time was about 27.228712 seconds
00:26:04.361
00:26:04.361 Latency(us)
00:26:04.361 Device Information                                                       : runtime(s)       IOPS      MiB/s    Fail/s     TO/s    Average        min        max
00:26:04.361 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:26:04.361 	 Verification LBA range: start 0x0 length 0x4000
00:26:04.361 	 Nvme0n1             :      27.23   10209.25      39.88      0.00      0.00   12516.40     961.67 3019898.88
00:26:04.361 ===================================================================================================================
00:26:04.361 Total                                                                    :              10209.25      39.88      0.00      0.00   12516.40     961.67 3019898.88
00:26:04.361 16:07:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:26:04.619 16:07:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT
16:07:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
16:07:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini
16:07:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@488 -- # nvmfcleanup
16:07:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # sync
16:07:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
16:07:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@120 -- # set +e
16:07:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # for i in {1..20}
16:07:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:26:04.619 rmmod nvme_tcp
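The nvmfcleanup path traced here retries module unload because in-flight connections can keep nvme-tcp referenced for a moment after the subsystem is deleted. A simplified sketch of that loop, condensed from the trace (not the verbatim nvmf/common.sh body; the sleep between attempts is an assumption, not from the trace):

    # Retry `modprobe -r` up to 20 times; each successful removal prints the
    # rmmod lines seen in this log (nvme_tcp, nvme_fabrics, nvme_keyring).
    set +e
    for i in {1..20}; do
        modprobe -v -r nvme-tcp && break
        sleep 1        # assumed back-off between attempts
    done
    modprobe -v -r nvme-fabrics
    set -e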
00:26:04.619 rmmod nvme_fabrics
00:26:04.619 rmmod nvme_keyring
16:07:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
16:07:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set -e
16:07:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # return 0
16:07:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@489 -- # '[' -n 3875115 ']'
16:07:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@490 -- # killprocess 3875115
16:07:33 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@948 -- # '[' -z 3875115 ']'
16:07:33 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # kill -0 3875115
16:07:33 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # uname
16:07:33 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
16:07:33 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3875115
16:07:33 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # process_name=reactor_0
16:07:33 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
16:07:33 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3875115'
killing process with pid 3875115
16:07:33 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@967 -- # kill 3875115
16:07:33 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # wait 3875115
00:26:04.878 16:07:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # '[' '' == iso ']'
16:07:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
16:07:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # nvmf_tcp_fini
16:07:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
16:07:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # remove_spdk_ns
16:07:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
16:07:33 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
16:07:33 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:26:07.412 16:07:35 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:26:07.412
00:26:07.412 real	0m39.167s
00:26:07.412 user	1m46.726s
00:26:07.412 sys	0m10.403s
16:07:35 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@1124 -- # xtrace_disable
16:07:35 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x
00:26:07.412 ************************************
00:26:07.412 END TEST nvmf_host_multipath_status
00:26:07.412 ************************************
00:26:07.412 16:07:35 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0
00:26:07.412 16:07:35 nvmf_tcp -- nvmf/nvmf.sh@103 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp
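run_test is the harness wrapper that times a test script and brackets it with the START/END banners seen around these traces. A minimal sketch of the pattern (simplified; the real helper in common/autotest_common.sh also validates arguments and manages xtrace, which this sketch omits):

    run_test_sketch() {
        local test_name=$1; shift
        echo "************************************"
        echo "START TEST $test_name"
        echo "************************************"
        time "$@"        # the real/user/sys lines above come from `time`
        echo "************************************"
        echo "END TEST $test_name"
        echo "************************************"
    }
    run_test_sketch nvmf_discovery_remove_ifc ./discovery_remove_ifc.sh --transport=tcp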
16:07:35 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']'
16:07:35 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable
16:07:35 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:26:07.412 ************************************
00:26:07.412 START TEST nvmf_discovery_remove_ifc
00:26:07.412 ************************************
00:26:07.412 16:07:35 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp
00:26:07.412 * Looking for test storage...
00:26:07.412 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host
00:26:07.412 16:07:35 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
16:07:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s
16:07:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
16:07:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420
16:07:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
16:07:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
16:07:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
16:07:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
16:07:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
16:07:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
16:07:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
16:07:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn
16:07:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562
16:07:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562
16:07:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
16:07:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
16:07:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy
16:07:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
16:07:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
16:07:36 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]]
16:07:36 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
16:07:36 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
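The host identity used for fabrics connects is generated rather than hard-coded: nvme gen-hostnqn (nvme-cli) returns a UUID-based NQN, and the host ID is its UUID portion. Reproducing the two values by hand (one way to slice out the UUID; the harness may derive it differently, and the UUID differs per machine):

    NVME_HOSTNQN=$(nvme gen-hostnqn)     # e.g. nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-...
    NVME_HOSTID=${NVME_HOSTNQN##*uuid:}  # strip everything up to 'uuid:' to get the bare UUID
    echo "$NVME_HOSTNQN / $NVME_HOSTID"  # later passed as --hostnqn / --hostid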
16:07:36 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:[... the same /opt/golangci, /opt/protoc and /opt/go directories repeated several more times ...]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
[... paths/export.sh@3-@6 elided: two further PATH assignments that prepend /opt/go/1.21.1/bin and /opt/protoc/21.7/bin yet again, an export PATH, and an echo of the resulting value; the duplicated prefixes accumulate because export.sh re-prepends the same toolchain directories each time it is sourced ...]
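The ballooned PATH is harmless, since only the first occurrence of each directory matters for lookup, but it is noisy. A standalone way to squash such duplicates while keeping order (plain awk, unrelated to the harness scripts):

    # Split PATH on ':', keep the first occurrence of each entry, re-join.
    PATH=$(printf '%s' "$PATH" | awk -v RS=: -v ORS=: '!seen[$0]++' | sed 's/:$//')
    export PATH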
16:07:36 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009
16:07:36 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery
16:07:36 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode
16:07:36 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test
16:07:36 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock
16:07:36 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit
16:07:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@441 -- # '[' -z tcp ']'
16:07:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT
16:07:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@448 -- # prepare_net_devs
16:07:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # local -g is_hw=no
16:07:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@412 -- # remove_spdk_ns
16:07:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
16:07:36 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
16:07:36 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns
16:07:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # [[ phy != virt ]]
16:07:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs
16:07:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@285 -- # xtrace_disable
16:07:36 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x
00:26:12.679 16:07:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
[... nvmf/common.sh@291-@298 elided: declarations of the empty pci_devs, pci_net_devs, pci_drivers, net_devs, e810, x722 and mlx arrays ...]
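gather_supported_nvmf_pci_devs classifies NICs by PCI vendor/device ID; the arrays it is filling pick out supported Intel E810/X722 and Mellanox parts. The same check can be made directly with lspci (0x159b is the E810 variant this rig reports below):

    lspci -nn -d 8086:159b    # lists 0000:86:00.0 and 0000:86:00.1 on this machine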
00:26:12.680 16:07:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
[... nvmf/common.sh@302-@318 elided: e810 also picks up 0x159b, x722 picks up 0x37d2, and mlx picks up the Mellanox IDs 0xa2dc, 0x1021, 0xa2d6, 0x101d, 0x1017, 0x1019, 0x1015 and 0x1013 from pci_bus_cache ...]
16:07:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}")
16:07:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # [[ tcp == rdma ]]
16:07:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]]
16:07:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@329 -- # [[ e810 == e810 ]]
16:07:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}")
16:07:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@335 -- # (( 2 == 0 ))
16:07:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}"
16:07:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)'
00:26:12.680 Found 0000:86:00.0 (0x8086 - 0x159b)
16:07:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@342 -- # [[ ice == unknown ]]
16:07:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # [[ ice == unbound ]]
16:07:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
16:07:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
16:07:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]]
16:07:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}"
16:07:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)'
00:26:12.680 Found 0000:86:00.1 (0x8086 - 0x159b)
16:07:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@342 -- # [[ ice == unknown ]]
16:07:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # [[ ice == unbound ]]
16:07:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
16:07:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
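The "Found net devices under ..." messages that follow come from globbing each PCI function's sysfs net/ directory; the same lookup can be done by hand (PCI addresses from this run):

    ls /sys/bus/pci/devices/0000:86:00.0/net    # -> cvl_0_0
    ls /sys/bus/pci/devices/0000:86:00.1/net    # -> cvl_0_1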
]] 00:26:12.680 16:07:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:12.680 16:07:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:12.680 16:07:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:26:12.680 16:07:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:26:12.680 16:07:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:12.680 16:07:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:12.680 16:07:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:12.680 16:07:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:12.680 16:07:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:12.680 16:07:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:12.680 16:07:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:12.680 16:07:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:26:12.680 Found net devices under 0000:86:00.0: cvl_0_0 00:26:12.680 16:07:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:12.680 16:07:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:12.680 16:07:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:12.680 16:07:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:12.680 16:07:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:12.680 16:07:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:12.680 16:07:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:12.680 16:07:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:12.680 16:07:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:26:12.680 Found net devices under 0000:86:00.1: cvl_0_1 00:26:12.680 16:07:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:12.680 16:07:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:26:12.680 16:07:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # is_hw=yes 00:26:12.680 16:07:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:26:12.680 16:07:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:26:12.680 16:07:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:26:12.680 16:07:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:12.680 16:07:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:12.680 16:07:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:12.680 16:07:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:26:12.680 
16:07:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:12.680 16:07:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:12.680 16:07:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:26:12.680 16:07:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:12.680 16:07:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:12.680 16:07:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:26:12.680 16:07:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:26:12.680 16:07:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:26:12.680 16:07:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:12.680 16:07:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:12.680 16:07:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:12.680 16:07:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:26:12.680 16:07:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:12.680 16:07:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:12.680 16:07:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:12.680 16:07:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:26:12.680 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:12.680 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.284 ms 00:26:12.680 00:26:12.680 --- 10.0.0.2 ping statistics --- 00:26:12.680 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:12.680 rtt min/avg/max/mdev = 0.284/0.284/0.284/0.000 ms 00:26:12.680 16:07:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:12.680 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:12.680 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.229 ms 00:26:12.680 00:26:12.680 --- 10.0.0.1 ping statistics --- 00:26:12.680 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:12.680 rtt min/avg/max/mdev = 0.229/0.229/0.229/0.000 ms 00:26:12.680 16:07:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:12.680 16:07:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # return 0 00:26:12.680 16:07:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:26:12.680 16:07:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:12.680 16:07:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:26:12.680 16:07:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:26:12.680 16:07:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:12.680 16:07:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:26:12.680 16:07:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:26:12.680 16:07:41 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:26:12.680 16:07:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:26:12.680 16:07:41 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@722 -- # xtrace_disable 00:26:12.680 16:07:41 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:12.680 16:07:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@481 -- # nvmfpid=3883915 00:26:12.680 16:07:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # waitforlisten 3883915 00:26:12.680 16:07:41 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:26:12.680 16:07:41 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@829 -- # '[' -z 3883915 ']' 00:26:12.680 16:07:41 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:12.681 16:07:41 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:12.681 16:07:41 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:12.681 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:12.681 16:07:41 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:12.681 16:07:41 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:12.681 [2024-07-15 16:07:41.323258] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 
00:26:12.681 [2024-07-15 16:07:41.323299] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:12.681 EAL: No free 2048 kB hugepages reported on node 1 00:26:12.681 [2024-07-15 16:07:41.379369] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:12.681 [2024-07-15 16:07:41.459795] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:12.681 [2024-07-15 16:07:41.459828] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:12.681 [2024-07-15 16:07:41.459835] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:12.681 [2024-07-15 16:07:41.459841] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:12.681 [2024-07-15 16:07:41.459847] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:12.681 [2024-07-15 16:07:41.459864] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:26:13.249 16:07:42 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:13.249 16:07:42 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@862 -- # return 0 00:26:13.249 16:07:42 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:26:13.249 16:07:42 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@728 -- # xtrace_disable 00:26:13.249 16:07:42 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:13.249 16:07:42 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:13.249 16:07:42 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:26:13.249 16:07:42 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:13.249 16:07:42 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:13.249 [2024-07-15 16:07:42.175006] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:13.249 [2024-07-15 16:07:42.183119] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:26:13.507 null0 00:26:13.507 [2024-07-15 16:07:42.215135] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:13.507 16:07:42 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:13.507 16:07:42 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=3884158 00:26:13.507 16:07:42 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:26:13.507 16:07:42 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 3884158 /tmp/host.sock 00:26:13.507 16:07:42 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@829 -- # '[' -z 3884158 ']' 00:26:13.507 16:07:42 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@833 -- # local rpc_addr=/tmp/host.sock 00:26:13.507 16:07:42 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@834 -- # local 
max_retries=100 00:26:13.507 16:07:42 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:26:13.507 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:26:13.507 16:07:42 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:13.507 16:07:42 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:13.507 [2024-07-15 16:07:42.283675] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 00:26:13.507 [2024-07-15 16:07:42.283717] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3884158 ] 00:26:13.507 EAL: No free 2048 kB hugepages reported on node 1 00:26:13.507 [2024-07-15 16:07:42.337891] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:13.507 [2024-07-15 16:07:42.413540] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:14.442 16:07:43 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:14.442 16:07:43 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@862 -- # return 0 00:26:14.442 16:07:43 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:26:14.442 16:07:43 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:26:14.442 16:07:43 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:14.442 16:07:43 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:14.442 16:07:43 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:14.442 16:07:43 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:26:14.442 16:07:43 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:14.442 16:07:43 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:14.442 16:07:43 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:14.442 16:07:43 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:26:14.442 16:07:43 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:14.442 16:07:43 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:15.373 [2024-07-15 16:07:44.223800] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:26:15.373 [2024-07-15 16:07:44.223819] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:26:15.373 [2024-07-15 16:07:44.223832] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:26:15.630 [2024-07-15 16:07:44.353230] 
bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:26:15.889 [2024-07-15 16:07:44.578531] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:26:15.889 [2024-07-15 16:07:44.578630] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:26:15.889 [2024-07-15 16:07:44.578649] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:26:15.889 [2024-07-15 16:07:44.578663] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:26:15.889 [2024-07-15 16:07:44.578681] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:26:15.889 16:07:44 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:15.889 16:07:44 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:26:15.889 [2024-07-15 16:07:44.582516] bdev_nvme.c:1617:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x239ce30 was disconnected and freed. delete nvme_qpair. 00:26:15.889 16:07:44 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:15.889 16:07:44 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:15.889 16:07:44 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:15.889 16:07:44 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:15.889 16:07:44 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:15.889 16:07:44 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:15.889 16:07:44 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:15.889 16:07:44 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:15.889 16:07:44 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:26:15.889 16:07:44 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:26:15.889 16:07:44 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:26:15.889 16:07:44 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:26:15.889 16:07:44 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:15.889 16:07:44 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:15.889 16:07:44 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:15.889 16:07:44 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:15.889 16:07:44 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:15.889 16:07:44 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:15.889 16:07:44 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:15.889 16:07:44 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:15.889 16:07:44 
nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:15.889 16:07:44 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:17.262 16:07:45 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:17.262 16:07:45 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:17.262 16:07:45 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:17.262 16:07:45 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:17.262 16:07:45 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:17.262 16:07:45 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:17.262 16:07:45 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:17.262 16:07:45 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:17.262 16:07:45 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:17.262 16:07:45 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:18.251 16:07:46 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:18.251 16:07:46 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:18.251 16:07:46 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:18.251 16:07:46 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:18.251 16:07:46 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:18.251 16:07:46 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:18.251 16:07:46 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:18.251 16:07:46 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:18.251 16:07:46 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:18.251 16:07:46 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:19.183 16:07:47 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:19.184 16:07:47 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:19.184 16:07:47 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:19.184 16:07:47 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:19.184 16:07:47 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:19.184 16:07:47 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:19.184 16:07:47 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:19.184 16:07:47 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:19.184 16:07:47 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:19.184 16:07:47 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:20.117 16:07:48 
nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:20.117 16:07:48 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:20.117 16:07:48 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:20.117 16:07:48 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:20.117 16:07:48 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:20.117 16:07:48 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:20.117 16:07:48 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:20.117 16:07:48 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:20.117 16:07:48 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:20.117 16:07:48 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:21.492 16:07:50 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:21.492 16:07:50 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:21.492 16:07:50 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:21.492 16:07:50 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:21.492 16:07:50 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:21.492 16:07:50 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:21.492 16:07:50 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:21.492 16:07:50 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:21.492 [2024-07-15 16:07:50.020056] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:26:21.492 [2024-07-15 16:07:50.020096] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:21.492 [2024-07-15 16:07:50.020108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.492 [2024-07-15 16:07:50.020118] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:21.492 [2024-07-15 16:07:50.020125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.492 [2024-07-15 16:07:50.020132] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:21.492 [2024-07-15 16:07:50.020139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.492 [2024-07-15 16:07:50.020146] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:21.492 [2024-07-15 16:07:50.020153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:26:21.492 [2024-07-15 16:07:50.020160] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:26:21.492 [2024-07-15 16:07:50.020167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.492 [2024-07-15 16:07:50.020178] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2363690 is same with the state(5) to be set 00:26:21.492 [2024-07-15 16:07:50.030081] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2363690 (9): Bad file descriptor 00:26:21.492 [2024-07-15 16:07:50.040120] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:26:21.492 16:07:50 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:21.492 16:07:50 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:22.427 16:07:51 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:22.427 16:07:51 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:22.427 16:07:51 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:22.427 16:07:51 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:22.427 16:07:51 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:22.427 16:07:51 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:22.427 16:07:51 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:22.427 [2024-07-15 16:07:51.074243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:26:22.427 [2024-07-15 16:07:51.074281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2363690 with addr=10.0.0.2, port=4420 00:26:22.427 [2024-07-15 16:07:51.074296] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2363690 is same with the state(5) to be set 00:26:22.427 [2024-07-15 16:07:51.074323] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2363690 (9): Bad file descriptor 00:26:22.427 [2024-07-15 16:07:51.074729] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:26:22.428 [2024-07-15 16:07:51.074750] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:26:22.428 [2024-07-15 16:07:51.074759] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:26:22.428 [2024-07-15 16:07:51.074770] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:26:22.428 [2024-07-15 16:07:51.074788] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
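The reconnect cadence traced above -- one retry attempt per second, fast I/O failure after 1 s, controller declared lost after 2 s -- is set by the options this test passed to bdev_nvme_start_discovery. A minimal sketch of the same host-side sequence, assuming rpc_cmd forwards to scripts/rpc.py with the -s socket argument (the flags themselves are verbatim from the trace):

    # start the host app on its own RPC socket, then configure it
    rpc.py -s /tmp/host.sock bdev_nvme_set_options -e 1
    rpc.py -s /tmp/host.sock framework_start_init
    rpc.py -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp \
        -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test \
        --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 \
        --fast-io-fail-timeout-sec 1 --wait-for-attach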
00:26:22.428 [2024-07-15 16:07:51.074799] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:26:22.428 16:07:51 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:22.428 16:07:51 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:22.428 16:07:51 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:23.361 [2024-07-15 16:07:52.077277] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:26:23.361 [2024-07-15 16:07:52.077301] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:26:23.361 [2024-07-15 16:07:52.077309] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:26:23.361 [2024-07-15 16:07:52.077316] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] already in failed state 00:26:23.361 [2024-07-15 16:07:52.077329] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:23.361 [2024-07-15 16:07:52.077347] bdev_nvme.c:6734:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:26:23.361 [2024-07-15 16:07:52.077369] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:23.361 [2024-07-15 16:07:52.077384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:23.361 [2024-07-15 16:07:52.077394] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:23.361 [2024-07-15 16:07:52.077401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:23.361 [2024-07-15 16:07:52.077409] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:23.361 [2024-07-15 16:07:52.077416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:23.362 [2024-07-15 16:07:52.077424] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:23.362 [2024-07-15 16:07:52.077431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:23.362 [2024-07-15 16:07:52.077439] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:26:23.362 [2024-07-15 16:07:52.077446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:23.362 [2024-07-15 16:07:52.077452] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 
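What drives this failure is the interface flap the test performs inside the target namespace: it deletes the target address and downs the link, waits for the controller-loss path above to remove nvme0n1, then restores the address so rediscovery can attach a fresh controller. The exact commands, collected from the trace:

    ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down
    # ... host sees errno 110, reconnects fail, discovery entry is removed ...
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up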
00:26:23.362 [2024-07-15 16:07:52.077549] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2362a80 (9): Bad file descriptor 00:26:23.362 [2024-07-15 16:07:52.078561] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:26:23.362 [2024-07-15 16:07:52.078572] nvme_ctrlr.c:1213:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:26:23.362 16:07:52 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:23.362 16:07:52 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:23.362 16:07:52 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:23.362 16:07:52 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:23.362 16:07:52 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:23.362 16:07:52 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:23.362 16:07:52 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:23.362 16:07:52 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:23.362 16:07:52 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:26:23.362 16:07:52 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:23.362 16:07:52 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:23.362 16:07:52 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:26:23.362 16:07:52 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:23.362 16:07:52 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:23.362 16:07:52 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:23.362 16:07:52 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:23.362 16:07:52 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:23.362 16:07:52 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:23.362 16:07:52 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:23.362 16:07:52 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:23.362 16:07:52 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:26:23.362 16:07:52 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:24.736 16:07:53 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:24.736 16:07:53 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:24.736 16:07:53 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:24.736 16:07:53 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:24.736 16:07:53 nvmf_tcp.nvmf_discovery_remove_ifc -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:26:24.736 16:07:53 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:24.736 16:07:53 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:24.736 16:07:53 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:24.736 16:07:53 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:26:24.736 16:07:53 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:25.302 [2024-07-15 16:07:54.128365] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:26:25.302 [2024-07-15 16:07:54.128382] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:26:25.302 [2024-07-15 16:07:54.128396] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:26:25.302 [2024-07-15 16:07:54.215670] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:26:25.560 16:07:54 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:25.560 16:07:54 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:25.560 16:07:54 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:25.560 16:07:54 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:25.560 16:07:54 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:25.560 16:07:54 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:25.560 16:07:54 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:25.560 16:07:54 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:25.560 16:07:54 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:26:25.560 16:07:54 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:25.560 [2024-07-15 16:07:54.394355] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:26:25.560 [2024-07-15 16:07:54.394389] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:26:25.560 [2024-07-15 16:07:54.394406] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:26:25.560 [2024-07-15 16:07:54.394419] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:26:25.560 [2024-07-15 16:07:54.394425] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:26:25.560 [2024-07-15 16:07:54.397806] bdev_nvme.c:1617:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x23798d0 was disconnected and freed. delete nvme_qpair. 
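Each "sleep 1" iteration above is the test's bdev wait loop: it polls the host's bdev list over the RPC socket until it matches the expected name (nvme0n1 while connected, the empty string after removal, nvme1n1 after rediscovery). Reconstructed as a sketch from the traced commands:

    get_bdev_list() {
        rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
    }
    wait_for_bdev() {
        # poll once per second until the bdev list equals the expected value
        while [[ "$(get_bdev_list)" != "$1" ]]; do
            sleep 1
        done
    }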
00:26:26.493 16:07:55 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:26.493 16:07:55 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:26.493 16:07:55 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:26.493 16:07:55 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:26.493 16:07:55 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:26.493 16:07:55 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:26.493 16:07:55 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:26.493 16:07:55 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:26.493 16:07:55 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:26:26.493 16:07:55 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:26:26.493 16:07:55 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 3884158 00:26:26.493 16:07:55 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@948 -- # '[' -z 3884158 ']' 00:26:26.493 16:07:55 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # kill -0 3884158 00:26:26.493 16:07:55 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # uname 00:26:26.493 16:07:55 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:26:26.493 16:07:55 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3884158 00:26:26.752 16:07:55 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:26:26.752 16:07:55 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:26:26.752 16:07:55 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3884158' 00:26:26.752 killing process with pid 3884158 00:26:26.752 16:07:55 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@967 -- # kill 3884158 00:26:26.752 16:07:55 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # wait 3884158 00:26:26.752 16:07:55 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:26:26.752 16:07:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@488 -- # nvmfcleanup 00:26:26.752 16:07:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@117 -- # sync 00:26:26.752 16:07:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:26:26.752 16:07:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@120 -- # set +e 00:26:26.752 16:07:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # for i in {1..20} 00:26:26.752 16:07:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:26:26.752 rmmod nvme_tcp 00:26:26.752 rmmod nvme_fabrics 00:26:26.752 rmmod nvme_keyring 00:26:27.011 16:07:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:27.011 16:07:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set -e 00:26:27.011 16:07:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # return 0 
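Teardown mirrors setup in reverse. A condensed sketch of the traced cleanup, with killprocess reconstructed from the xtrace above (the namespace delete inside _remove_spdk_ns is an assumption; the trace only shows the call being made):

    killprocess() {
        local pid=$1
        [[ -n $pid ]] || return 1
        kill -0 "$pid" || return 0
        # refuse to kill a sudo wrapper; the trace checks comm= first
        if [[ $(ps --no-headers -o comm= "$pid") != sudo ]]; then
            echo "killing process with pid $pid"
            kill "$pid" && wait "$pid"
        fi
    }

    modprobe -v -r nvme-tcp           # also pulls nvme-fabrics out once unused
    killprocess "$nvmfpid"            # the target started earlier as pid 3883915
    ip netns delete cvl_0_0_ns_spdk   # assumed body of _remove_spdk_ns
    ip -4 addr flush cvl_0_1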
00:26:27.011 16:07:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@489 -- # '[' -n 3883915 ']' 00:26:27.011 16:07:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@490 -- # killprocess 3883915 00:26:27.011 16:07:55 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@948 -- # '[' -z 3883915 ']' 00:26:27.011 16:07:55 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # kill -0 3883915 00:26:27.011 16:07:55 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # uname 00:26:27.011 16:07:55 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:26:27.011 16:07:55 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3883915 00:26:27.011 16:07:55 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:26:27.011 16:07:55 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:26:27.011 16:07:55 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3883915' 00:26:27.011 killing process with pid 3883915 00:26:27.011 16:07:55 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@967 -- # kill 3883915 00:26:27.011 16:07:55 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # wait 3883915 00:26:27.011 16:07:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:26:27.011 16:07:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:26:27.011 16:07:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:26:27.011 16:07:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:27.011 16:07:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:26:27.011 16:07:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:27.011 16:07:55 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:27.011 16:07:55 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:29.539 16:07:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:26:29.539 00:26:29.539 real 0m22.106s 00:26:29.539 user 0m28.962s 00:26:29.539 sys 0m5.245s 00:26:29.539 16:07:57 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:26:29.539 16:07:57 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:29.539 ************************************ 00:26:29.539 END TEST nvmf_discovery_remove_ifc 00:26:29.539 ************************************ 00:26:29.539 16:07:58 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:26:29.539 16:07:58 nvmf_tcp -- nvmf/nvmf.sh@104 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:26:29.539 16:07:58 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:26:29.539 16:07:58 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:29.539 16:07:58 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:29.539 ************************************ 00:26:29.539 START TEST nvmf_identify_kernel_target 00:26:29.539 ************************************ 
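The identify_kernel_nvmf test that starts here builds its host identity from nvme-cli rather than hard-coding it: sourcing nvmf/common.sh runs nvme gen-hostnqn, and the UUID portion of that NQN doubles as the host ID. A minimal sketch of the derivation (the parameter expansion is an assumption; the trace below only shows the resulting values):

    NVME_HOSTNQN=$(nvme gen-hostnqn)    # e.g. nqn.2014-08.org.nvmexpress:uuid:<uuid>
    NVME_HOSTID=${NVME_HOSTNQN##*:}     # strip everything up to the last colon
    NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")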
00:26:29.539 16:07:58 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:26:29.539 * Looking for test storage... 00:26:29.539 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:29.539 16:07:58 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:29.539 16:07:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:26:29.539 16:07:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:29.539 16:07:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:29.539 16:07:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:29.539 16:07:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:29.539 16:07:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:29.539 16:07:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:29.539 16:07:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:29.539 16:07:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:29.539 16:07:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:29.539 16:07:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:29.539 16:07:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:26:29.539 16:07:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:26:29.539 16:07:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:29.539 16:07:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:29.539 16:07:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:29.539 16:07:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:29.539 16:07:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:29.539 16:07:58 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:29.539 16:07:58 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:29.539 16:07:58 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:29.539 16:07:58 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:29.539 16:07:58 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:29.539 16:07:58 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:29.539 16:07:58 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:26:29.539 16:07:58 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:29.539 16:07:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@47 -- # : 0 00:26:29.539 16:07:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:29.539 16:07:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:29.539 16:07:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:29.539 16:07:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:29.539 16:07:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:29.539 16:07:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:29.539 16:07:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:29.539 16:07:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:29.539 16:07:58 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:26:29.539 16:07:58 
nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:26:29.539 16:07:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:29.539 16:07:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:26:29.539 16:07:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:26:29.539 16:07:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:26:29.539 16:07:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:29.539 16:07:58 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:29.539 16:07:58 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:29.539 16:07:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:26:29.539 16:07:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:26:29.539 16:07:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@285 -- # xtrace_disable 00:26:29.539 16:07:58 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:26:34.801 16:08:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:34.801 16:08:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # pci_devs=() 00:26:34.801 16:08:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:34.801 16:08:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:34.801 16:08:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:34.801 16:08:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:34.801 16:08:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:26:34.801 16:08:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # net_devs=() 00:26:34.801 16:08:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:26:34.801 16:08:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # e810=() 00:26:34.801 16:08:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # local -ga e810 00:26:34.801 16:08:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # x722=() 00:26:34.801 16:08:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # local -ga x722 00:26:34.801 16:08:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # mlx=() 00:26:34.801 16:08:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # local -ga mlx 00:26:34.801 16:08:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:34.801 16:08:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:34.801 16:08:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:34.801 16:08:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:34.801 16:08:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:34.801 16:08:03 nvmf_tcp.nvmf_identify_kernel_target -- 
nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:34.801 16:08:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:34.801 16:08:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:34.801 16:08:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:34.801 16:08:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:34.801 16:08:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:34.801 16:08:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:34.801 16:08:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:26:34.801 16:08:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:26:34.801 16:08:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:26:34.801 16:08:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:26:34.801 16:08:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:34.801 16:08:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:34.801 16:08:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:26:34.801 Found 0000:86:00.0 (0x8086 - 0x159b) 00:26:34.801 16:08:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:34.801 16:08:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:34.801 16:08:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:34.801 16:08:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:34.801 16:08:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:34.801 16:08:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:34.801 16:08:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:26:34.801 Found 0000:86:00.1 (0x8086 - 0x159b) 00:26:34.801 16:08:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:34.801 16:08:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:34.801 16:08:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:34.801 16:08:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:34.801 16:08:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:34.801 16:08:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:34.801 16:08:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:26:34.801 16:08:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:26:34.801 16:08:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:34.801 16:08:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:34.801 16:08:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:34.801 16:08:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:34.801 16:08:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:34.801 16:08:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:34.801 16:08:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:34.801 16:08:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:26:34.801 Found net devices under 0000:86:00.0: cvl_0_0 00:26:34.801 16:08:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:34.801 16:08:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:34.801 16:08:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:34.801 16:08:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:34.801 16:08:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:34.801 16:08:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:34.801 16:08:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:34.801 16:08:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:34.801 16:08:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:26:34.801 Found net devices under 0000:86:00.1: cvl_0_1 00:26:34.801 16:08:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:34.801 16:08:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:26:34.801 16:08:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # is_hw=yes 00:26:34.801 16:08:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:26:34.801 16:08:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:26:34.801 16:08:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:26:34.801 16:08:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:34.801 16:08:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:34.801 16:08:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:34.801 16:08:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:26:34.801 16:08:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:34.801 16:08:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:34.801 16:08:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:26:34.801 16:08:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:34.801 16:08:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@243 -- # 
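The scan above resolves each NIC's kernel name purely through sysfs, by globbing the net/ directory under the PCI function. The same lookup stands alone (PCI address taken from this run):

  pci=0000:86:00.0
  for dev in "/sys/bus/pci/devices/$pci/net/"*; do
      echo "Found net device under $pci: ${dev##*/}"   # -> cvl_0_0 here
  done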
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:34.801 16:08:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:26:34.801 16:08:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:26:34.801 16:08:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:26:34.801 16:08:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:34.801 16:08:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:34.801 16:08:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:34.801 16:08:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:26:34.801 16:08:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:34.801 16:08:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:34.801 16:08:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:34.802 16:08:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:26:34.802 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:34.802 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.194 ms 00:26:34.802 00:26:34.802 --- 10.0.0.2 ping statistics --- 00:26:34.802 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:34.802 rtt min/avg/max/mdev = 0.194/0.194/0.194/0.000 ms 00:26:34.802 16:08:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:34.802 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:34.802 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.105 ms 00:26:34.802 00:26:34.802 --- 10.0.0.1 ping statistics --- 00:26:34.802 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:34.802 rtt min/avg/max/mdev = 0.105/0.105/0.105/0.000 ms 00:26:34.802 16:08:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:34.802 16:08:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # return 0 00:26:34.802 16:08:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:26:34.802 16:08:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:34.802 16:08:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:26:34.802 16:08:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:26:34.802 16:08:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:34.802 16:08:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:26:34.802 16:08:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:26:34.802 16:08:03 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:26:34.802 16:08:03 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:26:34.802 16:08:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@741 -- # local ip 00:26:34.802 16:08:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:34.802 16:08:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:34.802 16:08:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:34.802 16:08:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:34.802 16:08:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:34.802 16:08:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:34.802 16:08:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:34.802 16:08:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:34.802 16:08:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:34.802 16:08:03 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:26:34.802 16:08:03 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:26:34.802 16:08:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:26:34.802 16:08:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:26:34.802 16:08:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:26:34.802 16:08:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:26:34.802 16:08:03 
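Condensed from the nvmf_tcp_init trace above: the target port (cvl_0_0, 10.0.0.2) is isolated in its own network namespace while the initiator port (cvl_0_1, 10.0.0.1) stays in the root namespace, and the cross-namespace pings verify the link before any NVMe traffic flows. The same topology by hand:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                 # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # target -> initiator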
nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:26:34.802 16:08:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@639 -- # local block nvme 00:26:34.802 16:08:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:26:34.802 16:08:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@642 -- # modprobe nvmet 00:26:34.802 16:08:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:26:34.802 16:08:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:26:37.333 Waiting for block devices as requested 00:26:37.333 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:26:37.333 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:26:37.333 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:26:37.333 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:26:37.333 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:26:37.333 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:26:37.333 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:26:37.333 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:26:37.591 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:26:37.591 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:26:37.591 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:26:37.850 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:26:37.850 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:26:37.850 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:26:37.850 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:26:38.109 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:26:38.109 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:26:38.109 16:08:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:26:38.109 16:08:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:26:38.109 16:08:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:26:38.109 16:08:06 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:26:38.109 16:08:06 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:26:38.109 16:08:06 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:26:38.109 16:08:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:26:38.109 16:08:06 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:26:38.109 16:08:06 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:26:38.109 No valid GPT data, bailing 00:26:38.369 16:08:07 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:26:38.369 16:08:07 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:26:38.369 16:08:07 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:26:38.369 16:08:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:26:38.369 16:08:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:26:38.369 16:08:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@658 -- # mkdir 
/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:26:38.369 16:08:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:26:38.369 16:08:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:26:38.369 16:08:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:26:38.369 16:08:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # echo 1 00:26:38.369 16:08:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:26:38.369 16:08:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # echo 1 00:26:38.369 16:08:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:26:38.369 16:08:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@672 -- # echo tcp 00:26:38.369 16:08:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # echo 4420 00:26:38.369 16:08:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@674 -- # echo ipv4 00:26:38.369 16:08:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:26:38.369 16:08:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -a 10.0.0.1 -t tcp -s 4420 00:26:38.369 00:26:38.369 Discovery Log Number of Records 2, Generation counter 2 00:26:38.369 =====Discovery Log Entry 0====== 00:26:38.370 trtype: tcp 00:26:38.370 adrfam: ipv4 00:26:38.370 subtype: current discovery subsystem 00:26:38.370 treq: not specified, sq flow control disable supported 00:26:38.370 portid: 1 00:26:38.370 trsvcid: 4420 00:26:38.370 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:26:38.370 traddr: 10.0.0.1 00:26:38.370 eflags: none 00:26:38.370 sectype: none 00:26:38.370 =====Discovery Log Entry 1====== 00:26:38.370 trtype: tcp 00:26:38.370 adrfam: ipv4 00:26:38.370 subtype: nvme subsystem 00:26:38.370 treq: not specified, sq flow control disable supported 00:26:38.370 portid: 1 00:26:38.370 trsvcid: 4420 00:26:38.370 subnqn: nqn.2016-06.io.spdk:testnqn 00:26:38.370 traddr: 10.0.0.1 00:26:38.370 eflags: none 00:26:38.370 sectype: none 00:26:38.370 16:08:07 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:26:38.370 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:26:38.370 EAL: No free 2048 kB hugepages reported on node 1 00:26:38.370 ===================================================== 00:26:38.370 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:26:38.370 ===================================================== 00:26:38.370 Controller Capabilities/Features 00:26:38.370 ================================ 00:26:38.370 Vendor ID: 0000 00:26:38.370 Subsystem Vendor ID: 0000 00:26:38.370 Serial Number: ea04a1b4e8c2190a81f8 00:26:38.370 Model Number: Linux 00:26:38.370 Firmware Version: 6.7.0-68 00:26:38.370 Recommended Arb Burst: 0 00:26:38.370 IEEE OUI Identifier: 00 00 00 00:26:38.370 Multi-path I/O 00:26:38.370 May have multiple subsystem ports: No 00:26:38.370 May have multiple 
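The bare echo commands in the trace above lose their redirection targets (bash xtrace does not print redirections); configure_kernel_target is writing the stock nvmet configfs attributes. A sketch with the assumed attribute paths restored — the model string reappearing verbatim as the Model Number in the identify output below corroborates the attr_model guess:

  nvmet=/sys/kernel/config/nvmet
  subsys=$nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
  mkdir $subsys
  mkdir $subsys/namespaces/1
  mkdir $nvmet/ports/1
  echo SPDK-nqn.2016-06.io.spdk:testnqn > $subsys/attr_model   # assumed target
  echo 1 > $subsys/attr_allow_any_host                         # assumed target
  echo /dev/nvme0n1 > $subsys/namespaces/1/device_path
  echo 1 > $subsys/namespaces/1/enable
  echo 10.0.0.1 > $nvmet/ports/1/addr_traddr
  echo tcp > $nvmet/ports/1/addr_trtype
  echo 4420 > $nvmet/ports/1/addr_trsvcid
  echo ipv4 > $nvmet/ports/1/addr_adrfam
  ln -s $subsys $nvmet/ports/1/subsystems/

The reverse sequence runs at test exit as clean_kernel_target: remove the port symlink, rmdir the namespace, port, and subsystem directories, then modprobe -r nvmet_tcp nvmet.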
controllers: No 00:26:38.370 Associated with SR-IOV VF: No 00:26:38.370 Max Data Transfer Size: Unlimited 00:26:38.370 Max Number of Namespaces: 0 00:26:38.370 Max Number of I/O Queues: 1024 00:26:38.370 NVMe Specification Version (VS): 1.3 00:26:38.370 NVMe Specification Version (Identify): 1.3 00:26:38.370 Maximum Queue Entries: 1024 00:26:38.370 Contiguous Queues Required: No 00:26:38.370 Arbitration Mechanisms Supported 00:26:38.370 Weighted Round Robin: Not Supported 00:26:38.370 Vendor Specific: Not Supported 00:26:38.370 Reset Timeout: 7500 ms 00:26:38.370 Doorbell Stride: 4 bytes 00:26:38.370 NVM Subsystem Reset: Not Supported 00:26:38.370 Command Sets Supported 00:26:38.370 NVM Command Set: Supported 00:26:38.370 Boot Partition: Not Supported 00:26:38.370 Memory Page Size Minimum: 4096 bytes 00:26:38.370 Memory Page Size Maximum: 4096 bytes 00:26:38.370 Persistent Memory Region: Not Supported 00:26:38.370 Optional Asynchronous Events Supported 00:26:38.370 Namespace Attribute Notices: Not Supported 00:26:38.370 Firmware Activation Notices: Not Supported 00:26:38.370 ANA Change Notices: Not Supported 00:26:38.370 PLE Aggregate Log Change Notices: Not Supported 00:26:38.370 LBA Status Info Alert Notices: Not Supported 00:26:38.370 EGE Aggregate Log Change Notices: Not Supported 00:26:38.370 Normal NVM Subsystem Shutdown event: Not Supported 00:26:38.370 Zone Descriptor Change Notices: Not Supported 00:26:38.370 Discovery Log Change Notices: Supported 00:26:38.370 Controller Attributes 00:26:38.370 128-bit Host Identifier: Not Supported 00:26:38.370 Non-Operational Permissive Mode: Not Supported 00:26:38.370 NVM Sets: Not Supported 00:26:38.370 Read Recovery Levels: Not Supported 00:26:38.370 Endurance Groups: Not Supported 00:26:38.370 Predictable Latency Mode: Not Supported 00:26:38.370 Traffic Based Keep ALive: Not Supported 00:26:38.370 Namespace Granularity: Not Supported 00:26:38.370 SQ Associations: Not Supported 00:26:38.370 UUID List: Not Supported 00:26:38.370 Multi-Domain Subsystem: Not Supported 00:26:38.370 Fixed Capacity Management: Not Supported 00:26:38.370 Variable Capacity Management: Not Supported 00:26:38.370 Delete Endurance Group: Not Supported 00:26:38.370 Delete NVM Set: Not Supported 00:26:38.370 Extended LBA Formats Supported: Not Supported 00:26:38.370 Flexible Data Placement Supported: Not Supported 00:26:38.370 00:26:38.370 Controller Memory Buffer Support 00:26:38.370 ================================ 00:26:38.370 Supported: No 00:26:38.370 00:26:38.370 Persistent Memory Region Support 00:26:38.370 ================================ 00:26:38.370 Supported: No 00:26:38.370 00:26:38.370 Admin Command Set Attributes 00:26:38.370 ============================ 00:26:38.370 Security Send/Receive: Not Supported 00:26:38.370 Format NVM: Not Supported 00:26:38.370 Firmware Activate/Download: Not Supported 00:26:38.370 Namespace Management: Not Supported 00:26:38.370 Device Self-Test: Not Supported 00:26:38.370 Directives: Not Supported 00:26:38.370 NVMe-MI: Not Supported 00:26:38.370 Virtualization Management: Not Supported 00:26:38.370 Doorbell Buffer Config: Not Supported 00:26:38.370 Get LBA Status Capability: Not Supported 00:26:38.370 Command & Feature Lockdown Capability: Not Supported 00:26:38.370 Abort Command Limit: 1 00:26:38.370 Async Event Request Limit: 1 00:26:38.370 Number of Firmware Slots: N/A 00:26:38.370 Firmware Slot 1 Read-Only: N/A 00:26:38.370 Firmware Activation Without Reset: N/A 00:26:38.370 Multiple Update Detection Support: N/A 
00:26:38.370 Firmware Update Granularity: No Information Provided 00:26:38.370 Per-Namespace SMART Log: No 00:26:38.370 Asymmetric Namespace Access Log Page: Not Supported 00:26:38.370 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:26:38.370 Command Effects Log Page: Not Supported 00:26:38.370 Get Log Page Extended Data: Supported 00:26:38.370 Telemetry Log Pages: Not Supported 00:26:38.370 Persistent Event Log Pages: Not Supported 00:26:38.370 Supported Log Pages Log Page: May Support 00:26:38.370 Commands Supported & Effects Log Page: Not Supported 00:26:38.370 Feature Identifiers & Effects Log Page:May Support 00:26:38.370 NVMe-MI Commands & Effects Log Page: May Support 00:26:38.370 Data Area 4 for Telemetry Log: Not Supported 00:26:38.370 Error Log Page Entries Supported: 1 00:26:38.370 Keep Alive: Not Supported 00:26:38.370 00:26:38.370 NVM Command Set Attributes 00:26:38.370 ========================== 00:26:38.370 Submission Queue Entry Size 00:26:38.370 Max: 1 00:26:38.370 Min: 1 00:26:38.370 Completion Queue Entry Size 00:26:38.370 Max: 1 00:26:38.370 Min: 1 00:26:38.371 Number of Namespaces: 0 00:26:38.371 Compare Command: Not Supported 00:26:38.371 Write Uncorrectable Command: Not Supported 00:26:38.371 Dataset Management Command: Not Supported 00:26:38.371 Write Zeroes Command: Not Supported 00:26:38.371 Set Features Save Field: Not Supported 00:26:38.371 Reservations: Not Supported 00:26:38.371 Timestamp: Not Supported 00:26:38.371 Copy: Not Supported 00:26:38.371 Volatile Write Cache: Not Present 00:26:38.371 Atomic Write Unit (Normal): 1 00:26:38.371 Atomic Write Unit (PFail): 1 00:26:38.371 Atomic Compare & Write Unit: 1 00:26:38.371 Fused Compare & Write: Not Supported 00:26:38.371 Scatter-Gather List 00:26:38.371 SGL Command Set: Supported 00:26:38.371 SGL Keyed: Not Supported 00:26:38.371 SGL Bit Bucket Descriptor: Not Supported 00:26:38.371 SGL Metadata Pointer: Not Supported 00:26:38.371 Oversized SGL: Not Supported 00:26:38.371 SGL Metadata Address: Not Supported 00:26:38.371 SGL Offset: Supported 00:26:38.371 Transport SGL Data Block: Not Supported 00:26:38.371 Replay Protected Memory Block: Not Supported 00:26:38.371 00:26:38.371 Firmware Slot Information 00:26:38.371 ========================= 00:26:38.371 Active slot: 0 00:26:38.371 00:26:38.371 00:26:38.371 Error Log 00:26:38.371 ========= 00:26:38.371 00:26:38.371 Active Namespaces 00:26:38.371 ================= 00:26:38.371 Discovery Log Page 00:26:38.371 ================== 00:26:38.371 Generation Counter: 2 00:26:38.371 Number of Records: 2 00:26:38.371 Record Format: 0 00:26:38.371 00:26:38.371 Discovery Log Entry 0 00:26:38.371 ---------------------- 00:26:38.371 Transport Type: 3 (TCP) 00:26:38.371 Address Family: 1 (IPv4) 00:26:38.371 Subsystem Type: 3 (Current Discovery Subsystem) 00:26:38.371 Entry Flags: 00:26:38.371 Duplicate Returned Information: 0 00:26:38.371 Explicit Persistent Connection Support for Discovery: 0 00:26:38.371 Transport Requirements: 00:26:38.371 Secure Channel: Not Specified 00:26:38.371 Port ID: 1 (0x0001) 00:26:38.371 Controller ID: 65535 (0xffff) 00:26:38.371 Admin Max SQ Size: 32 00:26:38.371 Transport Service Identifier: 4420 00:26:38.371 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:26:38.371 Transport Address: 10.0.0.1 00:26:38.371 Discovery Log Entry 1 00:26:38.371 ---------------------- 00:26:38.371 Transport Type: 3 (TCP) 00:26:38.371 Address Family: 1 (IPv4) 00:26:38.371 Subsystem Type: 2 (NVM Subsystem) 00:26:38.371 Entry Flags: 
00:26:38.371 Duplicate Returned Information: 0 00:26:38.371 Explicit Persistent Connection Support for Discovery: 0 00:26:38.371 Transport Requirements: 00:26:38.371 Secure Channel: Not Specified 00:26:38.371 Port ID: 1 (0x0001) 00:26:38.371 Controller ID: 65535 (0xffff) 00:26:38.371 Admin Max SQ Size: 32 00:26:38.371 Transport Service Identifier: 4420 00:26:38.371 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:26:38.371 Transport Address: 10.0.0.1 00:26:38.371 16:08:07 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:26:38.371 EAL: No free 2048 kB hugepages reported on node 1 00:26:38.631 get_feature(0x01) failed 00:26:38.631 get_feature(0x02) failed 00:26:38.631 get_feature(0x04) failed 00:26:38.631 ===================================================== 00:26:38.631 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:26:38.631 ===================================================== 00:26:38.631 Controller Capabilities/Features 00:26:38.631 ================================ 00:26:38.631 Vendor ID: 0000 00:26:38.631 Subsystem Vendor ID: 0000 00:26:38.631 Serial Number: 7d27eac27f0d126a85e1 00:26:38.631 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:26:38.631 Firmware Version: 6.7.0-68 00:26:38.631 Recommended Arb Burst: 6 00:26:38.631 IEEE OUI Identifier: 00 00 00 00:26:38.631 Multi-path I/O 00:26:38.631 May have multiple subsystem ports: Yes 00:26:38.631 May have multiple controllers: Yes 00:26:38.631 Associated with SR-IOV VF: No 00:26:38.631 Max Data Transfer Size: Unlimited 00:26:38.631 Max Number of Namespaces: 1024 00:26:38.631 Max Number of I/O Queues: 128 00:26:38.631 NVMe Specification Version (VS): 1.3 00:26:38.631 NVMe Specification Version (Identify): 1.3 00:26:38.631 Maximum Queue Entries: 1024 00:26:38.631 Contiguous Queues Required: No 00:26:38.631 Arbitration Mechanisms Supported 00:26:38.631 Weighted Round Robin: Not Supported 00:26:38.631 Vendor Specific: Not Supported 00:26:38.631 Reset Timeout: 7500 ms 00:26:38.631 Doorbell Stride: 4 bytes 00:26:38.631 NVM Subsystem Reset: Not Supported 00:26:38.631 Command Sets Supported 00:26:38.631 NVM Command Set: Supported 00:26:38.631 Boot Partition: Not Supported 00:26:38.631 Memory Page Size Minimum: 4096 bytes 00:26:38.631 Memory Page Size Maximum: 4096 bytes 00:26:38.631 Persistent Memory Region: Not Supported 00:26:38.631 Optional Asynchronous Events Supported 00:26:38.631 Namespace Attribute Notices: Supported 00:26:38.631 Firmware Activation Notices: Not Supported 00:26:38.631 ANA Change Notices: Supported 00:26:38.631 PLE Aggregate Log Change Notices: Not Supported 00:26:38.631 LBA Status Info Alert Notices: Not Supported 00:26:38.631 EGE Aggregate Log Change Notices: Not Supported 00:26:38.631 Normal NVM Subsystem Shutdown event: Not Supported 00:26:38.631 Zone Descriptor Change Notices: Not Supported 00:26:38.631 Discovery Log Change Notices: Not Supported 00:26:38.631 Controller Attributes 00:26:38.631 128-bit Host Identifier: Supported 00:26:38.631 Non-Operational Permissive Mode: Not Supported 00:26:38.631 NVM Sets: Not Supported 00:26:38.631 Read Recovery Levels: Not Supported 00:26:38.631 Endurance Groups: Not Supported 00:26:38.631 Predictable Latency Mode: Not Supported 00:26:38.631 Traffic Based Keep ALive: Supported 00:26:38.631 Namespace Granularity: Not Supported 
00:26:38.631 SQ Associations: Not Supported 00:26:38.631 UUID List: Not Supported 00:26:38.631 Multi-Domain Subsystem: Not Supported 00:26:38.631 Fixed Capacity Management: Not Supported 00:26:38.631 Variable Capacity Management: Not Supported 00:26:38.631 Delete Endurance Group: Not Supported 00:26:38.631 Delete NVM Set: Not Supported 00:26:38.631 Extended LBA Formats Supported: Not Supported 00:26:38.631 Flexible Data Placement Supported: Not Supported 00:26:38.631 00:26:38.631 Controller Memory Buffer Support 00:26:38.631 ================================ 00:26:38.631 Supported: No 00:26:38.631 00:26:38.631 Persistent Memory Region Support 00:26:38.631 ================================ 00:26:38.631 Supported: No 00:26:38.631 00:26:38.631 Admin Command Set Attributes 00:26:38.631 ============================ 00:26:38.631 Security Send/Receive: Not Supported 00:26:38.631 Format NVM: Not Supported 00:26:38.631 Firmware Activate/Download: Not Supported 00:26:38.631 Namespace Management: Not Supported 00:26:38.631 Device Self-Test: Not Supported 00:26:38.631 Directives: Not Supported 00:26:38.631 NVMe-MI: Not Supported 00:26:38.631 Virtualization Management: Not Supported 00:26:38.631 Doorbell Buffer Config: Not Supported 00:26:38.631 Get LBA Status Capability: Not Supported 00:26:38.631 Command & Feature Lockdown Capability: Not Supported 00:26:38.631 Abort Command Limit: 4 00:26:38.631 Async Event Request Limit: 4 00:26:38.631 Number of Firmware Slots: N/A 00:26:38.631 Firmware Slot 1 Read-Only: N/A 00:26:38.631 Firmware Activation Without Reset: N/A 00:26:38.631 Multiple Update Detection Support: N/A 00:26:38.631 Firmware Update Granularity: No Information Provided 00:26:38.631 Per-Namespace SMART Log: Yes 00:26:38.631 Asymmetric Namespace Access Log Page: Supported 00:26:38.631 ANA Transition Time : 10 sec 00:26:38.631 00:26:38.631 Asymmetric Namespace Access Capabilities 00:26:38.631 ANA Optimized State : Supported 00:26:38.631 ANA Non-Optimized State : Supported 00:26:38.631 ANA Inaccessible State : Supported 00:26:38.631 ANA Persistent Loss State : Supported 00:26:38.631 ANA Change State : Supported 00:26:38.631 ANAGRPID is not changed : No 00:26:38.631 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:26:38.631 00:26:38.631 ANA Group Identifier Maximum : 128 00:26:38.631 Number of ANA Group Identifiers : 128 00:26:38.631 Max Number of Allowed Namespaces : 1024 00:26:38.631 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:26:38.631 Command Effects Log Page: Supported 00:26:38.631 Get Log Page Extended Data: Supported 00:26:38.631 Telemetry Log Pages: Not Supported 00:26:38.631 Persistent Event Log Pages: Not Supported 00:26:38.631 Supported Log Pages Log Page: May Support 00:26:38.631 Commands Supported & Effects Log Page: Not Supported 00:26:38.631 Feature Identifiers & Effects Log Page:May Support 00:26:38.631 NVMe-MI Commands & Effects Log Page: May Support 00:26:38.631 Data Area 4 for Telemetry Log: Not Supported 00:26:38.631 Error Log Page Entries Supported: 128 00:26:38.631 Keep Alive: Supported 00:26:38.631 Keep Alive Granularity: 1000 ms 00:26:38.631 00:26:38.631 NVM Command Set Attributes 00:26:38.631 ========================== 00:26:38.631 Submission Queue Entry Size 00:26:38.631 Max: 64 00:26:38.631 Min: 64 00:26:38.631 Completion Queue Entry Size 00:26:38.632 Max: 16 00:26:38.632 Min: 16 00:26:38.632 Number of Namespaces: 1024 00:26:38.632 Compare Command: Not Supported 00:26:38.632 Write Uncorrectable Command: Not Supported 00:26:38.632 Dataset Management Command: Supported 
00:26:38.632 Write Zeroes Command: Supported 00:26:38.632 Set Features Save Field: Not Supported 00:26:38.632 Reservations: Not Supported 00:26:38.632 Timestamp: Not Supported 00:26:38.632 Copy: Not Supported 00:26:38.632 Volatile Write Cache: Present 00:26:38.632 Atomic Write Unit (Normal): 1 00:26:38.632 Atomic Write Unit (PFail): 1 00:26:38.632 Atomic Compare & Write Unit: 1 00:26:38.632 Fused Compare & Write: Not Supported 00:26:38.632 Scatter-Gather List 00:26:38.632 SGL Command Set: Supported 00:26:38.632 SGL Keyed: Not Supported 00:26:38.632 SGL Bit Bucket Descriptor: Not Supported 00:26:38.632 SGL Metadata Pointer: Not Supported 00:26:38.632 Oversized SGL: Not Supported 00:26:38.632 SGL Metadata Address: Not Supported 00:26:38.632 SGL Offset: Supported 00:26:38.632 Transport SGL Data Block: Not Supported 00:26:38.632 Replay Protected Memory Block: Not Supported 00:26:38.632 00:26:38.632 Firmware Slot Information 00:26:38.632 ========================= 00:26:38.632 Active slot: 0 00:26:38.632 00:26:38.632 Asymmetric Namespace Access 00:26:38.632 =========================== 00:26:38.632 Change Count : 0 00:26:38.632 Number of ANA Group Descriptors : 1 00:26:38.632 ANA Group Descriptor : 0 00:26:38.632 ANA Group ID : 1 00:26:38.632 Number of NSID Values : 1 00:26:38.632 Change Count : 0 00:26:38.632 ANA State : 1 00:26:38.632 Namespace Identifier : 1 00:26:38.632 00:26:38.632 Commands Supported and Effects 00:26:38.632 ============================== 00:26:38.632 Admin Commands 00:26:38.632 -------------- 00:26:38.632 Get Log Page (02h): Supported 00:26:38.632 Identify (06h): Supported 00:26:38.632 Abort (08h): Supported 00:26:38.632 Set Features (09h): Supported 00:26:38.632 Get Features (0Ah): Supported 00:26:38.632 Asynchronous Event Request (0Ch): Supported 00:26:38.632 Keep Alive (18h): Supported 00:26:38.632 I/O Commands 00:26:38.632 ------------ 00:26:38.632 Flush (00h): Supported 00:26:38.632 Write (01h): Supported LBA-Change 00:26:38.632 Read (02h): Supported 00:26:38.632 Write Zeroes (08h): Supported LBA-Change 00:26:38.632 Dataset Management (09h): Supported 00:26:38.632 00:26:38.632 Error Log 00:26:38.632 ========= 00:26:38.632 Entry: 0 00:26:38.632 Error Count: 0x3 00:26:38.632 Submission Queue Id: 0x0 00:26:38.632 Command Id: 0x5 00:26:38.632 Phase Bit: 0 00:26:38.632 Status Code: 0x2 00:26:38.632 Status Code Type: 0x0 00:26:38.632 Do Not Retry: 1 00:26:38.632 Error Location: 0x28 00:26:38.632 LBA: 0x0 00:26:38.632 Namespace: 0x0 00:26:38.632 Vendor Log Page: 0x0 00:26:38.632 ----------- 00:26:38.632 Entry: 1 00:26:38.632 Error Count: 0x2 00:26:38.632 Submission Queue Id: 0x0 00:26:38.632 Command Id: 0x5 00:26:38.632 Phase Bit: 0 00:26:38.632 Status Code: 0x2 00:26:38.632 Status Code Type: 0x0 00:26:38.632 Do Not Retry: 1 00:26:38.632 Error Location: 0x28 00:26:38.632 LBA: 0x0 00:26:38.632 Namespace: 0x0 00:26:38.632 Vendor Log Page: 0x0 00:26:38.632 ----------- 00:26:38.632 Entry: 2 00:26:38.632 Error Count: 0x1 00:26:38.632 Submission Queue Id: 0x0 00:26:38.632 Command Id: 0x4 00:26:38.632 Phase Bit: 0 00:26:38.632 Status Code: 0x2 00:26:38.632 Status Code Type: 0x0 00:26:38.632 Do Not Retry: 1 00:26:38.632 Error Location: 0x28 00:26:38.632 LBA: 0x0 00:26:38.632 Namespace: 0x0 00:26:38.632 Vendor Log Page: 0x0 00:26:38.632 00:26:38.632 Number of Queues 00:26:38.632 ================ 00:26:38.632 Number of I/O Submission Queues: 128 00:26:38.632 Number of I/O Completion Queues: 128 00:26:38.632 00:26:38.632 ZNS Specific Controller Data 00:26:38.632 
============================ 00:26:38.632 Zone Append Size Limit: 0 00:26:38.632 00:26:38.632 00:26:38.632 Active Namespaces 00:26:38.632 ================= 00:26:38.632 get_feature(0x05) failed 00:26:38.632 Namespace ID:1 00:26:38.632 Command Set Identifier: NVM (00h) 00:26:38.632 Deallocate: Supported 00:26:38.632 Deallocated/Unwritten Error: Not Supported 00:26:38.632 Deallocated Read Value: Unknown 00:26:38.632 Deallocate in Write Zeroes: Not Supported 00:26:38.632 Deallocated Guard Field: 0xFFFF 00:26:38.632 Flush: Supported 00:26:38.632 Reservation: Not Supported 00:26:38.632 Namespace Sharing Capabilities: Multiple Controllers 00:26:38.632 Size (in LBAs): 1953525168 (931GiB) 00:26:38.632 Capacity (in LBAs): 1953525168 (931GiB) 00:26:38.632 Utilization (in LBAs): 1953525168 (931GiB) 00:26:38.632 UUID: f3603b6a-4cbd-4053-be76-8ea9fa259b90 00:26:38.632 Thin Provisioning: Not Supported 00:26:38.632 Per-NS Atomic Units: Yes 00:26:38.632 Atomic Boundary Size (Normal): 0 00:26:38.632 Atomic Boundary Size (PFail): 0 00:26:38.632 Atomic Boundary Offset: 0 00:26:38.632 NGUID/EUI64 Never Reused: No 00:26:38.632 ANA group ID: 1 00:26:38.632 Namespace Write Protected: No 00:26:38.632 Number of LBA Formats: 1 00:26:38.632 Current LBA Format: LBA Format #00 00:26:38.632 LBA Format #00: Data Size: 512 Metadata Size: 0 00:26:38.632 00:26:38.632 16:08:07 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:26:38.632 16:08:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:26:38.632 16:08:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # sync 00:26:38.632 16:08:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:26:38.632 16:08:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@120 -- # set +e 00:26:38.632 16:08:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:26:38.632 16:08:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:26:38.632 rmmod nvme_tcp 00:26:38.632 rmmod nvme_fabrics 00:26:38.632 16:08:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:38.632 16:08:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set -e 00:26:38.632 16:08:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # return 0 00:26:38.632 16:08:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:26:38.632 16:08:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:26:38.632 16:08:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:26:38.632 16:08:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:26:38.632 16:08:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:38.632 16:08:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:26:38.632 16:08:07 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:38.632 16:08:07 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:38.632 16:08:07 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:40.554 16:08:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:26:40.554 
16:08:09 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:26:40.554 16:08:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:26:40.554 16:08:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # echo 0 00:26:40.554 16:08:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:26:40.554 16:08:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:26:40.554 16:08:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:26:40.554 16:08:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:26:40.554 16:08:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:26:40.554 16:08:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:26:40.835 16:08:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:26:43.360 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:26:43.360 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:26:43.360 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:26:43.360 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:26:43.360 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:26:43.360 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:26:43.360 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:26:43.360 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:26:43.360 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:26:43.360 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:26:43.360 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:26:43.360 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:26:43.360 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:26:43.360 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:26:43.360 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:26:43.360 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:26:44.294 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:26:44.294 00:26:44.294 real 0m15.014s 00:26:44.294 user 0m3.562s 00:26:44.294 sys 0m7.749s 00:26:44.294 16:08:13 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:26:44.294 16:08:13 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:26:44.294 ************************************ 00:26:44.294 END TEST nvmf_identify_kernel_target 00:26:44.294 ************************************ 00:26:44.294 16:08:13 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:26:44.294 16:08:13 nvmf_tcp -- nvmf/nvmf.sh@105 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:26:44.294 16:08:13 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:26:44.294 16:08:13 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:44.294 16:08:13 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:44.294 ************************************ 00:26:44.294 START TEST nvmf_auth_host 00:26:44.294 ************************************ 00:26:44.294 16:08:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1123 -- # 
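The START TEST/END TEST banners and the real/user/sys timing summary come from the harness's run_test wrapper; a minimal sketch of such a wrapper (illustrative, not the verbatim autotest_common.sh helper):

  run_test() {
      local name=$1; shift
      echo "************************************"
      echo "START TEST $name"
      echo "************************************"
      time "$@"                 # prints the real/user/sys lines seen above
      echo "************************************"
      echo "END TEST $name"
      echo "************************************"
  }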
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:26:44.552 * Looking for test storage... 00:26:44.552 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:44.552 16:08:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:44.552 16:08:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:26:44.552 16:08:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:44.552 16:08:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:44.552 16:08:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:44.552 16:08:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:44.552 16:08:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:44.552 16:08:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:44.552 16:08:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:44.552 16:08:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:44.552 16:08:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:44.552 16:08:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:44.552 16:08:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:26:44.552 16:08:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:26:44.552 16:08:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:44.552 16:08:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:44.552 16:08:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:44.552 16:08:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:44.552 16:08:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:44.552 16:08:13 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:44.552 16:08:13 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:44.552 16:08:13 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:44.552 16:08:13 nvmf_tcp.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:44.552 16:08:13 nvmf_tcp.nvmf_auth_host -- paths/export.sh@3 -- # 
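The host NQN threaded through the nvme commands here is the uuid form emitted by nvme gen-hostnqn; nvme-cli derives it from a UUID (implementations vary between a random uuid and the host's DMI uuid), so by hand it is roughly:

  # rough equivalent of 'nvme gen-hostnqn' (random-uuid path)
  uuid=$(cat /proc/sys/kernel/random/uuid)
  echo "nqn.2014-08.org.nvmexpress:uuid:${uuid}"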
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:44.552 16:08:13 nvmf_tcp.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:44.552 16:08:13 nvmf_tcp.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:26:44.552 16:08:13 nvmf_tcp.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:44.552 16:08:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@47 -- # : 0 00:26:44.552 16:08:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:44.552 16:08:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:44.552 16:08:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:44.552 16:08:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:44.552 16:08:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:44.552 16:08:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:44.552 16:08:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:44.552 16:08:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:44.552 16:08:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:26:44.552 16:08:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:26:44.552 16:08:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:26:44.552 16:08:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:26:44.552 16:08:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:26:44.552 16:08:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:26:44.552 16:08:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:26:44.552 16:08:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@21 -- # 
ckeys=() 00:26:44.552 16:08:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:26:44.552 16:08:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:26:44.552 16:08:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:44.552 16:08:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:26:44.552 16:08:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:26:44.552 16:08:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:26:44.552 16:08:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:44.552 16:08:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:44.552 16:08:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:44.552 16:08:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:26:44.552 16:08:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:26:44.552 16:08:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@285 -- # xtrace_disable 00:26:44.552 16:08:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:49.811 16:08:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:49.811 16:08:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@291 -- # pci_devs=() 00:26:49.811 16:08:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:49.811 16:08:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:49.811 16:08:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:49.811 16:08:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:49.811 16:08:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@293 -- # local -A pci_drivers 00:26:49.811 16:08:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@295 -- # net_devs=() 00:26:49.811 16:08:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@295 -- # local -ga net_devs 00:26:49.811 16:08:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@296 -- # e810=() 00:26:49.811 16:08:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@296 -- # local -ga e810 00:26:49.811 16:08:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@297 -- # x722=() 00:26:49.811 16:08:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@297 -- # local -ga x722 00:26:49.811 16:08:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@298 -- # mlx=() 00:26:49.811 16:08:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@298 -- # local -ga mlx 00:26:49.811 16:08:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:49.811 16:08:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:49.811 16:08:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:49.811 16:08:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:49.811 16:08:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:49.811 16:08:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:49.811 16:08:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:49.811 16:08:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:49.811 
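auth.sh declares its digest and DH-group sets up front; presumably the run later sweeps the full matrix of combinations. A hypothetical sketch of such a sweep (the loop body is illustrative only):

  digests=("sha256" "sha384" "sha512")
  dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192")
  for digest in "${digests[@]}"; do
      for dhgroup in "${dhgroups[@]}"; do
          echo "DH-HMAC-CHAP case: digest=$digest dhgroup=$dhgroup"
      done
  done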
16:08:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:49.811 16:08:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:49.811 16:08:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:49.811 16:08:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:49.811 16:08:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:26:49.811 16:08:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:26:49.811 16:08:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:26:49.811 16:08:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:26:49.811 16:08:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:49.811 16:08:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:49.811 16:08:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:26:49.811 Found 0000:86:00.0 (0x8086 - 0x159b) 00:26:49.811 16:08:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:49.811 16:08:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:49.811 16:08:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:49.811 16:08:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:49.811 16:08:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:49.811 16:08:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:49.811 16:08:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:26:49.811 Found 0000:86:00.1 (0x8086 - 0x159b) 00:26:49.811 16:08:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:49.811 16:08:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:49.811 16:08:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:49.811 16:08:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:49.811 16:08:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:49.811 16:08:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:49.811 16:08:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:26:49.811 16:08:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:26:49.811 16:08:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:49.811 16:08:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:49.811 16:08:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:49.811 16:08:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:49.811 16:08:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:49.811 16:08:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:49.811 16:08:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:49.811 16:08:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:26:49.811 Found net devices under 0000:86:00.0: 
cvl_0_0 00:26:49.811 16:08:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:49.811 16:08:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:49.811 16:08:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:49.811 16:08:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:49.811 16:08:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:49.811 16:08:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:49.811 16:08:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:49.811 16:08:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:49.811 16:08:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:26:49.811 Found net devices under 0000:86:00.1: cvl_0_1 00:26:49.811 16:08:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:49.811 16:08:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:26:49.811 16:08:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # is_hw=yes 00:26:49.811 16:08:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:26:49.811 16:08:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:26:49.811 16:08:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:26:49.811 16:08:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:49.811 16:08:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:49.811 16:08:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:49.811 16:08:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:26:49.811 16:08:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:49.811 16:08:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:49.811 16:08:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:26:49.811 16:08:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:49.811 16:08:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:49.811 16:08:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:26:49.811 16:08:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:26:49.811 16:08:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:26:49.811 16:08:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:49.811 16:08:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:49.811 16:08:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:49.811 16:08:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:26:49.811 16:08:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:49.811 16:08:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:49.812 16:08:18 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:49.812 16:08:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:26:49.812 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:49.812 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.210 ms 00:26:49.812 00:26:49.812 --- 10.0.0.2 ping statistics --- 00:26:49.812 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:49.812 rtt min/avg/max/mdev = 0.210/0.210/0.210/0.000 ms 00:26:49.812 16:08:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:49.812 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:49.812 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.206 ms 00:26:49.812 00:26:49.812 --- 10.0.0.1 ping statistics --- 00:26:49.812 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:49.812 rtt min/avg/max/mdev = 0.206/0.206/0.206/0.000 ms 00:26:49.812 16:08:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:49.812 16:08:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@422 -- # return 0 00:26:49.812 16:08:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:26:49.812 16:08:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:49.812 16:08:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:26:49.812 16:08:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:26:49.812 16:08:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:49.812 16:08:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:26:49.812 16:08:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:26:49.812 16:08:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:26:49.812 16:08:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:26:49.812 16:08:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@722 -- # xtrace_disable 00:26:49.812 16:08:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:49.812 16:08:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:26:49.812 16:08:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@481 -- # nvmfpid=3895906 00:26:49.812 16:08:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@482 -- # waitforlisten 3895906 00:26:49.812 16:08:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@829 -- # '[' -z 3895906 ']' 00:26:49.812 16:08:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:49.812 16:08:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:49.812 16:08:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
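[annotation] The nvmf_tcp_init sequence traced above splits the NIC's two ports across namespaces: cvl_0_0 moves into a private namespace as the target side (10.0.0.2) while cvl_0_1 stays in the default namespace as the initiator (10.0.0.1), with an iptables rule opening TCP/4420 and a ping in each direction as a sanity check. The same topology, condensed into a stand-alone sketch (interface, namespace, and address values taken from this run):

ip netns add cvl_0_0_ns_spdk                                   # private namespace for the target port
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                      # move the first port into it
ip addr add 10.0.0.1/24 dev cvl_0_1                            # initiator address, default namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open NVMe/TCP on the initiator-side interface
ping -c 1 10.0.0.2                                             # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1               # target -> initiator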
00:26:49.812 16:08:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:49.812 16:08:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:50.742 16:08:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:50.742 16:08:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@862 -- # return 0 00:26:50.742 16:08:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:26:50.742 16:08:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@728 -- # xtrace_disable 00:26:50.742 16:08:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:50.742 16:08:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:50.742 16:08:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:26:50.742 16:08:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:26:50.742 16:08:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:26:50.742 16:08:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:26:50.742 16:08:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:26:50.742 16:08:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:26:50.742 16:08:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:26:50.742 16:08:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:26:50.742 16:08:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=1ea93451401f53c1fb6eecb007d6e63d 00:26:50.742 16:08:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:26:50.742 16:08:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.aEo 00:26:50.742 16:08:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 1ea93451401f53c1fb6eecb007d6e63d 0 00:26:50.742 16:08:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 1ea93451401f53c1fb6eecb007d6e63d 0 00:26:50.742 16:08:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:26:50.742 16:08:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:26:50.742 16:08:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=1ea93451401f53c1fb6eecb007d6e63d 00:26:50.742 16:08:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:26:50.742 16:08:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:26:50.742 16:08:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.aEo 00:26:50.742 16:08:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.aEo 00:26:50.742 16:08:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.aEo 00:26:50.742 16:08:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:26:50.742 16:08:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:26:50.742 16:08:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:26:50.742 16:08:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:26:50.742 16:08:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:26:50.742 
16:08:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:26:50.742 16:08:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:26:50.742 16:08:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=8e7792c58fa50b824da37003358946383f0535824017fd4ebc26793539290f31 00:26:50.742 16:08:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:26:50.742 16:08:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.0vB 00:26:50.742 16:08:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 8e7792c58fa50b824da37003358946383f0535824017fd4ebc26793539290f31 3 00:26:50.742 16:08:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 8e7792c58fa50b824da37003358946383f0535824017fd4ebc26793539290f31 3 00:26:50.742 16:08:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:26:50.742 16:08:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:26:50.742 16:08:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=8e7792c58fa50b824da37003358946383f0535824017fd4ebc26793539290f31 00:26:50.742 16:08:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:26:50.742 16:08:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:26:50.742 16:08:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.0vB 00:26:50.742 16:08:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.0vB 00:26:50.742 16:08:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.0vB 00:26:50.742 16:08:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:26:50.742 16:08:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:26:50.742 16:08:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:26:50.742 16:08:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:26:50.742 16:08:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:26:50.742 16:08:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:26:50.742 16:08:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:26:50.742 16:08:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=366a7fa7230ebf74350fefc993f8272416a422f364b64ac9 00:26:50.742 16:08:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:26:50.742 16:08:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.v4D 00:26:50.742 16:08:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 366a7fa7230ebf74350fefc993f8272416a422f364b64ac9 0 00:26:50.742 16:08:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 366a7fa7230ebf74350fefc993f8272416a422f364b64ac9 0 00:26:50.742 16:08:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:26:50.742 16:08:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:26:50.742 16:08:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=366a7fa7230ebf74350fefc993f8272416a422f364b64ac9 00:26:50.742 16:08:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:26:50.742 16:08:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:26:50.742 16:08:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.v4D 00:26:50.742 16:08:19 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.v4D 00:26:50.742 16:08:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.v4D 00:26:50.742 16:08:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:26:50.742 16:08:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:26:50.742 16:08:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:26:50.742 16:08:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:26:50.742 16:08:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:26:50.742 16:08:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:26:50.742 16:08:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:26:50.742 16:08:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=bce17bf4f555c38e6791e511c782161b4d26fe287897020d 00:26:50.742 16:08:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:26:50.742 16:08:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.Znw 00:26:50.742 16:08:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key bce17bf4f555c38e6791e511c782161b4d26fe287897020d 2 00:26:50.742 16:08:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 bce17bf4f555c38e6791e511c782161b4d26fe287897020d 2 00:26:50.742 16:08:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:26:50.742 16:08:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:26:50.742 16:08:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=bce17bf4f555c38e6791e511c782161b4d26fe287897020d 00:26:50.742 16:08:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:26:50.742 16:08:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:26:51.000 16:08:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.Znw 00:26:51.000 16:08:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.Znw 00:26:51.000 16:08:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.Znw 00:26:51.000 16:08:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:26:51.000 16:08:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:26:51.000 16:08:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:26:51.000 16:08:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:26:51.000 16:08:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:26:51.000 16:08:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:26:51.000 16:08:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:26:51.000 16:08:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=1081822ffadfa79f629b6642954dd3d1 00:26:51.000 16:08:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:26:51.000 16:08:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.4nB 00:26:51.000 16:08:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 1081822ffadfa79f629b6642954dd3d1 1 00:26:51.000 16:08:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 1081822ffadfa79f629b6642954dd3d1 1 
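[annotation] Every gen_dhchap_key call in this stretch follows one pattern: draw len/2 random bytes from /dev/urandom as a hex string, wrap that string in the DHHC-1 secret representation, and stash it in a 0600 temp file. A minimal stand-alone sketch of the formatting step; it assumes the DHHC-1 payload is base64(key bytes || CRC-32 of the key bytes) as in the NVMe DH-HMAC-CHAP secret format, with the CRC byte order an assumption here:

key=$(xxd -p -c0 -l 16 /dev/urandom)          # 32 hex chars; digest id 0 = null, as in gen_dhchap_key null 32
python3 - "$key" <<'EOF'
import base64, sys, zlib
key = sys.argv[1].encode()                    # the hex string itself is the secret payload
crc = zlib.crc32(key).to_bytes(4, 'little')   # trailing CRC-32; little-endian byte order assumed
print('DHHC-1:00:' + base64.b64encode(key + crc).decode() + ':')
EOF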
00:26:51.000 16:08:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:26:51.000 16:08:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:26:51.000 16:08:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=1081822ffadfa79f629b6642954dd3d1 00:26:51.000 16:08:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:26:51.000 16:08:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:26:51.000 16:08:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.4nB 00:26:51.000 16:08:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.4nB 00:26:51.000 16:08:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.4nB 00:26:51.000 16:08:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:26:51.000 16:08:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:26:51.000 16:08:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:26:51.000 16:08:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:26:51.000 16:08:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:26:51.000 16:08:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:26:51.000 16:08:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:26:51.000 16:08:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=43dfd74a9f5cd20ddc86ced8650a2740 00:26:51.000 16:08:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:26:51.000 16:08:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.dHr 00:26:51.000 16:08:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 43dfd74a9f5cd20ddc86ced8650a2740 1 00:26:51.000 16:08:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 43dfd74a9f5cd20ddc86ced8650a2740 1 00:26:51.000 16:08:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:26:51.000 16:08:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:26:51.000 16:08:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=43dfd74a9f5cd20ddc86ced8650a2740 00:26:51.000 16:08:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:26:51.000 16:08:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:26:51.000 16:08:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.dHr 00:26:51.000 16:08:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.dHr 00:26:51.000 16:08:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.dHr 00:26:51.000 16:08:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:26:51.000 16:08:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:26:51.000 16:08:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:26:51.000 16:08:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:26:51.000 16:08:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:26:51.000 16:08:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:26:51.000 16:08:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:26:51.000 16:08:19 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@727 -- # key=e7a27c42f781e2771433c89ef49c692fa2470eb900bd37df 00:26:51.000 16:08:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:26:51.000 16:08:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.zgV 00:26:51.000 16:08:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key e7a27c42f781e2771433c89ef49c692fa2470eb900bd37df 2 00:26:51.000 16:08:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 e7a27c42f781e2771433c89ef49c692fa2470eb900bd37df 2 00:26:51.000 16:08:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:26:51.000 16:08:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:26:51.000 16:08:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=e7a27c42f781e2771433c89ef49c692fa2470eb900bd37df 00:26:51.000 16:08:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:26:51.000 16:08:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:26:51.000 16:08:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.zgV 00:26:51.000 16:08:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.zgV 00:26:51.000 16:08:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.zgV 00:26:51.000 16:08:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:26:51.000 16:08:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:26:51.000 16:08:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:26:51.000 16:08:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:26:51.000 16:08:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:26:51.000 16:08:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:26:51.000 16:08:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:26:51.000 16:08:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=73bdc0457c5275d793748b68a9119c39 00:26:51.000 16:08:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:26:51.000 16:08:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.2aC 00:26:51.000 16:08:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 73bdc0457c5275d793748b68a9119c39 0 00:26:51.000 16:08:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 73bdc0457c5275d793748b68a9119c39 0 00:26:51.000 16:08:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:26:51.000 16:08:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:26:51.000 16:08:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=73bdc0457c5275d793748b68a9119c39 00:26:51.000 16:08:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:26:51.000 16:08:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:26:51.000 16:08:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.2aC 00:26:51.000 16:08:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.2aC 00:26:51.000 16:08:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.2aC 00:26:51.000 16:08:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:26:51.000 16:08:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local 
digest len file key 00:26:51.000 16:08:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:26:51.000 16:08:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:26:51.000 16:08:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:26:51.000 16:08:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:26:51.000 16:08:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:26:51.000 16:08:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=035f8e128cc209389594da300f73d66c49d8cc957f4fdf63a5a25f6a0df3db8a 00:26:51.257 16:08:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:26:51.257 16:08:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.mWI 00:26:51.257 16:08:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 035f8e128cc209389594da300f73d66c49d8cc957f4fdf63a5a25f6a0df3db8a 3 00:26:51.257 16:08:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 035f8e128cc209389594da300f73d66c49d8cc957f4fdf63a5a25f6a0df3db8a 3 00:26:51.257 16:08:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:26:51.257 16:08:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:26:51.257 16:08:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=035f8e128cc209389594da300f73d66c49d8cc957f4fdf63a5a25f6a0df3db8a 00:26:51.257 16:08:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:26:51.257 16:08:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:26:51.257 16:08:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.mWI 00:26:51.257 16:08:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.mWI 00:26:51.257 16:08:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.mWI 00:26:51.257 16:08:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:26:51.257 16:08:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 3895906 00:26:51.257 16:08:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@829 -- # '[' -z 3895906 ']' 00:26:51.257 16:08:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:51.257 16:08:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:51.257 16:08:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:51.257 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
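[annotation] With keys[0..4] and their controller counterparts ckeys[0..3] written out, the loop traced below registers every key file with the SPDK keyring so the bdev_nvme RPCs can reference them by name. It boils down to one call per file (paths from this run; rpc_cmd is the test harness wrapper around scripts/rpc.py):

./scripts/rpc.py keyring_file_add_key key0  /tmp/spdk.key-null.aEo
./scripts/rpc.py keyring_file_add_key ckey0 /tmp/spdk.key-sha512.0vB
./scripts/rpc.py keyring_file_add_key key1  /tmp/spdk.key-null.v4D
./scripts/rpc.py keyring_file_add_key ckey1 /tmp/spdk.key-sha384.Znw
# ...and so on through key4; ckeys[4] is empty, so its registration is skipped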
00:26:51.257 16:08:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:51.257 16:08:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:51.257 16:08:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:51.257 16:08:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@862 -- # return 0 00:26:51.257 16:08:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:26:51.257 16:08:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.aEo 00:26:51.257 16:08:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:51.257 16:08:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:51.257 16:08:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:51.257 16:08:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.0vB ]] 00:26:51.257 16:08:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.0vB 00:26:51.257 16:08:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:51.257 16:08:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:51.257 16:08:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:51.257 16:08:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:26:51.257 16:08:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.v4D 00:26:51.257 16:08:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:51.257 16:08:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:51.515 16:08:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:51.515 16:08:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.Znw ]] 00:26:51.515 16:08:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.Znw 00:26:51.515 16:08:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:51.515 16:08:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:51.515 16:08:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:51.515 16:08:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:26:51.515 16:08:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.4nB 00:26:51.515 16:08:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:51.515 16:08:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:51.515 16:08:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:51.515 16:08:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.dHr ]] 00:26:51.515 16:08:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.dHr 00:26:51.515 16:08:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:51.515 16:08:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:51.515 16:08:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:51.515 16:08:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 
00:26:51.515 16:08:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.zgV 00:26:51.515 16:08:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:51.515 16:08:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:51.515 16:08:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:51.515 16:08:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.2aC ]] 00:26:51.515 16:08:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.2aC 00:26:51.515 16:08:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:51.515 16:08:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:51.515 16:08:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:51.515 16:08:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:26:51.515 16:08:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.mWI 00:26:51.515 16:08:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:51.515 16:08:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:51.515 16:08:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:51.515 16:08:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:26:51.515 16:08:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:26:51.515 16:08:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:26:51.515 16:08:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:51.515 16:08:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:51.515 16:08:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:51.515 16:08:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:51.515 16:08:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:51.515 16:08:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:51.515 16:08:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:51.515 16:08:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:51.515 16:08:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:51.515 16:08:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:51.515 16:08:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:26:51.515 16:08:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@632 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:26:51.515 16:08:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:26:51.515 16:08:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:26:51.515 16:08:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:26:51.515 16:08:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:26:51.515 16:08:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@639 -- # local block nvme 
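[annotation] configure_kernel_target, continuing below, hand-builds a kernel-mode nvmet subsystem over configfs so the SPDK host stack has a target to authenticate against. A condensed sketch of that sequence; the NQN, device, and address values are from this run, while the attribute file names are assumed from the standard kernel nvmet configfs layout (the xtrace output shows only the echoed values, not the redirection targets):

modprobe nvmet
cd /sys/kernel/config/nvmet
mkdir -p subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 ports/1
echo /dev/nvme0n1 > subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1/device_path
echo 1 > subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1/enable
echo 10.0.0.1 > ports/1/addr_traddr
echo tcp > ports/1/addr_trtype
echo 4420 > ports/1/addr_trsvcid
echo ipv4 > ports/1/addr_adrfam
ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ports/1/subsystems/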
00:26:51.515 16:08:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:26:51.516 16:08:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@642 -- # modprobe nvmet 00:26:51.516 16:08:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:26:51.516 16:08:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:26:54.045 Waiting for block devices as requested 00:26:54.045 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:26:54.045 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:26:54.045 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:26:54.045 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:26:54.045 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:26:54.045 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:26:54.302 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:26:54.302 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:26:54.302 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:26:54.302 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:26:54.560 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:26:54.560 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:26:54.560 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:26:54.817 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:26:54.817 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:26:54.817 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:26:54.817 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:26:55.383 16:08:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:26:55.383 16:08:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:26:55.383 16:08:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:26:55.383 16:08:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:26:55.383 16:08:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:26:55.383 16:08:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:26:55.383 16:08:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:26:55.383 16:08:24 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:26:55.383 16:08:24 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:26:55.383 No valid GPT data, bailing 00:26:55.383 16:08:24 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:26:55.640 16:08:24 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:26:55.640 16:08:24 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:26:55.640 16:08:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:26:55.640 16:08:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:26:55.640 16:08:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:26:55.640 16:08:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:26:55.640 16:08:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:26:55.640 16:08:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@665 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:26:55.640 16:08:24 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@667 -- # echo 1 00:26:55.640 16:08:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:26:55.640 16:08:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@669 -- # echo 1 00:26:55.640 16:08:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:26:55.640 16:08:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@672 -- # echo tcp 00:26:55.640 16:08:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@673 -- # echo 4420 00:26:55.640 16:08:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@674 -- # echo ipv4 00:26:55.640 16:08:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:26:55.641 16:08:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -a 10.0.0.1 -t tcp -s 4420 00:26:55.641 00:26:55.641 Discovery Log Number of Records 2, Generation counter 2 00:26:55.641 =====Discovery Log Entry 0====== 00:26:55.641 trtype: tcp 00:26:55.641 adrfam: ipv4 00:26:55.641 subtype: current discovery subsystem 00:26:55.641 treq: not specified, sq flow control disable supported 00:26:55.641 portid: 1 00:26:55.641 trsvcid: 4420 00:26:55.641 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:26:55.641 traddr: 10.0.0.1 00:26:55.641 eflags: none 00:26:55.641 sectype: none 00:26:55.641 =====Discovery Log Entry 1====== 00:26:55.641 trtype: tcp 00:26:55.641 adrfam: ipv4 00:26:55.641 subtype: nvme subsystem 00:26:55.641 treq: not specified, sq flow control disable supported 00:26:55.641 portid: 1 00:26:55.641 trsvcid: 4420 00:26:55.641 subnqn: nqn.2024-02.io.spdk:cnode0 00:26:55.641 traddr: 10.0.0.1 00:26:55.641 eflags: none 00:26:55.641 sectype: none 00:26:55.641 16:08:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:26:55.641 16:08:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:26:55.641 16:08:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:26:55.641 16:08:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:26:55.641 16:08:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:55.641 16:08:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:55.641 16:08:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:55.641 16:08:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:55.641 16:08:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzY2YTdmYTcyMzBlYmY3NDM1MGZlZmM5OTNmODI3MjQxNmE0MjJmMzY0YjY0YWM5bmI3Ng==: 00:26:55.641 16:08:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YmNlMTdiZjRmNTU1YzM4ZTY3OTFlNTExYzc4MjE2MWI0ZDI2ZmUyODc4OTcwMjBkWICT/g==: 00:26:55.641 16:08:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:55.641 16:08:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:55.641 16:08:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzY2YTdmYTcyMzBlYmY3NDM1MGZlZmM5OTNmODI3MjQxNmE0MjJmMzY0YjY0YWM5bmI3Ng==: 00:26:55.641 16:08:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YmNlMTdiZjRmNTU1YzM4ZTY3OTFlNTExYzc4MjE2MWI0ZDI2ZmUyODc4OTcwMjBkWICT/g==: 
]] 00:26:55.641 16:08:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YmNlMTdiZjRmNTU1YzM4ZTY3OTFlNTExYzc4MjE2MWI0ZDI2ZmUyODc4OTcwMjBkWICT/g==: 00:26:55.641 16:08:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:26:55.641 16:08:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:26:55.641 16:08:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:26:55.641 16:08:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:26:55.641 16:08:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:26:55.641 16:08:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:55.641 16:08:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:26:55.641 16:08:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:26:55.641 16:08:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:55.641 16:08:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:55.641 16:08:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:26:55.641 16:08:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:55.641 16:08:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:55.641 16:08:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:55.641 16:08:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:55.641 16:08:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:55.641 16:08:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:55.641 16:08:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:55.641 16:08:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:55.641 16:08:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:55.641 16:08:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:55.641 16:08:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:55.641 16:08:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:55.641 16:08:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:55.641 16:08:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:55.641 16:08:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:55.641 16:08:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:55.641 16:08:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:55.899 nvme0n1 00:26:55.899 16:08:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:55.899 16:08:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:55.899 16:08:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:55.899 16:08:24 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:55.899 16:08:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:55.899 16:08:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:55.899 16:08:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:55.899 16:08:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:55.899 16:08:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:55.899 16:08:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:55.899 16:08:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:55.899 16:08:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:26:55.899 16:08:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:55.899 16:08:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:55.899 16:08:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:26:55.899 16:08:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:55.899 16:08:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:55.899 16:08:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:55.899 16:08:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:55.899 16:08:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MWVhOTM0NTE0MDFmNTNjMWZiNmVlY2IwMDdkNmU2M2QOY6Ga: 00:26:55.899 16:08:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OGU3NzkyYzU4ZmE1MGI4MjRkYTM3MDAzMzU4OTQ2MzgzZjA1MzU4MjQwMTdmZDRlYmMyNjc5MzUzOTI5MGYzMTobRkk=: 00:26:55.899 16:08:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:55.899 16:08:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:55.899 16:08:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MWVhOTM0NTE0MDFmNTNjMWZiNmVlY2IwMDdkNmU2M2QOY6Ga: 00:26:55.899 16:08:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OGU3NzkyYzU4ZmE1MGI4MjRkYTM3MDAzMzU4OTQ2MzgzZjA1MzU4MjQwMTdmZDRlYmMyNjc5MzUzOTI5MGYzMTobRkk=: ]] 00:26:55.899 16:08:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OGU3NzkyYzU4ZmE1MGI4MjRkYTM3MDAzMzU4OTQ2MzgzZjA1MzU4MjQwMTdmZDRlYmMyNjc5MzUzOTI5MGYzMTobRkk=: 00:26:55.899 16:08:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:26:55.899 16:08:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:55.899 16:08:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:55.899 16:08:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:55.899 16:08:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:55.899 16:08:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:55.899 16:08:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:26:55.899 16:08:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:55.899 16:08:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:55.899 16:08:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:55.899 
16:08:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:55.899 16:08:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:55.899 16:08:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:55.899 16:08:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:55.899 16:08:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:55.899 16:08:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:55.899 16:08:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:55.899 16:08:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:55.899 16:08:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:55.899 16:08:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:55.899 16:08:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:55.899 16:08:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:55.899 16:08:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:55.899 16:08:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:55.899 nvme0n1 00:26:55.899 16:08:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:55.899 16:08:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:55.899 16:08:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:55.899 16:08:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:55.899 16:08:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:56.157 16:08:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:56.157 16:08:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:56.157 16:08:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:56.157 16:08:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:56.157 16:08:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:56.157 16:08:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:56.157 16:08:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:56.157 16:08:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:26:56.157 16:08:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:56.157 16:08:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:56.157 16:08:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:56.157 16:08:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:56.158 16:08:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzY2YTdmYTcyMzBlYmY3NDM1MGZlZmM5OTNmODI3MjQxNmE0MjJmMzY0YjY0YWM5bmI3Ng==: 00:26:56.158 16:08:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YmNlMTdiZjRmNTU1YzM4ZTY3OTFlNTExYzc4MjE2MWI0ZDI2ZmUyODc4OTcwMjBkWICT/g==: 00:26:56.158 16:08:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:56.158 16:08:24 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:56.158 16:08:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzY2YTdmYTcyMzBlYmY3NDM1MGZlZmM5OTNmODI3MjQxNmE0MjJmMzY0YjY0YWM5bmI3Ng==: 00:26:56.158 16:08:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YmNlMTdiZjRmNTU1YzM4ZTY3OTFlNTExYzc4MjE2MWI0ZDI2ZmUyODc4OTcwMjBkWICT/g==: ]] 00:26:56.158 16:08:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YmNlMTdiZjRmNTU1YzM4ZTY3OTFlNTExYzc4MjE2MWI0ZDI2ZmUyODc4OTcwMjBkWICT/g==: 00:26:56.158 16:08:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:26:56.158 16:08:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:56.158 16:08:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:56.158 16:08:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:56.158 16:08:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:56.158 16:08:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:56.158 16:08:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:26:56.158 16:08:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:56.158 16:08:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:56.158 16:08:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:56.158 16:08:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:56.158 16:08:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:56.158 16:08:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:56.158 16:08:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:56.158 16:08:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:56.158 16:08:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:56.158 16:08:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:56.158 16:08:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:56.158 16:08:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:56.158 16:08:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:56.158 16:08:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:56.158 16:08:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:56.158 16:08:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:56.158 16:08:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:56.158 nvme0n1 00:26:56.158 16:08:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:56.158 16:08:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:56.158 16:08:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:56.158 16:08:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:56.158 16:08:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
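[annotation] From here the test repeats one cycle per key index and digest/dhgroup combination: program the kernel target's DH-HMAC-CHAP parameters for the allowed host, restrict the SPDK initiator to the matching digest and group, attach through the keyring entries, confirm the controller comes up, and detach. Roughly, for the keys[2]/ckeys[2] round that follows (secret strings from this run; the configfs host attribute names are assumed, and rpc.py stands in for rpc_cmd):

H=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
key='DHHC-1:01:MTA4MTgyMmZmYWRmYTc5ZjYyOWI2NjQyOTU0ZGQzZDG/LO1A:'    # keys[2]
ckey='DHHC-1:01:NDNkZmQ3NGE5ZjVjZDIwZGRjODZjZWQ4NjUwYTI3NDBSxEQU:'  # ckeys[2]
echo 'hmac(sha256)' > "$H/dhchap_hash"     # digest for this round   (attribute name assumed)
echo ffdhe2048 > "$H/dhchap_dhgroup"       # DH group for this round (attribute name assumed)
echo "$key"  > "$H/dhchap_key"             # host secret             (attribute name assumed)
echo "$ckey" > "$H/dhchap_ctrl_key"        # controller secret       (attribute name assumed)
./scripts/rpc.py bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
  -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
  --dhchap-key key2 --dhchap-ctrlr-key ckey2
./scripts/rpc.py bdev_nvme_get_controllers    # expect nvme0 listed on success
./scripts/rpc.py bdev_nvme_detach_controller nvme0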
00:26:56.158 16:08:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:56.158 16:08:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:56.158 16:08:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:56.158 16:08:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:56.158 16:08:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:56.416 16:08:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:56.416 16:08:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:56.416 16:08:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:26:56.416 16:08:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:56.416 16:08:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:56.416 16:08:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:56.416 16:08:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:56.416 16:08:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MTA4MTgyMmZmYWRmYTc5ZjYyOWI2NjQyOTU0ZGQzZDG/LO1A: 00:26:56.416 16:08:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NDNkZmQ3NGE5ZjVjZDIwZGRjODZjZWQ4NjUwYTI3NDBSxEQU: 00:26:56.416 16:08:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:56.416 16:08:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:56.416 16:08:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MTA4MTgyMmZmYWRmYTc5ZjYyOWI2NjQyOTU0ZGQzZDG/LO1A: 00:26:56.416 16:08:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NDNkZmQ3NGE5ZjVjZDIwZGRjODZjZWQ4NjUwYTI3NDBSxEQU: ]] 00:26:56.416 16:08:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NDNkZmQ3NGE5ZjVjZDIwZGRjODZjZWQ4NjUwYTI3NDBSxEQU: 00:26:56.416 16:08:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:26:56.416 16:08:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:56.417 16:08:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:56.417 16:08:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:56.417 16:08:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:56.417 16:08:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:56.417 16:08:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:26:56.417 16:08:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:56.417 16:08:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:56.417 16:08:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:56.417 16:08:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:56.417 16:08:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:56.417 16:08:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:56.417 16:08:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:56.417 16:08:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:56.417 16:08:25 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:56.417 16:08:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:56.417 16:08:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:56.417 16:08:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:56.417 16:08:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:56.417 16:08:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:56.417 16:08:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:56.417 16:08:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:56.417 16:08:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:56.417 nvme0n1 00:26:56.417 16:08:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:56.417 16:08:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:56.417 16:08:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:56.417 16:08:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:56.417 16:08:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:56.417 16:08:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:56.417 16:08:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:56.417 16:08:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:56.417 16:08:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:56.417 16:08:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:56.417 16:08:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:56.417 16:08:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:56.417 16:08:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:26:56.417 16:08:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:56.417 16:08:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:56.417 16:08:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:56.417 16:08:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:56.417 16:08:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZTdhMjdjNDJmNzgxZTI3NzE0MzNjODllZjQ5YzY5MmZhMjQ3MGViOTAwYmQzN2Rmae8/mw==: 00:26:56.417 16:08:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NzNiZGMwNDU3YzUyNzVkNzkzNzQ4YjY4YTkxMTljMzlKbnty: 00:26:56.417 16:08:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:56.417 16:08:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:56.417 16:08:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZTdhMjdjNDJmNzgxZTI3NzE0MzNjODllZjQ5YzY5MmZhMjQ3MGViOTAwYmQzN2Rmae8/mw==: 00:26:56.417 16:08:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NzNiZGMwNDU3YzUyNzVkNzkzNzQ4YjY4YTkxMTljMzlKbnty: ]] 00:26:56.417 16:08:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NzNiZGMwNDU3YzUyNzVkNzkzNzQ4YjY4YTkxMTljMzlKbnty: 00:26:56.417 16:08:25 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:26:56.417 16:08:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:56.417 16:08:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:56.417 16:08:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:56.417 16:08:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:56.417 16:08:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:56.417 16:08:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:26:56.417 16:08:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:56.417 16:08:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:56.417 16:08:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:56.676 16:08:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:56.676 16:08:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:56.676 16:08:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:56.676 16:08:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:56.676 16:08:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:56.676 16:08:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:56.676 16:08:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:56.676 16:08:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:56.676 16:08:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:56.676 16:08:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:56.676 16:08:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:56.676 16:08:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:56.676 16:08:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:56.676 16:08:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:56.676 nvme0n1 00:26:56.676 16:08:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:56.676 16:08:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:56.676 16:08:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:56.676 16:08:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:56.676 16:08:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:56.676 16:08:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:56.676 16:08:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:56.676 16:08:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:56.676 16:08:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:56.676 16:08:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:56.676 16:08:25 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:56.676 16:08:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:56.676 16:08:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:26:56.676 16:08:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:56.676 16:08:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:56.676 16:08:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:56.676 16:08:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:56.676 16:08:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MDM1ZjhlMTI4Y2MyMDkzODk1OTRkYTMwMGY3M2Q2NmM0OWQ4Y2M5NTdmNGZkZjYzYTVhMjVmNmEwZGYzZGI4YSJezX8=: 00:26:56.676 16:08:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:56.676 16:08:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:56.676 16:08:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:56.676 16:08:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MDM1ZjhlMTI4Y2MyMDkzODk1OTRkYTMwMGY3M2Q2NmM0OWQ4Y2M5NTdmNGZkZjYzYTVhMjVmNmEwZGYzZGI4YSJezX8=: 00:26:56.676 16:08:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:56.676 16:08:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:26:56.676 16:08:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:56.676 16:08:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:56.676 16:08:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:56.676 16:08:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:56.676 16:08:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:56.676 16:08:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:26:56.676 16:08:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:56.676 16:08:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:56.676 16:08:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:56.676 16:08:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:56.676 16:08:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:56.676 16:08:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:56.676 16:08:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:56.676 16:08:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:56.676 16:08:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:56.676 16:08:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:56.676 16:08:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:56.676 16:08:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:56.676 16:08:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:56.676 16:08:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:56.676 16:08:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:56.676 16:08:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:56.676 16:08:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:56.934 nvme0n1 00:26:56.934 16:08:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:56.934 16:08:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:56.934 16:08:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:56.934 16:08:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:56.934 16:08:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:56.934 16:08:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:56.934 16:08:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:56.934 16:08:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:56.934 16:08:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:56.934 16:08:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:56.934 16:08:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:56.934 16:08:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:56.934 16:08:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:56.934 16:08:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:26:56.934 16:08:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:56.934 16:08:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:56.934 16:08:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:56.934 16:08:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:56.934 16:08:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MWVhOTM0NTE0MDFmNTNjMWZiNmVlY2IwMDdkNmU2M2QOY6Ga: 00:26:56.934 16:08:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OGU3NzkyYzU4ZmE1MGI4MjRkYTM3MDAzMzU4OTQ2MzgzZjA1MzU4MjQwMTdmZDRlYmMyNjc5MzUzOTI5MGYzMTobRkk=: 00:26:56.934 16:08:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:56.934 16:08:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:56.934 16:08:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MWVhOTM0NTE0MDFmNTNjMWZiNmVlY2IwMDdkNmU2M2QOY6Ga: 00:26:56.934 16:08:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OGU3NzkyYzU4ZmE1MGI4MjRkYTM3MDAzMzU4OTQ2MzgzZjA1MzU4MjQwMTdmZDRlYmMyNjc5MzUzOTI5MGYzMTobRkk=: ]] 00:26:56.934 16:08:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OGU3NzkyYzU4ZmE1MGI4MjRkYTM3MDAzMzU4OTQ2MzgzZjA1MzU4MjQwMTdmZDRlYmMyNjc5MzUzOTI5MGYzMTobRkk=: 00:26:56.934 16:08:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:26:56.934 16:08:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:56.934 16:08:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:56.934 16:08:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:56.934 16:08:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:56.934 16:08:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:26:56.934 16:08:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:26:56.934 16:08:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:56.934 16:08:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:56.934 16:08:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:56.934 16:08:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:56.934 16:08:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:56.934 16:08:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:56.934 16:08:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:56.934 16:08:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:56.934 16:08:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:56.934 16:08:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:56.934 16:08:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:56.934 16:08:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:56.934 16:08:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:56.934 16:08:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:56.934 16:08:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:56.934 16:08:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:56.934 16:08:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:57.193 nvme0n1 00:26:57.193 16:08:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:57.193 16:08:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:57.193 16:08:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:57.193 16:08:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:57.193 16:08:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:57.193 16:08:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:57.193 16:08:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:57.193 16:08:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:57.193 16:08:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:57.193 16:08:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:57.193 16:08:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:57.193 16:08:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:57.193 16:08:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:26:57.193 16:08:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:57.193 16:08:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:57.193 16:08:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:57.193 16:08:26 nvmf_tcp.nvmf_auth_host 
-- host/auth.sh@44 -- # keyid=1 00:26:57.193 16:08:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzY2YTdmYTcyMzBlYmY3NDM1MGZlZmM5OTNmODI3MjQxNmE0MjJmMzY0YjY0YWM5bmI3Ng==: 00:26:57.193 16:08:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YmNlMTdiZjRmNTU1YzM4ZTY3OTFlNTExYzc4MjE2MWI0ZDI2ZmUyODc4OTcwMjBkWICT/g==: 00:26:57.193 16:08:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:57.193 16:08:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:57.193 16:08:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzY2YTdmYTcyMzBlYmY3NDM1MGZlZmM5OTNmODI3MjQxNmE0MjJmMzY0YjY0YWM5bmI3Ng==: 00:26:57.193 16:08:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YmNlMTdiZjRmNTU1YzM4ZTY3OTFlNTExYzc4MjE2MWI0ZDI2ZmUyODc4OTcwMjBkWICT/g==: ]] 00:26:57.193 16:08:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YmNlMTdiZjRmNTU1YzM4ZTY3OTFlNTExYzc4MjE2MWI0ZDI2ZmUyODc4OTcwMjBkWICT/g==: 00:26:57.193 16:08:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:26:57.193 16:08:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:57.193 16:08:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:57.193 16:08:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:57.193 16:08:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:57.193 16:08:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:57.193 16:08:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:26:57.193 16:08:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:57.193 16:08:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:57.193 16:08:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:57.193 16:08:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:57.193 16:08:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:57.193 16:08:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:57.193 16:08:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:57.193 16:08:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:57.193 16:08:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:57.193 16:08:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:57.193 16:08:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:57.193 16:08:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:57.193 16:08:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:57.193 16:08:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:57.193 16:08:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:57.193 16:08:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:57.193 16:08:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:57.452 nvme0n1 00:26:57.452 
16:08:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:57.452 16:08:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:57.452 16:08:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:57.452 16:08:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:57.452 16:08:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:57.452 16:08:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:57.452 16:08:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:57.452 16:08:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:57.452 16:08:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:57.452 16:08:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:57.452 16:08:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:57.452 16:08:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:57.452 16:08:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:26:57.452 16:08:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:57.452 16:08:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:57.452 16:08:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:57.453 16:08:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:57.453 16:08:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MTA4MTgyMmZmYWRmYTc5ZjYyOWI2NjQyOTU0ZGQzZDG/LO1A: 00:26:57.453 16:08:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NDNkZmQ3NGE5ZjVjZDIwZGRjODZjZWQ4NjUwYTI3NDBSxEQU: 00:26:57.453 16:08:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:57.453 16:08:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:57.453 16:08:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MTA4MTgyMmZmYWRmYTc5ZjYyOWI2NjQyOTU0ZGQzZDG/LO1A: 00:26:57.453 16:08:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NDNkZmQ3NGE5ZjVjZDIwZGRjODZjZWQ4NjUwYTI3NDBSxEQU: ]] 00:26:57.453 16:08:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NDNkZmQ3NGE5ZjVjZDIwZGRjODZjZWQ4NjUwYTI3NDBSxEQU: 00:26:57.453 16:08:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:26:57.453 16:08:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:57.453 16:08:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:57.453 16:08:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:57.453 16:08:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:57.453 16:08:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:57.453 16:08:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:26:57.453 16:08:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:57.453 16:08:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:57.453 16:08:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:57.453 16:08:26 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@61 -- # get_main_ns_ip 00:26:57.453 16:08:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:57.453 16:08:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:57.453 16:08:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:57.453 16:08:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:57.453 16:08:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:57.453 16:08:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:57.453 16:08:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:57.453 16:08:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:57.453 16:08:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:57.453 16:08:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:57.453 16:08:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:57.453 16:08:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:57.453 16:08:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:57.712 nvme0n1 00:26:57.712 16:08:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:57.712 16:08:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:57.712 16:08:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:57.712 16:08:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:57.712 16:08:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:57.712 16:08:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:57.712 16:08:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:57.712 16:08:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:57.712 16:08:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:57.712 16:08:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:57.712 16:08:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:57.712 16:08:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:57.712 16:08:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:26:57.712 16:08:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:57.712 16:08:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:57.712 16:08:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:57.712 16:08:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:57.712 16:08:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZTdhMjdjNDJmNzgxZTI3NzE0MzNjODllZjQ5YzY5MmZhMjQ3MGViOTAwYmQzN2Rmae8/mw==: 00:26:57.712 16:08:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NzNiZGMwNDU3YzUyNzVkNzkzNzQ4YjY4YTkxMTljMzlKbnty: 00:26:57.712 16:08:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:57.712 16:08:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 
00:26:57.712 16:08:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZTdhMjdjNDJmNzgxZTI3NzE0MzNjODllZjQ5YzY5MmZhMjQ3MGViOTAwYmQzN2Rmae8/mw==: 00:26:57.712 16:08:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NzNiZGMwNDU3YzUyNzVkNzkzNzQ4YjY4YTkxMTljMzlKbnty: ]] 00:26:57.712 16:08:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NzNiZGMwNDU3YzUyNzVkNzkzNzQ4YjY4YTkxMTljMzlKbnty: 00:26:57.712 16:08:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:26:57.712 16:08:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:57.712 16:08:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:57.712 16:08:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:57.712 16:08:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:57.712 16:08:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:57.712 16:08:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:26:57.712 16:08:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:57.712 16:08:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:57.712 16:08:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:57.712 16:08:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:57.712 16:08:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:57.712 16:08:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:57.712 16:08:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:57.712 16:08:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:57.712 16:08:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:57.712 16:08:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:57.712 16:08:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:57.712 16:08:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:57.712 16:08:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:57.712 16:08:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:57.712 16:08:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:57.712 16:08:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:57.712 16:08:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:57.972 nvme0n1 00:26:57.972 16:08:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:57.972 16:08:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:57.972 16:08:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:57.972 16:08:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:57.972 16:08:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:57.972 16:08:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:57.972 
16:08:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:57.972 16:08:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:57.972 16:08:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:57.972 16:08:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:57.972 16:08:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:57.972 16:08:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:57.972 16:08:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:26:57.972 16:08:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:57.972 16:08:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:57.972 16:08:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:57.972 16:08:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:57.972 16:08:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MDM1ZjhlMTI4Y2MyMDkzODk1OTRkYTMwMGY3M2Q2NmM0OWQ4Y2M5NTdmNGZkZjYzYTVhMjVmNmEwZGYzZGI4YSJezX8=: 00:26:57.972 16:08:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:57.972 16:08:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:57.972 16:08:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:57.972 16:08:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MDM1ZjhlMTI4Y2MyMDkzODk1OTRkYTMwMGY3M2Q2NmM0OWQ4Y2M5NTdmNGZkZjYzYTVhMjVmNmEwZGYzZGI4YSJezX8=: 00:26:57.972 16:08:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:57.972 16:08:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:26:57.972 16:08:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:57.972 16:08:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:57.972 16:08:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:57.972 16:08:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:57.972 16:08:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:57.972 16:08:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:26:57.972 16:08:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:57.972 16:08:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:57.972 16:08:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:57.972 16:08:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:57.972 16:08:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:57.972 16:08:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:57.972 16:08:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:57.972 16:08:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:57.972 16:08:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:57.972 16:08:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:57.972 16:08:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:57.972 16:08:26 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:57.972 16:08:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:57.973 16:08:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:57.973 16:08:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:57.973 16:08:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:57.973 16:08:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:58.232 nvme0n1 00:26:58.232 16:08:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:58.232 16:08:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:58.232 16:08:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:58.232 16:08:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:58.232 16:08:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:58.232 16:08:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:58.232 16:08:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:58.232 16:08:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:58.232 16:08:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:58.232 16:08:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:58.232 16:08:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:58.232 16:08:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:58.232 16:08:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:58.232 16:08:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:26:58.232 16:08:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:58.232 16:08:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:58.232 16:08:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:58.232 16:08:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:58.232 16:08:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MWVhOTM0NTE0MDFmNTNjMWZiNmVlY2IwMDdkNmU2M2QOY6Ga: 00:26:58.232 16:08:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OGU3NzkyYzU4ZmE1MGI4MjRkYTM3MDAzMzU4OTQ2MzgzZjA1MzU4MjQwMTdmZDRlYmMyNjc5MzUzOTI5MGYzMTobRkk=: 00:26:58.232 16:08:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:58.232 16:08:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:58.232 16:08:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MWVhOTM0NTE0MDFmNTNjMWZiNmVlY2IwMDdkNmU2M2QOY6Ga: 00:26:58.232 16:08:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OGU3NzkyYzU4ZmE1MGI4MjRkYTM3MDAzMzU4OTQ2MzgzZjA1MzU4MjQwMTdmZDRlYmMyNjc5MzUzOTI5MGYzMTobRkk=: ]] 00:26:58.232 16:08:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OGU3NzkyYzU4ZmE1MGI4MjRkYTM3MDAzMzU4OTQ2MzgzZjA1MzU4MjQwMTdmZDRlYmMyNjc5MzUzOTI5MGYzMTobRkk=: 00:26:58.232 16:08:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:26:58.232 16:08:26 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:58.232 16:08:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:58.232 16:08:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:58.232 16:08:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:58.232 16:08:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:58.232 16:08:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:26:58.232 16:08:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:58.232 16:08:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:58.232 16:08:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:58.232 16:08:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:58.232 16:08:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:58.232 16:08:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:58.232 16:08:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:58.232 16:08:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:58.232 16:08:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:58.232 16:08:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:58.232 16:08:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:58.232 16:08:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:58.232 16:08:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:58.232 16:08:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:58.232 16:08:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:58.232 16:08:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:58.232 16:08:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:58.515 nvme0n1 00:26:58.515 16:08:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:58.515 16:08:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:58.515 16:08:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:58.515 16:08:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:58.515 16:08:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:58.515 16:08:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:58.515 16:08:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:58.515 16:08:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:58.515 16:08:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:58.515 16:08:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:58.515 16:08:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:58.515 16:08:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:26:58.515 16:08:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:26:58.515 16:08:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:58.515 16:08:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:58.515 16:08:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:58.515 16:08:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:58.515 16:08:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzY2YTdmYTcyMzBlYmY3NDM1MGZlZmM5OTNmODI3MjQxNmE0MjJmMzY0YjY0YWM5bmI3Ng==: 00:26:58.515 16:08:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YmNlMTdiZjRmNTU1YzM4ZTY3OTFlNTExYzc4MjE2MWI0ZDI2ZmUyODc4OTcwMjBkWICT/g==: 00:26:58.515 16:08:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:58.515 16:08:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:58.515 16:08:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzY2YTdmYTcyMzBlYmY3NDM1MGZlZmM5OTNmODI3MjQxNmE0MjJmMzY0YjY0YWM5bmI3Ng==: 00:26:58.515 16:08:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YmNlMTdiZjRmNTU1YzM4ZTY3OTFlNTExYzc4MjE2MWI0ZDI2ZmUyODc4OTcwMjBkWICT/g==: ]] 00:26:58.515 16:08:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YmNlMTdiZjRmNTU1YzM4ZTY3OTFlNTExYzc4MjE2MWI0ZDI2ZmUyODc4OTcwMjBkWICT/g==: 00:26:58.515 16:08:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:26:58.515 16:08:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:58.515 16:08:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:58.515 16:08:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:58.515 16:08:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:58.515 16:08:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:58.515 16:08:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:26:58.515 16:08:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:58.515 16:08:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:58.515 16:08:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:58.515 16:08:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:58.515 16:08:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:58.515 16:08:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:58.515 16:08:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:58.516 16:08:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:58.516 16:08:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:58.516 16:08:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:58.516 16:08:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:58.516 16:08:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:58.516 16:08:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:58.516 16:08:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:58.516 16:08:27 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:58.516 16:08:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:58.516 16:08:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:58.774 nvme0n1 00:26:58.774 16:08:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:58.774 16:08:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:58.774 16:08:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:58.774 16:08:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:58.774 16:08:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:58.774 16:08:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:58.774 16:08:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:58.775 16:08:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:58.775 16:08:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:58.775 16:08:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:58.775 16:08:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:58.775 16:08:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:58.775 16:08:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:26:58.775 16:08:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:58.775 16:08:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:58.775 16:08:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:58.775 16:08:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:58.775 16:08:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MTA4MTgyMmZmYWRmYTc5ZjYyOWI2NjQyOTU0ZGQzZDG/LO1A: 00:26:58.775 16:08:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NDNkZmQ3NGE5ZjVjZDIwZGRjODZjZWQ4NjUwYTI3NDBSxEQU: 00:26:58.775 16:08:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:58.775 16:08:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:58.775 16:08:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MTA4MTgyMmZmYWRmYTc5ZjYyOWI2NjQyOTU0ZGQzZDG/LO1A: 00:26:58.775 16:08:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NDNkZmQ3NGE5ZjVjZDIwZGRjODZjZWQ4NjUwYTI3NDBSxEQU: ]] 00:26:58.775 16:08:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NDNkZmQ3NGE5ZjVjZDIwZGRjODZjZWQ4NjUwYTI3NDBSxEQU: 00:26:58.775 16:08:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:26:58.775 16:08:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:58.775 16:08:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:58.775 16:08:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:58.775 16:08:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:58.775 16:08:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:58.775 16:08:27 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:26:58.775 16:08:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:58.775 16:08:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:58.775 16:08:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:58.775 16:08:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:58.775 16:08:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:58.775 16:08:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:58.775 16:08:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:58.775 16:08:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:58.775 16:08:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:58.775 16:08:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:58.775 16:08:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:58.775 16:08:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:58.775 16:08:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:58.775 16:08:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:58.775 16:08:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:58.775 16:08:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:58.775 16:08:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:59.034 nvme0n1 00:26:59.034 16:08:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:59.034 16:08:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:59.034 16:08:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:59.034 16:08:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:59.034 16:08:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:59.034 16:08:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:59.034 16:08:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:59.034 16:08:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:59.034 16:08:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:59.034 16:08:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:59.034 16:08:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:59.034 16:08:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:59.034 16:08:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:26:59.034 16:08:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:59.034 16:08:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:59.034 16:08:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:59.034 16:08:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 
00:26:59.034 16:08:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZTdhMjdjNDJmNzgxZTI3NzE0MzNjODllZjQ5YzY5MmZhMjQ3MGViOTAwYmQzN2Rmae8/mw==: 00:26:59.034 16:08:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NzNiZGMwNDU3YzUyNzVkNzkzNzQ4YjY4YTkxMTljMzlKbnty: 00:26:59.034 16:08:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:59.034 16:08:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:59.034 16:08:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZTdhMjdjNDJmNzgxZTI3NzE0MzNjODllZjQ5YzY5MmZhMjQ3MGViOTAwYmQzN2Rmae8/mw==: 00:26:59.034 16:08:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NzNiZGMwNDU3YzUyNzVkNzkzNzQ4YjY4YTkxMTljMzlKbnty: ]] 00:26:59.034 16:08:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NzNiZGMwNDU3YzUyNzVkNzkzNzQ4YjY4YTkxMTljMzlKbnty: 00:26:59.034 16:08:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:26:59.034 16:08:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:59.034 16:08:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:59.034 16:08:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:59.034 16:08:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:59.034 16:08:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:59.034 16:08:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:26:59.034 16:08:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:59.034 16:08:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:59.034 16:08:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:59.034 16:08:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:59.034 16:08:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:59.034 16:08:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:59.034 16:08:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:59.034 16:08:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:59.034 16:08:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:59.034 16:08:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:59.034 16:08:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:59.034 16:08:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:59.034 16:08:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:59.034 16:08:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:59.034 16:08:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:59.034 16:08:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:59.034 16:08:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:59.292 nvme0n1 00:26:59.292 16:08:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:59.292 16:08:28 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:59.292 16:08:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:59.292 16:08:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:59.292 16:08:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:59.292 16:08:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:59.550 16:08:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:59.550 16:08:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:59.550 16:08:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:59.550 16:08:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:59.550 16:08:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:59.550 16:08:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:59.550 16:08:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:26:59.550 16:08:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:59.550 16:08:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:59.550 16:08:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:59.550 16:08:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:59.550 16:08:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MDM1ZjhlMTI4Y2MyMDkzODk1OTRkYTMwMGY3M2Q2NmM0OWQ4Y2M5NTdmNGZkZjYzYTVhMjVmNmEwZGYzZGI4YSJezX8=: 00:26:59.550 16:08:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:59.550 16:08:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:59.550 16:08:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:59.550 16:08:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MDM1ZjhlMTI4Y2MyMDkzODk1OTRkYTMwMGY3M2Q2NmM0OWQ4Y2M5NTdmNGZkZjYzYTVhMjVmNmEwZGYzZGI4YSJezX8=: 00:26:59.550 16:08:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:59.550 16:08:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:26:59.550 16:08:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:59.550 16:08:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:59.550 16:08:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:59.550 16:08:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:59.550 16:08:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:59.550 16:08:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:26:59.550 16:08:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:59.550 16:08:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:59.550 16:08:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:59.550 16:08:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:59.550 16:08:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:59.550 16:08:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:59.550 16:08:28 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # local -A ip_candidates 00:26:59.550 16:08:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:59.550 16:08:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:59.550 16:08:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:59.550 16:08:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:59.550 16:08:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:59.550 16:08:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:59.550 16:08:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:59.550 16:08:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:59.550 16:08:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:59.550 16:08:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:59.808 nvme0n1 00:26:59.808 16:08:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:59.808 16:08:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:59.808 16:08:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:59.808 16:08:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:59.808 16:08:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:59.808 16:08:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:59.808 16:08:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:59.808 16:08:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:59.808 16:08:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:59.808 16:08:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:59.808 16:08:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:59.808 16:08:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:59.808 16:08:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:59.808 16:08:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:26:59.808 16:08:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:59.808 16:08:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:59.808 16:08:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:59.808 16:08:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:59.808 16:08:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MWVhOTM0NTE0MDFmNTNjMWZiNmVlY2IwMDdkNmU2M2QOY6Ga: 00:26:59.808 16:08:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OGU3NzkyYzU4ZmE1MGI4MjRkYTM3MDAzMzU4OTQ2MzgzZjA1MzU4MjQwMTdmZDRlYmMyNjc5MzUzOTI5MGYzMTobRkk=: 00:26:59.808 16:08:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:59.808 16:08:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:59.808 16:08:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MWVhOTM0NTE0MDFmNTNjMWZiNmVlY2IwMDdkNmU2M2QOY6Ga: 00:26:59.808 16:08:28 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OGU3NzkyYzU4ZmE1MGI4MjRkYTM3MDAzMzU4OTQ2MzgzZjA1MzU4MjQwMTdmZDRlYmMyNjc5MzUzOTI5MGYzMTobRkk=: ]] 00:26:59.808 16:08:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OGU3NzkyYzU4ZmE1MGI4MjRkYTM3MDAzMzU4OTQ2MzgzZjA1MzU4MjQwMTdmZDRlYmMyNjc5MzUzOTI5MGYzMTobRkk=: 00:26:59.808 16:08:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:26:59.808 16:08:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:59.808 16:08:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:59.808 16:08:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:59.808 16:08:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:59.808 16:08:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:59.808 16:08:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:26:59.808 16:08:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:59.808 16:08:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:59.808 16:08:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:59.808 16:08:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:59.808 16:08:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:59.808 16:08:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:59.809 16:08:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:59.809 16:08:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:59.809 16:08:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:59.809 16:08:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:59.809 16:08:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:59.809 16:08:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:59.809 16:08:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:59.809 16:08:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:59.809 16:08:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:59.809 16:08:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:59.809 16:08:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:00.067 nvme0n1 00:27:00.067 16:08:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:00.067 16:08:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:00.067 16:08:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:00.067 16:08:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:00.067 16:08:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:00.067 16:08:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:00.067 16:08:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:00.067 
16:08:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:00.067 16:08:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:00.067 16:08:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:00.325 16:08:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:00.325 16:08:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:00.325 16:08:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:27:00.325 16:08:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:00.325 16:08:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:00.325 16:08:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:00.325 16:08:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:00.325 16:08:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzY2YTdmYTcyMzBlYmY3NDM1MGZlZmM5OTNmODI3MjQxNmE0MjJmMzY0YjY0YWM5bmI3Ng==: 00:27:00.325 16:08:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YmNlMTdiZjRmNTU1YzM4ZTY3OTFlNTExYzc4MjE2MWI0ZDI2ZmUyODc4OTcwMjBkWICT/g==: 00:27:00.325 16:08:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:00.325 16:08:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:00.325 16:08:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzY2YTdmYTcyMzBlYmY3NDM1MGZlZmM5OTNmODI3MjQxNmE0MjJmMzY0YjY0YWM5bmI3Ng==: 00:27:00.325 16:08:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YmNlMTdiZjRmNTU1YzM4ZTY3OTFlNTExYzc4MjE2MWI0ZDI2ZmUyODc4OTcwMjBkWICT/g==: ]] 00:27:00.325 16:08:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YmNlMTdiZjRmNTU1YzM4ZTY3OTFlNTExYzc4MjE2MWI0ZDI2ZmUyODc4OTcwMjBkWICT/g==: 00:27:00.325 16:08:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:27:00.325 16:08:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:00.325 16:08:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:00.325 16:08:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:00.325 16:08:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:00.325 16:08:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:00.325 16:08:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:27:00.325 16:08:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:00.325 16:08:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:00.325 16:08:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:00.325 16:08:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:00.325 16:08:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:00.325 16:08:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:00.325 16:08:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:00.325 16:08:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:00.325 16:08:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:00.325 16:08:29 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:00.325 16:08:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:00.325 16:08:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:00.325 16:08:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:00.325 16:08:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:00.325 16:08:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:00.325 16:08:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:00.325 16:08:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:00.584 nvme0n1 00:27:00.584 16:08:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:00.584 16:08:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:00.584 16:08:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:00.584 16:08:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:00.584 16:08:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:00.584 16:08:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:00.584 16:08:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:00.584 16:08:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:00.584 16:08:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:00.584 16:08:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:00.584 16:08:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:00.584 16:08:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:00.584 16:08:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:27:00.584 16:08:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:00.584 16:08:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:00.584 16:08:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:00.584 16:08:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:00.584 16:08:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MTA4MTgyMmZmYWRmYTc5ZjYyOWI2NjQyOTU0ZGQzZDG/LO1A: 00:27:00.584 16:08:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NDNkZmQ3NGE5ZjVjZDIwZGRjODZjZWQ4NjUwYTI3NDBSxEQU: 00:27:00.584 16:08:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:00.584 16:08:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:00.584 16:08:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MTA4MTgyMmZmYWRmYTc5ZjYyOWI2NjQyOTU0ZGQzZDG/LO1A: 00:27:00.584 16:08:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NDNkZmQ3NGE5ZjVjZDIwZGRjODZjZWQ4NjUwYTI3NDBSxEQU: ]] 00:27:00.584 16:08:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NDNkZmQ3NGE5ZjVjZDIwZGRjODZjZWQ4NjUwYTI3NDBSxEQU: 00:27:00.584 16:08:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:27:00.584 16:08:29 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:00.584 16:08:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:00.584 16:08:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:00.584 16:08:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:00.584 16:08:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:00.584 16:08:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:27:00.584 16:08:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:00.584 16:08:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:00.584 16:08:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:00.584 16:08:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:00.584 16:08:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:00.584 16:08:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:00.584 16:08:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:00.584 16:08:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:00.584 16:08:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:00.584 16:08:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:00.584 16:08:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:00.584 16:08:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:00.584 16:08:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:00.584 16:08:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:00.584 16:08:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:00.584 16:08:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:00.584 16:08:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:01.153 nvme0n1 00:27:01.153 16:08:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:01.153 16:08:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:01.153 16:08:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:01.153 16:08:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:01.153 16:08:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:01.153 16:08:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:01.153 16:08:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:01.153 16:08:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:01.153 16:08:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:01.153 16:08:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:01.153 16:08:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:01.153 16:08:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:01.153 
16:08:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:27:01.153 16:08:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:01.153 16:08:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:01.153 16:08:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:01.153 16:08:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:01.153 16:08:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZTdhMjdjNDJmNzgxZTI3NzE0MzNjODllZjQ5YzY5MmZhMjQ3MGViOTAwYmQzN2Rmae8/mw==: 00:27:01.153 16:08:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NzNiZGMwNDU3YzUyNzVkNzkzNzQ4YjY4YTkxMTljMzlKbnty: 00:27:01.153 16:08:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:01.153 16:08:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:01.153 16:08:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZTdhMjdjNDJmNzgxZTI3NzE0MzNjODllZjQ5YzY5MmZhMjQ3MGViOTAwYmQzN2Rmae8/mw==: 00:27:01.153 16:08:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NzNiZGMwNDU3YzUyNzVkNzkzNzQ4YjY4YTkxMTljMzlKbnty: ]] 00:27:01.153 16:08:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NzNiZGMwNDU3YzUyNzVkNzkzNzQ4YjY4YTkxMTljMzlKbnty: 00:27:01.153 16:08:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:27:01.153 16:08:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:01.153 16:08:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:01.153 16:08:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:01.153 16:08:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:01.153 16:08:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:01.153 16:08:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:27:01.153 16:08:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:01.153 16:08:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:01.153 16:08:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:01.153 16:08:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:01.153 16:08:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:01.153 16:08:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:01.153 16:08:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:01.153 16:08:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:01.153 16:08:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:01.153 16:08:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:01.153 16:08:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:01.153 16:08:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:01.153 16:08:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:01.153 16:08:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:01.153 16:08:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:01.153 16:08:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:01.153 16:08:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:01.411 nvme0n1 00:27:01.412 16:08:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:01.412 16:08:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:01.412 16:08:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:01.412 16:08:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:01.412 16:08:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:01.412 16:08:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:01.412 16:08:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:01.412 16:08:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:01.412 16:08:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:01.412 16:08:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:01.412 16:08:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:01.412 16:08:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:01.412 16:08:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:27:01.412 16:08:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:01.412 16:08:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:01.412 16:08:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:01.412 16:08:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:01.412 16:08:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MDM1ZjhlMTI4Y2MyMDkzODk1OTRkYTMwMGY3M2Q2NmM0OWQ4Y2M5NTdmNGZkZjYzYTVhMjVmNmEwZGYzZGI4YSJezX8=: 00:27:01.412 16:08:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:01.412 16:08:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:01.412 16:08:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:01.412 16:08:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MDM1ZjhlMTI4Y2MyMDkzODk1OTRkYTMwMGY3M2Q2NmM0OWQ4Y2M5NTdmNGZkZjYzYTVhMjVmNmEwZGYzZGI4YSJezX8=: 00:27:01.412 16:08:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:01.412 16:08:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:27:01.412 16:08:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:01.412 16:08:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:01.412 16:08:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:01.412 16:08:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:01.412 16:08:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:01.412 16:08:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:27:01.672 16:08:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:01.672 16:08:30 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:27:01.672 16:08:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:01.672 16:08:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:01.672 16:08:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:01.672 16:08:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:01.672 16:08:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:01.672 16:08:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:01.672 16:08:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:01.672 16:08:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:01.672 16:08:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:01.672 16:08:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:01.672 16:08:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:01.672 16:08:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:01.672 16:08:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:01.672 16:08:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:01.672 16:08:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:01.960 nvme0n1 00:27:01.960 16:08:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:01.960 16:08:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:01.960 16:08:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:01.960 16:08:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:01.960 16:08:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:01.960 16:08:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:01.960 16:08:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:01.960 16:08:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:01.960 16:08:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:01.960 16:08:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:01.960 16:08:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:01.960 16:08:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:01.960 16:08:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:01.960 16:08:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:27:01.960 16:08:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:01.960 16:08:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:01.960 16:08:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:01.960 16:08:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:01.960 16:08:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MWVhOTM0NTE0MDFmNTNjMWZiNmVlY2IwMDdkNmU2M2QOY6Ga: 00:27:01.960 16:08:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:OGU3NzkyYzU4ZmE1MGI4MjRkYTM3MDAzMzU4OTQ2MzgzZjA1MzU4MjQwMTdmZDRlYmMyNjc5MzUzOTI5MGYzMTobRkk=: 00:27:01.960 16:08:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:01.960 16:08:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:01.960 16:08:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MWVhOTM0NTE0MDFmNTNjMWZiNmVlY2IwMDdkNmU2M2QOY6Ga: 00:27:01.960 16:08:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OGU3NzkyYzU4ZmE1MGI4MjRkYTM3MDAzMzU4OTQ2MzgzZjA1MzU4MjQwMTdmZDRlYmMyNjc5MzUzOTI5MGYzMTobRkk=: ]] 00:27:01.960 16:08:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OGU3NzkyYzU4ZmE1MGI4MjRkYTM3MDAzMzU4OTQ2MzgzZjA1MzU4MjQwMTdmZDRlYmMyNjc5MzUzOTI5MGYzMTobRkk=: 00:27:01.960 16:08:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:27:01.960 16:08:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:01.960 16:08:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:01.960 16:08:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:01.960 16:08:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:01.960 16:08:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:01.960 16:08:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:27:01.960 16:08:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:01.960 16:08:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:01.960 16:08:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:01.960 16:08:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:01.960 16:08:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:01.960 16:08:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:01.960 16:08:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:01.960 16:08:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:01.960 16:08:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:01.960 16:08:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:01.960 16:08:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:01.960 16:08:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:01.960 16:08:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:01.960 16:08:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:01.960 16:08:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:01.960 16:08:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:01.960 16:08:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:02.536 nvme0n1 00:27:02.536 16:08:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:02.536 16:08:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:02.536 16:08:31 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:02.536 16:08:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:02.536 16:08:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:02.536 16:08:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:02.536 16:08:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:02.536 16:08:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:02.536 16:08:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:02.536 16:08:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:02.536 16:08:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:02.536 16:08:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:02.536 16:08:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:27:02.536 16:08:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:02.536 16:08:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:02.536 16:08:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:02.536 16:08:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:02.536 16:08:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzY2YTdmYTcyMzBlYmY3NDM1MGZlZmM5OTNmODI3MjQxNmE0MjJmMzY0YjY0YWM5bmI3Ng==: 00:27:02.536 16:08:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YmNlMTdiZjRmNTU1YzM4ZTY3OTFlNTExYzc4MjE2MWI0ZDI2ZmUyODc4OTcwMjBkWICT/g==: 00:27:02.536 16:08:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:02.536 16:08:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:02.536 16:08:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzY2YTdmYTcyMzBlYmY3NDM1MGZlZmM5OTNmODI3MjQxNmE0MjJmMzY0YjY0YWM5bmI3Ng==: 00:27:02.536 16:08:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YmNlMTdiZjRmNTU1YzM4ZTY3OTFlNTExYzc4MjE2MWI0ZDI2ZmUyODc4OTcwMjBkWICT/g==: ]] 00:27:02.536 16:08:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YmNlMTdiZjRmNTU1YzM4ZTY3OTFlNTExYzc4MjE2MWI0ZDI2ZmUyODc4OTcwMjBkWICT/g==: 00:27:02.536 16:08:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:27:02.536 16:08:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:02.536 16:08:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:02.536 16:08:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:02.536 16:08:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:02.536 16:08:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:02.536 16:08:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:27:02.536 16:08:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:02.536 16:08:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:02.536 16:08:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:02.536 16:08:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:02.536 16:08:31 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@741 -- # local ip 00:27:02.536 16:08:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:02.536 16:08:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:02.536 16:08:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:02.536 16:08:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:02.536 16:08:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:02.536 16:08:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:02.536 16:08:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:02.536 16:08:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:02.536 16:08:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:02.536 16:08:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:02.536 16:08:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:02.536 16:08:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:03.102 nvme0n1 00:27:03.102 16:08:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:03.102 16:08:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:03.102 16:08:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:03.102 16:08:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:03.102 16:08:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:03.360 16:08:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:03.360 16:08:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:03.360 16:08:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:03.360 16:08:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:03.360 16:08:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:03.360 16:08:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:03.360 16:08:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:03.360 16:08:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:27:03.360 16:08:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:03.360 16:08:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:03.360 16:08:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:03.360 16:08:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:03.360 16:08:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MTA4MTgyMmZmYWRmYTc5ZjYyOWI2NjQyOTU0ZGQzZDG/LO1A: 00:27:03.360 16:08:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NDNkZmQ3NGE5ZjVjZDIwZGRjODZjZWQ4NjUwYTI3NDBSxEQU: 00:27:03.360 16:08:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:03.360 16:08:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:03.360 16:08:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:01:MTA4MTgyMmZmYWRmYTc5ZjYyOWI2NjQyOTU0ZGQzZDG/LO1A: 00:27:03.360 16:08:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NDNkZmQ3NGE5ZjVjZDIwZGRjODZjZWQ4NjUwYTI3NDBSxEQU: ]] 00:27:03.360 16:08:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NDNkZmQ3NGE5ZjVjZDIwZGRjODZjZWQ4NjUwYTI3NDBSxEQU: 00:27:03.360 16:08:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:27:03.360 16:08:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:03.360 16:08:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:03.360 16:08:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:03.360 16:08:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:03.361 16:08:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:03.361 16:08:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:27:03.361 16:08:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:03.361 16:08:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:03.361 16:08:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:03.361 16:08:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:03.361 16:08:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:03.361 16:08:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:03.361 16:08:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:03.361 16:08:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:03.361 16:08:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:03.361 16:08:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:03.361 16:08:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:03.361 16:08:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:03.361 16:08:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:03.361 16:08:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:03.361 16:08:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:03.361 16:08:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:03.361 16:08:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:03.927 nvme0n1 00:27:03.927 16:08:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:03.927 16:08:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:03.927 16:08:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:03.927 16:08:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:03.927 16:08:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:03.927 16:08:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:03.927 16:08:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:03.927 
16:08:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:03.927 16:08:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:03.927 16:08:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:03.927 16:08:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:03.927 16:08:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:03.927 16:08:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:27:03.927 16:08:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:03.927 16:08:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:03.927 16:08:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:03.927 16:08:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:03.928 16:08:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZTdhMjdjNDJmNzgxZTI3NzE0MzNjODllZjQ5YzY5MmZhMjQ3MGViOTAwYmQzN2Rmae8/mw==: 00:27:03.928 16:08:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NzNiZGMwNDU3YzUyNzVkNzkzNzQ4YjY4YTkxMTljMzlKbnty: 00:27:03.928 16:08:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:03.928 16:08:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:03.928 16:08:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZTdhMjdjNDJmNzgxZTI3NzE0MzNjODllZjQ5YzY5MmZhMjQ3MGViOTAwYmQzN2Rmae8/mw==: 00:27:03.928 16:08:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NzNiZGMwNDU3YzUyNzVkNzkzNzQ4YjY4YTkxMTljMzlKbnty: ]] 00:27:03.928 16:08:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NzNiZGMwNDU3YzUyNzVkNzkzNzQ4YjY4YTkxMTljMzlKbnty: 00:27:03.928 16:08:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:27:03.928 16:08:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:03.928 16:08:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:03.928 16:08:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:03.928 16:08:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:03.928 16:08:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:03.928 16:08:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:27:03.928 16:08:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:03.928 16:08:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:03.928 16:08:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:03.928 16:08:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:03.928 16:08:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:03.928 16:08:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:03.928 16:08:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:03.928 16:08:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:03.928 16:08:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:03.928 16:08:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
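
Each iteration of this trace follows the same pattern: auth.sh programs one DH-HMAC-CHAP secret on the target side (nvmet_auth_set_key), restricts the host to a single digest/dhgroup pair, attaches a controller with the matching key material, checks that the controller actually came up, and detaches before the next combination. A minimal sketch of one such iteration is below, using only the RPCs visible in the trace; rpc_cmd is the test suite's JSON-RPC wrapper, and the sketch assumes the secrets were already registered under the names key3/ckey3 earlier in auth.sh, outside this excerpt.

    digest=sha256 dhgroup=ffdhe8192 keyid=3

    # Allow only the digest/dhgroup pair under test for DH-HMAC-CHAP.
    rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

    # Connect with bidirectional authentication: key3 authenticates the
    # host to the controller, ckey3 authenticates the controller back.
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key "key${keyid}" --dhchap-ctrlr-key "ckey${keyid}"

    # The controller only shows up if authentication succeeded.
    [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]

    # Tear down before the next digest/dhgroup/keyid combination.
    rpc_cmd bdev_nvme_detach_controller nvme0
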
00:27:03.928 16:08:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:03.928 16:08:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:03.928 16:08:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:03.928 16:08:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:03.928 16:08:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:03.928 16:08:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:03.928 16:08:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:04.494 nvme0n1 00:27:04.494 16:08:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:04.494 16:08:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:04.494 16:08:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:04.494 16:08:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:04.494 16:08:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:04.494 16:08:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:04.494 16:08:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:04.494 16:08:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:04.494 16:08:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:04.494 16:08:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:04.494 16:08:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:04.494 16:08:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:04.494 16:08:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:27:04.494 16:08:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:04.494 16:08:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:04.494 16:08:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:04.494 16:08:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:04.494 16:08:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MDM1ZjhlMTI4Y2MyMDkzODk1OTRkYTMwMGY3M2Q2NmM0OWQ4Y2M5NTdmNGZkZjYzYTVhMjVmNmEwZGYzZGI4YSJezX8=: 00:27:04.494 16:08:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:04.494 16:08:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:04.494 16:08:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:04.494 16:08:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MDM1ZjhlMTI4Y2MyMDkzODk1OTRkYTMwMGY3M2Q2NmM0OWQ4Y2M5NTdmNGZkZjYzYTVhMjVmNmEwZGYzZGI4YSJezX8=: 00:27:04.494 16:08:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:04.494 16:08:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:27:04.494 16:08:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:04.494 16:08:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:04.494 16:08:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:04.494 
16:08:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:04.494 16:08:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:04.494 16:08:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:27:04.494 16:08:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:04.494 16:08:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:04.494 16:08:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:04.494 16:08:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:04.494 16:08:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:04.494 16:08:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:04.494 16:08:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:04.494 16:08:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:04.494 16:08:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:04.494 16:08:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:04.494 16:08:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:04.494 16:08:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:04.494 16:08:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:04.494 16:08:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:04.494 16:08:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:04.494 16:08:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:04.494 16:08:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:05.058 nvme0n1 00:27:05.058 16:08:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:05.058 16:08:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:05.058 16:08:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:05.058 16:08:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:05.058 16:08:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:05.058 16:08:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:05.316 16:08:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:05.316 16:08:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:05.316 16:08:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:05.316 16:08:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:05.316 16:08:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:05.316 16:08:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:27:05.316 16:08:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:05.316 16:08:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:05.316 16:08:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe2048 0 00:27:05.316 16:08:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:05.316 16:08:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:05.316 16:08:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:05.316 16:08:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:05.316 16:08:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MWVhOTM0NTE0MDFmNTNjMWZiNmVlY2IwMDdkNmU2M2QOY6Ga: 00:27:05.316 16:08:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OGU3NzkyYzU4ZmE1MGI4MjRkYTM3MDAzMzU4OTQ2MzgzZjA1MzU4MjQwMTdmZDRlYmMyNjc5MzUzOTI5MGYzMTobRkk=: 00:27:05.316 16:08:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:05.316 16:08:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:05.316 16:08:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MWVhOTM0NTE0MDFmNTNjMWZiNmVlY2IwMDdkNmU2M2QOY6Ga: 00:27:05.316 16:08:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OGU3NzkyYzU4ZmE1MGI4MjRkYTM3MDAzMzU4OTQ2MzgzZjA1MzU4MjQwMTdmZDRlYmMyNjc5MzUzOTI5MGYzMTobRkk=: ]] 00:27:05.316 16:08:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OGU3NzkyYzU4ZmE1MGI4MjRkYTM3MDAzMzU4OTQ2MzgzZjA1MzU4MjQwMTdmZDRlYmMyNjc5MzUzOTI5MGYzMTobRkk=: 00:27:05.316 16:08:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:27:05.316 16:08:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:05.316 16:08:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:05.316 16:08:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:05.316 16:08:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:05.316 16:08:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:05.316 16:08:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:27:05.316 16:08:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:05.316 16:08:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:05.316 16:08:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:05.316 16:08:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:05.316 16:08:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:05.316 16:08:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:05.316 16:08:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:05.316 16:08:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:05.316 16:08:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:05.316 16:08:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:05.316 16:08:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:05.316 16:08:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:05.316 16:08:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:05.316 16:08:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:05.316 16:08:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller 
-b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:05.316 16:08:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:05.316 16:08:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:05.316 nvme0n1 00:27:05.316 16:08:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:05.316 16:08:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:05.316 16:08:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:05.316 16:08:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:05.316 16:08:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:05.316 16:08:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:05.316 16:08:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:05.316 16:08:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:05.316 16:08:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:05.316 16:08:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:05.316 16:08:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:05.316 16:08:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:05.316 16:08:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:27:05.316 16:08:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:05.316 16:08:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:05.316 16:08:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:05.316 16:08:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:05.573 16:08:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzY2YTdmYTcyMzBlYmY3NDM1MGZlZmM5OTNmODI3MjQxNmE0MjJmMzY0YjY0YWM5bmI3Ng==: 00:27:05.573 16:08:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YmNlMTdiZjRmNTU1YzM4ZTY3OTFlNTExYzc4MjE2MWI0ZDI2ZmUyODc4OTcwMjBkWICT/g==: 00:27:05.573 16:08:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:05.573 16:08:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:05.573 16:08:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzY2YTdmYTcyMzBlYmY3NDM1MGZlZmM5OTNmODI3MjQxNmE0MjJmMzY0YjY0YWM5bmI3Ng==: 00:27:05.573 16:08:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YmNlMTdiZjRmNTU1YzM4ZTY3OTFlNTExYzc4MjE2MWI0ZDI2ZmUyODc4OTcwMjBkWICT/g==: ]] 00:27:05.573 16:08:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YmNlMTdiZjRmNTU1YzM4ZTY3OTFlNTExYzc4MjE2MWI0ZDI2ZmUyODc4OTcwMjBkWICT/g==: 00:27:05.573 16:08:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:27:05.573 16:08:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:05.573 16:08:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:05.574 16:08:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:05.574 16:08:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:05.574 16:08:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 
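
The ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) expansion traced at host/auth.sh@58 above is what lets one attach command cover both bidirectional and unidirectional authentication: the optional --dhchap-ctrlr-key arguments are collected in an array that expands to zero words when no controller key is configured, which is the keyid=4 case seen earlier in this trace (ckey=, then [[ -z '' ]], then an attach with --dhchap-key key4 only). A small self-contained illustration of the idiom, with placeholder key values rather than the real test secrets:

    # keyid 1 has a controller key, keyid 4 does not (as in the trace);
    # the DHHC-1 value is a placeholder, not a real secret.
    ckeys=([1]="DHHC-1:00:placeholder:" [4]="")

    for keyid in 1 4; do
        ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
        echo bdev_nvme_attach_controller --dhchap-key "key${keyid}" "${ckey[@]}"
    done
    # Prints:
    #   bdev_nvme_attach_controller --dhchap-key key1 --dhchap-ctrlr-key ckey1
    #   bdev_nvme_attach_controller --dhchap-key key4
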
00:27:05.574 16:08:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:27:05.574 16:08:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:05.574 16:08:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:05.574 16:08:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:05.574 16:08:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:05.574 16:08:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:05.574 16:08:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:05.574 16:08:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:05.574 16:08:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:05.574 16:08:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:05.574 16:08:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:05.574 16:08:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:05.574 16:08:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:05.574 16:08:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:05.574 16:08:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:05.574 16:08:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:05.574 16:08:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:05.574 16:08:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:05.574 nvme0n1 00:27:05.574 16:08:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:05.574 16:08:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:05.574 16:08:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:05.574 16:08:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:05.574 16:08:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:05.574 16:08:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:05.574 16:08:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:05.574 16:08:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:05.574 16:08:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:05.574 16:08:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:05.574 16:08:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:05.574 16:08:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:05.574 16:08:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:27:05.574 16:08:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:05.574 16:08:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:05.574 16:08:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:05.574 16:08:34 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@44 -- # keyid=2 00:27:05.574 16:08:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MTA4MTgyMmZmYWRmYTc5ZjYyOWI2NjQyOTU0ZGQzZDG/LO1A: 00:27:05.574 16:08:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NDNkZmQ3NGE5ZjVjZDIwZGRjODZjZWQ4NjUwYTI3NDBSxEQU: 00:27:05.574 16:08:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:05.574 16:08:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:05.574 16:08:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MTA4MTgyMmZmYWRmYTc5ZjYyOWI2NjQyOTU0ZGQzZDG/LO1A: 00:27:05.574 16:08:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NDNkZmQ3NGE5ZjVjZDIwZGRjODZjZWQ4NjUwYTI3NDBSxEQU: ]] 00:27:05.574 16:08:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NDNkZmQ3NGE5ZjVjZDIwZGRjODZjZWQ4NjUwYTI3NDBSxEQU: 00:27:05.574 16:08:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:27:05.574 16:08:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:05.574 16:08:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:05.574 16:08:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:05.574 16:08:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:05.574 16:08:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:05.574 16:08:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:27:05.574 16:08:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:05.574 16:08:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:05.574 16:08:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:05.574 16:08:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:05.574 16:08:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:05.574 16:08:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:05.574 16:08:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:05.574 16:08:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:05.574 16:08:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:05.574 16:08:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:05.574 16:08:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:05.574 16:08:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:05.574 16:08:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:05.574 16:08:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:05.574 16:08:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:05.574 16:08:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:05.574 16:08:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:05.832 nvme0n1 00:27:05.832 16:08:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:05.832 16:08:34 nvmf_tcp.nvmf_auth_host 
-- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:05.832 16:08:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:05.832 16:08:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:05.832 16:08:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:05.832 16:08:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:05.832 16:08:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:05.832 16:08:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:05.832 16:08:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:05.832 16:08:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:05.832 16:08:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:05.832 16:08:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:05.832 16:08:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:27:05.832 16:08:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:05.832 16:08:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:05.832 16:08:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:05.832 16:08:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:05.832 16:08:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZTdhMjdjNDJmNzgxZTI3NzE0MzNjODllZjQ5YzY5MmZhMjQ3MGViOTAwYmQzN2Rmae8/mw==: 00:27:05.832 16:08:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NzNiZGMwNDU3YzUyNzVkNzkzNzQ4YjY4YTkxMTljMzlKbnty: 00:27:05.832 16:08:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:05.832 16:08:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:05.832 16:08:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZTdhMjdjNDJmNzgxZTI3NzE0MzNjODllZjQ5YzY5MmZhMjQ3MGViOTAwYmQzN2Rmae8/mw==: 00:27:05.832 16:08:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NzNiZGMwNDU3YzUyNzVkNzkzNzQ4YjY4YTkxMTljMzlKbnty: ]] 00:27:05.832 16:08:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NzNiZGMwNDU3YzUyNzVkNzkzNzQ4YjY4YTkxMTljMzlKbnty: 00:27:05.832 16:08:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:27:05.832 16:08:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:05.832 16:08:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:05.832 16:08:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:05.832 16:08:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:05.832 16:08:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:05.832 16:08:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:27:05.832 16:08:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:05.832 16:08:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:05.832 16:08:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:05.832 16:08:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:05.832 16:08:34 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@741 -- # local ip 00:27:05.832 16:08:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:05.832 16:08:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:05.832 16:08:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:05.832 16:08:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:05.832 16:08:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:05.832 16:08:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:05.832 16:08:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:05.832 16:08:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:05.832 16:08:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:05.832 16:08:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:05.832 16:08:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:05.832 16:08:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:06.091 nvme0n1 00:27:06.091 16:08:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:06.091 16:08:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:06.091 16:08:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:06.091 16:08:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:06.091 16:08:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:06.091 16:08:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:06.091 16:08:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:06.091 16:08:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:06.091 16:08:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:06.091 16:08:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:06.091 16:08:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:06.091 16:08:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:06.091 16:08:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:27:06.091 16:08:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:06.091 16:08:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:06.091 16:08:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:06.091 16:08:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:06.091 16:08:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MDM1ZjhlMTI4Y2MyMDkzODk1OTRkYTMwMGY3M2Q2NmM0OWQ4Y2M5NTdmNGZkZjYzYTVhMjVmNmEwZGYzZGI4YSJezX8=: 00:27:06.091 16:08:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:06.091 16:08:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:06.091 16:08:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:06.091 16:08:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:MDM1ZjhlMTI4Y2MyMDkzODk1OTRkYTMwMGY3M2Q2NmM0OWQ4Y2M5NTdmNGZkZjYzYTVhMjVmNmEwZGYzZGI4YSJezX8=: 00:27:06.091 16:08:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:06.091 16:08:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:27:06.091 16:08:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:06.091 16:08:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:06.091 16:08:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:06.091 16:08:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:06.091 16:08:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:06.091 16:08:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:27:06.091 16:08:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:06.091 16:08:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:06.091 16:08:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:06.091 16:08:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:06.091 16:08:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:06.091 16:08:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:06.091 16:08:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:06.091 16:08:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:06.091 16:08:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:06.091 16:08:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:06.091 16:08:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:06.091 16:08:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:06.091 16:08:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:06.091 16:08:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:06.091 16:08:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:06.091 16:08:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:06.091 16:08:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:06.350 nvme0n1 00:27:06.350 16:08:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:06.350 16:08:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:06.350 16:08:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:06.350 16:08:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:06.350 16:08:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:06.350 16:08:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:06.350 16:08:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:06.350 16:08:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:06.350 16:08:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- 
# xtrace_disable 00:27:06.350 16:08:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:06.350 16:08:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:06.350 16:08:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:06.350 16:08:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:06.350 16:08:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:27:06.350 16:08:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:06.350 16:08:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:06.350 16:08:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:06.350 16:08:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:06.350 16:08:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MWVhOTM0NTE0MDFmNTNjMWZiNmVlY2IwMDdkNmU2M2QOY6Ga: 00:27:06.350 16:08:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OGU3NzkyYzU4ZmE1MGI4MjRkYTM3MDAzMzU4OTQ2MzgzZjA1MzU4MjQwMTdmZDRlYmMyNjc5MzUzOTI5MGYzMTobRkk=: 00:27:06.350 16:08:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:06.350 16:08:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:06.350 16:08:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MWVhOTM0NTE0MDFmNTNjMWZiNmVlY2IwMDdkNmU2M2QOY6Ga: 00:27:06.350 16:08:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OGU3NzkyYzU4ZmE1MGI4MjRkYTM3MDAzMzU4OTQ2MzgzZjA1MzU4MjQwMTdmZDRlYmMyNjc5MzUzOTI5MGYzMTobRkk=: ]] 00:27:06.350 16:08:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OGU3NzkyYzU4ZmE1MGI4MjRkYTM3MDAzMzU4OTQ2MzgzZjA1MzU4MjQwMTdmZDRlYmMyNjc5MzUzOTI5MGYzMTobRkk=: 00:27:06.350 16:08:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:27:06.350 16:08:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:06.350 16:08:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:06.350 16:08:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:06.350 16:08:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:06.350 16:08:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:06.350 16:08:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:27:06.350 16:08:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:06.350 16:08:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:06.350 16:08:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:06.350 16:08:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:06.350 16:08:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:06.350 16:08:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:06.350 16:08:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:06.350 16:08:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:06.350 16:08:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:06.350 16:08:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
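The get_main_ns_ip helper whose trace runs through here (nvmf/common.sh@741-@755) only resolves which address the initiator should dial: it maps the transport to the name of an environment variable and then dereferences that name. A sketch reconstructed from the trace; TEST_TRANSPORT is an assumed name for the transport variable, which expands to tcp in this run:

  # Sketch: pick the variable *name* for the current transport, then
  # dereference it ('tcp' -> NVMF_INITIATOR_IP -> 10.0.0.1 in this log).
  get_main_ns_ip() {
      local ip
      local -A ip_candidates=(
          ["rdma"]=NVMF_FIRST_TARGET_IP
          ["tcp"]=NVMF_INITIATOR_IP
      )
      [[ -z $TEST_TRANSPORT ]] && return 1
      [[ -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
      ip=${ip_candidates[$TEST_TRANSPORT]}
      [[ -z ${!ip} ]] && return 1   # indirect expansion of e.g. NVMF_INITIATOR_IP
      echo "${!ip}"
  }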
00:27:06.350 16:08:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:27:06.350 16:08:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:27:06.350 16:08:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:27:06.350 16:08:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:27:06.350 16:08:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:27:06.350 16:08:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:27:06.350 16:08:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:06.609 nvme0n1
00:27:06.609 16:08:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:27:06.609 16:08:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:27:06.609 16:08:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:27:06.609 16:08:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:27:06.609 16:08:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:06.609 16:08:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:27:06.609 16:08:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:27:06.609 16:08:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:27:06.609 16:08:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:27:06.609 16:08:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:06.609 16:08:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:27:06.609 16:08:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:27:06.609 16:08:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1
00:27:06.609 16:08:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:27:06.609 16:08:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:27:06.609 16:08:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:27:06.609 16:08:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:27:06.609 16:08:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzY2YTdmYTcyMzBlYmY3NDM1MGZlZmM5OTNmODI3MjQxNmE0MjJmMzY0YjY0YWM5bmI3Ng==:
00:27:06.609 16:08:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YmNlMTdiZjRmNTU1YzM4ZTY3OTFlNTExYzc4MjE2MWI0ZDI2ZmUyODc4OTcwMjBkWICT/g==:
00:27:06.609 16:08:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:27:06.609 16:08:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:27:06.609 16:08:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzY2YTdmYTcyMzBlYmY3NDM1MGZlZmM5OTNmODI3MjQxNmE0MjJmMzY0YjY0YWM5bmI3Ng==:
00:27:06.609 16:08:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YmNlMTdiZjRmNTU1YzM4ZTY3OTFlNTExYzc4MjE2MWI0ZDI2ZmUyODc4OTcwMjBkWICT/g==: ]]
00:27:06.609 16:08:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YmNlMTdiZjRmNTU1YzM4ZTY3OTFlNTExYzc4MjE2MWI0ZDI2ZmUyODc4OTcwMjBkWICT/g==:
00:27:06.609 16:08:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1
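Every keyid round in this section follows the same host-side cycle just traced: restrict the initiator to one digest/dhgroup pair, attach with the matching key names, treat the existence of the resulting controller as the pass condition, and detach. A sketch of one round driven over JSON-RPC; the scripts/rpc.py invocation path is assumed (the log wraps it in rpc_cmd) and key1/ckey1 are key names registered earlier in the run, while the RPC names and flags are exactly those in the trace:

  # One connect_authenticate round (sha384 / ffdhe3072 / keyid 1), sketched.
  rpc=scripts/rpc.py   # assumed path; rpc_cmd is the test suite's wrapper

  $rpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
  $rpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key key1 --dhchap-ctrlr-key ckey1
  # If DH-HMAC-CHAP fails, the controller never appears, so this comparison is
  # the pass/fail check ('nvme0 == nvme0' in the trace above).
  [[ $($rpc bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]
  $rpc bdev_nvme_detach_controller nvme0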
00:27:06.609 16:08:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:06.609 16:08:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:06.609 16:08:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:06.609 16:08:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:06.609 16:08:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:06.609 16:08:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:27:06.609 16:08:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:06.609 16:08:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:06.609 16:08:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:06.609 16:08:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:06.609 16:08:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:06.609 16:08:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:06.609 16:08:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:06.609 16:08:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:06.609 16:08:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:06.609 16:08:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:06.609 16:08:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:06.609 16:08:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:06.609 16:08:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:06.609 16:08:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:06.609 16:08:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:06.609 16:08:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:06.609 16:08:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:06.867 nvme0n1 00:27:06.867 16:08:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:06.867 16:08:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:06.867 16:08:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:06.867 16:08:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:06.867 16:08:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:06.867 16:08:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:06.867 16:08:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:06.867 16:08:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:06.867 16:08:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:06.867 16:08:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:06.867 16:08:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:06.867 16:08:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 
-- # for keyid in "${!keys[@]}" 00:27:06.867 16:08:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:27:06.868 16:08:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:06.868 16:08:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:06.868 16:08:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:06.868 16:08:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:06.868 16:08:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MTA4MTgyMmZmYWRmYTc5ZjYyOWI2NjQyOTU0ZGQzZDG/LO1A: 00:27:06.868 16:08:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NDNkZmQ3NGE5ZjVjZDIwZGRjODZjZWQ4NjUwYTI3NDBSxEQU: 00:27:06.868 16:08:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:06.868 16:08:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:06.868 16:08:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MTA4MTgyMmZmYWRmYTc5ZjYyOWI2NjQyOTU0ZGQzZDG/LO1A: 00:27:06.868 16:08:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NDNkZmQ3NGE5ZjVjZDIwZGRjODZjZWQ4NjUwYTI3NDBSxEQU: ]] 00:27:06.868 16:08:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NDNkZmQ3NGE5ZjVjZDIwZGRjODZjZWQ4NjUwYTI3NDBSxEQU: 00:27:06.868 16:08:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:27:06.868 16:08:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:06.868 16:08:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:06.868 16:08:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:06.868 16:08:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:06.868 16:08:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:06.868 16:08:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:27:06.868 16:08:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:06.868 16:08:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:06.868 16:08:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:06.868 16:08:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:06.868 16:08:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:06.868 16:08:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:06.868 16:08:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:06.868 16:08:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:06.868 16:08:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:06.868 16:08:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:06.868 16:08:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:06.868 16:08:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:06.868 16:08:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:06.868 16:08:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:06.868 16:08:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:06.868 16:08:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:06.868 16:08:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:07.126 nvme0n1 00:27:07.126 16:08:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:07.126 16:08:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:07.126 16:08:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:07.126 16:08:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:07.126 16:08:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:07.126 16:08:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:07.126 16:08:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:07.126 16:08:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:07.127 16:08:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:07.127 16:08:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:07.127 16:08:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:07.127 16:08:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:07.127 16:08:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:27:07.127 16:08:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:07.127 16:08:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:07.127 16:08:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:07.127 16:08:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:07.127 16:08:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZTdhMjdjNDJmNzgxZTI3NzE0MzNjODllZjQ5YzY5MmZhMjQ3MGViOTAwYmQzN2Rmae8/mw==: 00:27:07.127 16:08:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NzNiZGMwNDU3YzUyNzVkNzkzNzQ4YjY4YTkxMTljMzlKbnty: 00:27:07.127 16:08:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:07.127 16:08:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:07.127 16:08:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZTdhMjdjNDJmNzgxZTI3NzE0MzNjODllZjQ5YzY5MmZhMjQ3MGViOTAwYmQzN2Rmae8/mw==: 00:27:07.127 16:08:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NzNiZGMwNDU3YzUyNzVkNzkzNzQ4YjY4YTkxMTljMzlKbnty: ]] 00:27:07.127 16:08:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NzNiZGMwNDU3YzUyNzVkNzkzNzQ4YjY4YTkxMTljMzlKbnty: 00:27:07.127 16:08:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:27:07.127 16:08:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:07.127 16:08:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:07.127 16:08:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:07.127 16:08:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:07.127 16:08:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:07.127 16:08:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:27:07.127 16:08:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:07.127 16:08:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:07.127 16:08:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:07.127 16:08:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:07.127 16:08:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:07.127 16:08:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:07.127 16:08:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:07.127 16:08:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:07.127 16:08:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:07.127 16:08:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:07.127 16:08:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:07.127 16:08:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:07.127 16:08:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:07.127 16:08:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:07.127 16:08:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:07.127 16:08:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:07.127 16:08:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:07.385 nvme0n1 00:27:07.385 16:08:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:07.385 16:08:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:07.385 16:08:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:07.385 16:08:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:07.385 16:08:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:07.385 16:08:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:07.385 16:08:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:07.385 16:08:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:07.385 16:08:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:07.385 16:08:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:07.385 16:08:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:07.385 16:08:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:07.385 16:08:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:27:07.385 16:08:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:07.385 16:08:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:07.385 16:08:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:07.385 16:08:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:07.385 16:08:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:MDM1ZjhlMTI4Y2MyMDkzODk1OTRkYTMwMGY3M2Q2NmM0OWQ4Y2M5NTdmNGZkZjYzYTVhMjVmNmEwZGYzZGI4YSJezX8=: 00:27:07.385 16:08:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:07.385 16:08:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:07.385 16:08:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:07.385 16:08:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MDM1ZjhlMTI4Y2MyMDkzODk1OTRkYTMwMGY3M2Q2NmM0OWQ4Y2M5NTdmNGZkZjYzYTVhMjVmNmEwZGYzZGI4YSJezX8=: 00:27:07.385 16:08:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:07.385 16:08:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:27:07.385 16:08:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:07.385 16:08:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:07.385 16:08:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:07.385 16:08:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:07.385 16:08:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:07.385 16:08:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:27:07.385 16:08:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:07.385 16:08:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:07.385 16:08:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:07.385 16:08:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:07.385 16:08:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:07.385 16:08:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:07.385 16:08:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:07.385 16:08:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:07.385 16:08:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:07.385 16:08:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:07.385 16:08:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:07.385 16:08:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:07.385 16:08:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:07.385 16:08:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:07.385 16:08:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:07.385 16:08:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:07.386 16:08:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:07.644 nvme0n1 00:27:07.644 16:08:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:07.644 16:08:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:07.644 16:08:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:07.644 16:08:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:07.644 16:08:36 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:07.644 16:08:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:07.644 16:08:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:07.644 16:08:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:07.644 16:08:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:07.644 16:08:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:07.644 16:08:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:07.644 16:08:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:07.644 16:08:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:07.644 16:08:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:27:07.644 16:08:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:07.644 16:08:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:07.644 16:08:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:07.644 16:08:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:07.644 16:08:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MWVhOTM0NTE0MDFmNTNjMWZiNmVlY2IwMDdkNmU2M2QOY6Ga: 00:27:07.644 16:08:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OGU3NzkyYzU4ZmE1MGI4MjRkYTM3MDAzMzU4OTQ2MzgzZjA1MzU4MjQwMTdmZDRlYmMyNjc5MzUzOTI5MGYzMTobRkk=: 00:27:07.644 16:08:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:07.644 16:08:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:07.644 16:08:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MWVhOTM0NTE0MDFmNTNjMWZiNmVlY2IwMDdkNmU2M2QOY6Ga: 00:27:07.644 16:08:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OGU3NzkyYzU4ZmE1MGI4MjRkYTM3MDAzMzU4OTQ2MzgzZjA1MzU4MjQwMTdmZDRlYmMyNjc5MzUzOTI5MGYzMTobRkk=: ]] 00:27:07.644 16:08:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OGU3NzkyYzU4ZmE1MGI4MjRkYTM3MDAzMzU4OTQ2MzgzZjA1MzU4MjQwMTdmZDRlYmMyNjc5MzUzOTI5MGYzMTobRkk=: 00:27:07.644 16:08:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:27:07.644 16:08:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:07.644 16:08:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:07.644 16:08:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:07.644 16:08:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:07.644 16:08:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:07.644 16:08:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:27:07.644 16:08:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:07.644 16:08:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:07.644 16:08:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:07.644 16:08:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:07.644 16:08:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:07.644 16:08:36 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:27:07.644 16:08:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:07.644 16:08:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:07.644 16:08:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:07.644 16:08:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:07.644 16:08:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:07.644 16:08:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:07.644 16:08:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:07.644 16:08:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:07.644 16:08:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:07.644 16:08:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:07.644 16:08:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:07.902 nvme0n1 00:27:07.902 16:08:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:07.902 16:08:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:07.902 16:08:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:07.902 16:08:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:07.902 16:08:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:07.902 16:08:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:07.902 16:08:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:07.902 16:08:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:07.902 16:08:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:07.902 16:08:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:07.902 16:08:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:07.902 16:08:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:07.902 16:08:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:27:07.902 16:08:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:07.902 16:08:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:07.902 16:08:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:07.902 16:08:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:07.902 16:08:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzY2YTdmYTcyMzBlYmY3NDM1MGZlZmM5OTNmODI3MjQxNmE0MjJmMzY0YjY0YWM5bmI3Ng==: 00:27:07.902 16:08:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YmNlMTdiZjRmNTU1YzM4ZTY3OTFlNTExYzc4MjE2MWI0ZDI2ZmUyODc4OTcwMjBkWICT/g==: 00:27:07.902 16:08:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:07.902 16:08:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:07.902 16:08:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:MzY2YTdmYTcyMzBlYmY3NDM1MGZlZmM5OTNmODI3MjQxNmE0MjJmMzY0YjY0YWM5bmI3Ng==: 00:27:07.902 16:08:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YmNlMTdiZjRmNTU1YzM4ZTY3OTFlNTExYzc4MjE2MWI0ZDI2ZmUyODc4OTcwMjBkWICT/g==: ]] 00:27:07.902 16:08:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YmNlMTdiZjRmNTU1YzM4ZTY3OTFlNTExYzc4MjE2MWI0ZDI2ZmUyODc4OTcwMjBkWICT/g==: 00:27:07.902 16:08:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:27:07.902 16:08:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:07.902 16:08:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:07.902 16:08:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:07.902 16:08:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:07.902 16:08:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:07.902 16:08:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:27:07.902 16:08:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:07.902 16:08:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:07.902 16:08:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:07.902 16:08:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:07.902 16:08:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:07.902 16:08:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:07.902 16:08:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:07.902 16:08:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:07.902 16:08:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:07.902 16:08:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:07.902 16:08:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:07.902 16:08:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:07.902 16:08:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:07.902 16:08:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:07.902 16:08:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:07.902 16:08:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:07.902 16:08:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:08.159 nvme0n1 00:27:08.159 16:08:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:08.159 16:08:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:08.159 16:08:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:08.159 16:08:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:08.159 16:08:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:08.159 16:08:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:08.159 16:08:37 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:08.159 16:08:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:08.159 16:08:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:08.159 16:08:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:08.159 16:08:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:08.159 16:08:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:08.159 16:08:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:27:08.159 16:08:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:08.159 16:08:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:08.159 16:08:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:08.159 16:08:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:08.159 16:08:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MTA4MTgyMmZmYWRmYTc5ZjYyOWI2NjQyOTU0ZGQzZDG/LO1A: 00:27:08.159 16:08:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NDNkZmQ3NGE5ZjVjZDIwZGRjODZjZWQ4NjUwYTI3NDBSxEQU: 00:27:08.159 16:08:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:08.159 16:08:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:08.159 16:08:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MTA4MTgyMmZmYWRmYTc5ZjYyOWI2NjQyOTU0ZGQzZDG/LO1A: 00:27:08.159 16:08:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NDNkZmQ3NGE5ZjVjZDIwZGRjODZjZWQ4NjUwYTI3NDBSxEQU: ]] 00:27:08.159 16:08:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NDNkZmQ3NGE5ZjVjZDIwZGRjODZjZWQ4NjUwYTI3NDBSxEQU: 00:27:08.159 16:08:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:27:08.159 16:08:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:08.159 16:08:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:08.159 16:08:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:08.159 16:08:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:08.159 16:08:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:08.159 16:08:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:27:08.159 16:08:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:08.159 16:08:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:08.159 16:08:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:08.159 16:08:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:08.416 16:08:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:08.416 16:08:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:08.416 16:08:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:08.416 16:08:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:08.416 16:08:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:08.416 16:08:37 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:08.416 16:08:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:08.416 16:08:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:08.416 16:08:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:08.416 16:08:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:08.416 16:08:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:08.416 16:08:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:08.416 16:08:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:08.416 nvme0n1 00:27:08.416 16:08:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:08.416 16:08:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:08.416 16:08:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:08.416 16:08:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:08.416 16:08:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:08.673 16:08:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:08.673 16:08:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:08.673 16:08:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:08.673 16:08:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:08.673 16:08:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:08.673 16:08:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:08.673 16:08:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:08.673 16:08:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:27:08.673 16:08:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:08.673 16:08:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:08.673 16:08:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:08.673 16:08:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:08.673 16:08:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZTdhMjdjNDJmNzgxZTI3NzE0MzNjODllZjQ5YzY5MmZhMjQ3MGViOTAwYmQzN2Rmae8/mw==: 00:27:08.673 16:08:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NzNiZGMwNDU3YzUyNzVkNzkzNzQ4YjY4YTkxMTljMzlKbnty: 00:27:08.673 16:08:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:08.673 16:08:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:08.673 16:08:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZTdhMjdjNDJmNzgxZTI3NzE0MzNjODllZjQ5YzY5MmZhMjQ3MGViOTAwYmQzN2Rmae8/mw==: 00:27:08.673 16:08:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NzNiZGMwNDU3YzUyNzVkNzkzNzQ4YjY4YTkxMTljMzlKbnty: ]] 00:27:08.673 16:08:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NzNiZGMwNDU3YzUyNzVkNzkzNzQ4YjY4YTkxMTljMzlKbnty: 00:27:08.673 16:08:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:27:08.673 16:08:37 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:08.673 16:08:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:08.673 16:08:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:08.673 16:08:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:08.673 16:08:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:08.673 16:08:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:27:08.673 16:08:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:08.673 16:08:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:08.673 16:08:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:08.673 16:08:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:08.673 16:08:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:08.673 16:08:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:08.673 16:08:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:08.673 16:08:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:08.673 16:08:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:08.673 16:08:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:08.673 16:08:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:08.673 16:08:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:08.673 16:08:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:08.673 16:08:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:08.673 16:08:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:08.673 16:08:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:08.673 16:08:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:08.930 nvme0n1 00:27:08.930 16:08:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:08.930 16:08:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:08.930 16:08:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:08.930 16:08:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:08.930 16:08:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:08.930 16:08:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:08.930 16:08:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:08.930 16:08:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:08.930 16:08:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:08.930 16:08:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:08.930 16:08:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:08.930 16:08:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:27:08.930 16:08:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:27:08.931 16:08:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:08.931 16:08:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:08.931 16:08:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:08.931 16:08:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:08.931 16:08:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MDM1ZjhlMTI4Y2MyMDkzODk1OTRkYTMwMGY3M2Q2NmM0OWQ4Y2M5NTdmNGZkZjYzYTVhMjVmNmEwZGYzZGI4YSJezX8=: 00:27:08.931 16:08:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:08.931 16:08:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:08.931 16:08:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:08.931 16:08:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MDM1ZjhlMTI4Y2MyMDkzODk1OTRkYTMwMGY3M2Q2NmM0OWQ4Y2M5NTdmNGZkZjYzYTVhMjVmNmEwZGYzZGI4YSJezX8=: 00:27:08.931 16:08:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:08.931 16:08:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:27:08.931 16:08:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:08.931 16:08:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:08.931 16:08:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:08.931 16:08:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:08.931 16:08:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:08.931 16:08:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:27:08.931 16:08:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:08.931 16:08:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:08.931 16:08:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:08.931 16:08:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:08.931 16:08:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:08.931 16:08:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:08.931 16:08:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:08.931 16:08:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:08.931 16:08:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:08.931 16:08:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:08.931 16:08:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:08.931 16:08:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:08.931 16:08:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:08.931 16:08:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:08.931 16:08:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:08.931 16:08:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:27:08.931 16:08:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:09.188 nvme0n1 00:27:09.188 16:08:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:09.188 16:08:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:09.188 16:08:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:09.188 16:08:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:09.188 16:08:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:09.188 16:08:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:09.188 16:08:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:09.188 16:08:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:09.188 16:08:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:09.188 16:08:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:09.188 16:08:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:09.188 16:08:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:09.188 16:08:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:09.188 16:08:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:27:09.188 16:08:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:09.188 16:08:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:09.189 16:08:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:09.189 16:08:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:09.189 16:08:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MWVhOTM0NTE0MDFmNTNjMWZiNmVlY2IwMDdkNmU2M2QOY6Ga: 00:27:09.189 16:08:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OGU3NzkyYzU4ZmE1MGI4MjRkYTM3MDAzMzU4OTQ2MzgzZjA1MzU4MjQwMTdmZDRlYmMyNjc5MzUzOTI5MGYzMTobRkk=: 00:27:09.189 16:08:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:09.189 16:08:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:09.189 16:08:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MWVhOTM0NTE0MDFmNTNjMWZiNmVlY2IwMDdkNmU2M2QOY6Ga: 00:27:09.189 16:08:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OGU3NzkyYzU4ZmE1MGI4MjRkYTM3MDAzMzU4OTQ2MzgzZjA1MzU4MjQwMTdmZDRlYmMyNjc5MzUzOTI5MGYzMTobRkk=: ]] 00:27:09.189 16:08:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OGU3NzkyYzU4ZmE1MGI4MjRkYTM3MDAzMzU4OTQ2MzgzZjA1MzU4MjQwMTdmZDRlYmMyNjc5MzUzOTI5MGYzMTobRkk=: 00:27:09.189 16:08:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:27:09.189 16:08:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:09.189 16:08:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:09.189 16:08:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:09.189 16:08:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:09.189 16:08:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:09.189 16:08:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests 
sha384 --dhchap-dhgroups ffdhe6144 00:27:09.189 16:08:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:09.189 16:08:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:09.189 16:08:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:09.189 16:08:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:09.189 16:08:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:09.189 16:08:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:09.189 16:08:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:09.189 16:08:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:09.189 16:08:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:09.189 16:08:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:09.189 16:08:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:09.189 16:08:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:09.189 16:08:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:09.189 16:08:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:09.189 16:08:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:09.189 16:08:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:09.189 16:08:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:09.754 nvme0n1 00:27:09.754 16:08:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:09.754 16:08:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:09.754 16:08:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:09.754 16:08:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:09.754 16:08:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:09.754 16:08:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:09.754 16:08:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:09.754 16:08:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:09.754 16:08:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:09.754 16:08:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:09.754 16:08:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:09.754 16:08:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:09.754 16:08:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:27:09.754 16:08:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:09.754 16:08:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:09.754 16:08:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:09.754 16:08:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:09.754 16:08:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:MzY2YTdmYTcyMzBlYmY3NDM1MGZlZmM5OTNmODI3MjQxNmE0MjJmMzY0YjY0YWM5bmI3Ng==: 00:27:09.754 16:08:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YmNlMTdiZjRmNTU1YzM4ZTY3OTFlNTExYzc4MjE2MWI0ZDI2ZmUyODc4OTcwMjBkWICT/g==: 00:27:09.754 16:08:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:09.754 16:08:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:09.754 16:08:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzY2YTdmYTcyMzBlYmY3NDM1MGZlZmM5OTNmODI3MjQxNmE0MjJmMzY0YjY0YWM5bmI3Ng==: 00:27:09.754 16:08:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YmNlMTdiZjRmNTU1YzM4ZTY3OTFlNTExYzc4MjE2MWI0ZDI2ZmUyODc4OTcwMjBkWICT/g==: ]] 00:27:09.754 16:08:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YmNlMTdiZjRmNTU1YzM4ZTY3OTFlNTExYzc4MjE2MWI0ZDI2ZmUyODc4OTcwMjBkWICT/g==: 00:27:09.754 16:08:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:27:09.754 16:08:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:09.754 16:08:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:09.754 16:08:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:09.754 16:08:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:09.754 16:08:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:09.754 16:08:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:27:09.754 16:08:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:09.754 16:08:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:09.754 16:08:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:09.754 16:08:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:09.754 16:08:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:09.754 16:08:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:09.754 16:08:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:09.754 16:08:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:09.754 16:08:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:09.754 16:08:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:09.754 16:08:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:09.754 16:08:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:09.754 16:08:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:09.754 16:08:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:09.754 16:08:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:09.754 16:08:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:09.754 16:08:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:10.011 nvme0n1 00:27:10.011 16:08:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:10.011 16:08:38 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:10.011 16:08:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:10.011 16:08:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:10.011 16:08:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:10.011 16:08:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:10.011 16:08:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:10.011 16:08:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:10.011 16:08:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:10.011 16:08:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:10.011 16:08:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:10.011 16:08:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:10.011 16:08:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:27:10.011 16:08:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:10.011 16:08:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:10.011 16:08:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:10.011 16:08:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:10.011 16:08:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MTA4MTgyMmZmYWRmYTc5ZjYyOWI2NjQyOTU0ZGQzZDG/LO1A: 00:27:10.267 16:08:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NDNkZmQ3NGE5ZjVjZDIwZGRjODZjZWQ4NjUwYTI3NDBSxEQU: 00:27:10.267 16:08:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:10.267 16:08:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:10.267 16:08:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MTA4MTgyMmZmYWRmYTc5ZjYyOWI2NjQyOTU0ZGQzZDG/LO1A: 00:27:10.267 16:08:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NDNkZmQ3NGE5ZjVjZDIwZGRjODZjZWQ4NjUwYTI3NDBSxEQU: ]] 00:27:10.267 16:08:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NDNkZmQ3NGE5ZjVjZDIwZGRjODZjZWQ4NjUwYTI3NDBSxEQU: 00:27:10.267 16:08:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:27:10.267 16:08:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:10.267 16:08:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:10.267 16:08:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:10.267 16:08:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:10.267 16:08:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:10.267 16:08:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:27:10.267 16:08:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:10.267 16:08:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:10.267 16:08:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:10.267 16:08:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:10.267 16:08:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # 
local ip 00:27:10.267 16:08:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:10.267 16:08:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:10.267 16:08:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:10.267 16:08:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:10.267 16:08:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:10.267 16:08:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:10.267 16:08:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:10.267 16:08:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:10.267 16:08:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:10.267 16:08:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:10.267 16:08:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:10.267 16:08:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:10.524 nvme0n1 00:27:10.524 16:08:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:10.524 16:08:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:10.524 16:08:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:10.524 16:08:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:10.524 16:08:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:10.524 16:08:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:10.524 16:08:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:10.524 16:08:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:10.524 16:08:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:10.524 16:08:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:10.524 16:08:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:10.524 16:08:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:10.524 16:08:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:27:10.524 16:08:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:10.524 16:08:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:10.524 16:08:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:10.524 16:08:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:10.524 16:08:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZTdhMjdjNDJmNzgxZTI3NzE0MzNjODllZjQ5YzY5MmZhMjQ3MGViOTAwYmQzN2Rmae8/mw==: 00:27:10.524 16:08:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NzNiZGMwNDU3YzUyNzVkNzkzNzQ4YjY4YTkxMTljMzlKbnty: 00:27:10.524 16:08:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:10.524 16:08:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:10.524 16:08:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:02:ZTdhMjdjNDJmNzgxZTI3NzE0MzNjODllZjQ5YzY5MmZhMjQ3MGViOTAwYmQzN2Rmae8/mw==: 00:27:10.524 16:08:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NzNiZGMwNDU3YzUyNzVkNzkzNzQ4YjY4YTkxMTljMzlKbnty: ]] 00:27:10.524 16:08:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NzNiZGMwNDU3YzUyNzVkNzkzNzQ4YjY4YTkxMTljMzlKbnty: 00:27:10.525 16:08:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:27:10.525 16:08:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:10.525 16:08:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:10.525 16:08:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:10.525 16:08:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:10.525 16:08:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:10.525 16:08:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:27:10.525 16:08:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:10.525 16:08:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:10.525 16:08:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:10.525 16:08:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:10.525 16:08:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:10.525 16:08:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:10.525 16:08:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:10.525 16:08:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:10.525 16:08:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:10.525 16:08:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:10.525 16:08:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:10.525 16:08:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:10.525 16:08:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:10.525 16:08:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:10.525 16:08:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:10.525 16:08:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:10.525 16:08:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:11.089 nvme0n1 00:27:11.089 16:08:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:11.089 16:08:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:11.089 16:08:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:11.089 16:08:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:11.089 16:08:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:11.089 16:08:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:11.089 16:08:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 
]] 00:27:11.089 16:08:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:11.089 16:08:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:11.089 16:08:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:11.089 16:08:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:11.089 16:08:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:11.089 16:08:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:27:11.089 16:08:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:11.089 16:08:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:11.089 16:08:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:11.089 16:08:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:11.089 16:08:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MDM1ZjhlMTI4Y2MyMDkzODk1OTRkYTMwMGY3M2Q2NmM0OWQ4Y2M5NTdmNGZkZjYzYTVhMjVmNmEwZGYzZGI4YSJezX8=: 00:27:11.089 16:08:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:11.089 16:08:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:11.089 16:08:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:11.089 16:08:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MDM1ZjhlMTI4Y2MyMDkzODk1OTRkYTMwMGY3M2Q2NmM0OWQ4Y2M5NTdmNGZkZjYzYTVhMjVmNmEwZGYzZGI4YSJezX8=: 00:27:11.089 16:08:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:11.089 16:08:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:27:11.089 16:08:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:11.089 16:08:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:11.089 16:08:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:11.089 16:08:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:11.089 16:08:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:11.089 16:08:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:27:11.089 16:08:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:11.089 16:08:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:11.089 16:08:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:11.089 16:08:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:11.089 16:08:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:11.089 16:08:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:11.089 16:08:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:11.089 16:08:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:11.089 16:08:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:11.089 16:08:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:11.089 16:08:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:11.089 16:08:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 
00:27:11.089 16:08:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:11.089 16:08:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:11.089 16:08:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:11.089 16:08:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:11.089 16:08:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:11.347 nvme0n1 00:27:11.347 16:08:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:11.347 16:08:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:11.347 16:08:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:11.347 16:08:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:11.347 16:08:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:11.347 16:08:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:11.347 16:08:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:11.347 16:08:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:11.347 16:08:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:11.347 16:08:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:11.603 16:08:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:11.603 16:08:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:11.603 16:08:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:11.603 16:08:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:27:11.603 16:08:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:11.603 16:08:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:11.603 16:08:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:11.603 16:08:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:11.603 16:08:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MWVhOTM0NTE0MDFmNTNjMWZiNmVlY2IwMDdkNmU2M2QOY6Ga: 00:27:11.603 16:08:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OGU3NzkyYzU4ZmE1MGI4MjRkYTM3MDAzMzU4OTQ2MzgzZjA1MzU4MjQwMTdmZDRlYmMyNjc5MzUzOTI5MGYzMTobRkk=: 00:27:11.603 16:08:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:11.603 16:08:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:11.603 16:08:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MWVhOTM0NTE0MDFmNTNjMWZiNmVlY2IwMDdkNmU2M2QOY6Ga: 00:27:11.603 16:08:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OGU3NzkyYzU4ZmE1MGI4MjRkYTM3MDAzMzU4OTQ2MzgzZjA1MzU4MjQwMTdmZDRlYmMyNjc5MzUzOTI5MGYzMTobRkk=: ]] 00:27:11.603 16:08:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OGU3NzkyYzU4ZmE1MGI4MjRkYTM3MDAzMzU4OTQ2MzgzZjA1MzU4MjQwMTdmZDRlYmMyNjc5MzUzOTI5MGYzMTobRkk=: 00:27:11.603 16:08:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:27:11.603 16:08:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 
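
What repeats throughout this stretch of the trace is the core loop of the nvmf_auth_host test: for every (digest, dhgroup, keyid) combination, auth.sh first programs the key into the kernel nvmet target (the nvmet_auth_set_key lines, whose echo 'hmac(sha384)' / echo ffdhe8192 / echo DHHC-1:... writes land in the target's configfs auth attributes), then has connect_authenticate restrict the SPDK initiator to that same digest and DH group, attach a controller with the key pair, verify that a controller named nvme0 (and its namespace bdev, nvme0n1) actually appeared, and detach it again so the next combination starts from a fresh DH-HMAC-CHAP handshake. The ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) line is the bash idiom that appends the controller-key flag only when a bidirectional key exists for that keyid; keyid 4 has an empty ckey (the [[ -z '' ]] checks above), so those rounds authenticate the host only. As a minimal sketch, one round corresponds to the RPC sequence below; treating scripts/rpc.py as the backend of rpc_cmd is an assumption here, and key0/ckey0 stand for key names registered with the initiator earlier in the script, outside this excerpt:

    # One DH-HMAC-CHAP round as driven by connect_authenticate (a sketch, not
    # the verbatim auth.sh code; target already listens on 10.0.0.1:4420).
    digest=sha384 dhgroup=ffdhe8192 keyid=0

    # Initiator side: offer only the digest/DH-group pair under test.
    scripts/rpc.py bdev_nvme_set_options \
        --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

    # Connect with authentication; the ctrlr-key flag is present only for
    # bidirectional keys (cf. the ckey=(...) expansion in the trace).
    scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 \
        -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key "key$keyid" --dhchap-ctrlr-key "ckey$keyid"

    # Success criterion: the controller materialized under the expected name.
    scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name'    # -> nvme0

    # Tear down before the next (digest, dhgroup, keyid) combination.
    scripts/rpc.py bdev_nvme_detach_controller nvme0

The get_main_ns_ip block that precedes each attach is nvmf/common.sh picking the address for the transport under test: ip_candidates maps tcp to the variable name NVMF_INITIATOR_IP, and the indirection resolved by the echo 10.0.0.1 lines yields the initiator address used in every attach above.
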
00:27:11.603 16:08:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:11.603 16:08:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:11.603 16:08:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:11.603 16:08:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:11.603 16:08:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:27:11.603 16:08:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:11.603 16:08:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:11.603 16:08:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:11.603 16:08:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:11.603 16:08:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:11.603 16:08:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:11.603 16:08:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:11.603 16:08:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:11.603 16:08:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:11.603 16:08:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:11.603 16:08:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:11.603 16:08:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:11.603 16:08:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:11.603 16:08:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:11.603 16:08:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:11.603 16:08:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:11.603 16:08:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:12.167 nvme0n1 00:27:12.167 16:08:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:12.167 16:08:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:12.167 16:08:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:12.167 16:08:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:12.167 16:08:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:12.167 16:08:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:12.167 16:08:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:12.167 16:08:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:12.167 16:08:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:12.167 16:08:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:12.167 16:08:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:12.167 16:08:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:12.167 16:08:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe8192 1 00:27:12.167 16:08:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:12.167 16:08:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:12.167 16:08:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:12.167 16:08:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:12.167 16:08:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzY2YTdmYTcyMzBlYmY3NDM1MGZlZmM5OTNmODI3MjQxNmE0MjJmMzY0YjY0YWM5bmI3Ng==: 00:27:12.167 16:08:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YmNlMTdiZjRmNTU1YzM4ZTY3OTFlNTExYzc4MjE2MWI0ZDI2ZmUyODc4OTcwMjBkWICT/g==: 00:27:12.167 16:08:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:12.167 16:08:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:12.167 16:08:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzY2YTdmYTcyMzBlYmY3NDM1MGZlZmM5OTNmODI3MjQxNmE0MjJmMzY0YjY0YWM5bmI3Ng==: 00:27:12.167 16:08:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YmNlMTdiZjRmNTU1YzM4ZTY3OTFlNTExYzc4MjE2MWI0ZDI2ZmUyODc4OTcwMjBkWICT/g==: ]] 00:27:12.167 16:08:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YmNlMTdiZjRmNTU1YzM4ZTY3OTFlNTExYzc4MjE2MWI0ZDI2ZmUyODc4OTcwMjBkWICT/g==: 00:27:12.167 16:08:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:27:12.167 16:08:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:12.167 16:08:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:12.167 16:08:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:12.167 16:08:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:12.167 16:08:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:12.167 16:08:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:27:12.167 16:08:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:12.167 16:08:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:12.167 16:08:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:12.167 16:08:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:12.167 16:08:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:12.167 16:08:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:12.167 16:08:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:12.167 16:08:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:12.167 16:08:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:12.167 16:08:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:12.167 16:08:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:12.167 16:08:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:12.167 16:08:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:12.167 16:08:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:12.167 16:08:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t 
tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:12.167 16:08:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:12.167 16:08:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:12.732 nvme0n1 00:27:12.732 16:08:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:12.732 16:08:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:12.732 16:08:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:12.732 16:08:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:12.732 16:08:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:12.732 16:08:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:12.732 16:08:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:12.732 16:08:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:12.732 16:08:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:12.732 16:08:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:12.732 16:08:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:12.732 16:08:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:12.732 16:08:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:27:12.732 16:08:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:12.732 16:08:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:12.732 16:08:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:12.732 16:08:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:12.732 16:08:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MTA4MTgyMmZmYWRmYTc5ZjYyOWI2NjQyOTU0ZGQzZDG/LO1A: 00:27:12.732 16:08:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NDNkZmQ3NGE5ZjVjZDIwZGRjODZjZWQ4NjUwYTI3NDBSxEQU: 00:27:12.732 16:08:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:12.732 16:08:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:12.732 16:08:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MTA4MTgyMmZmYWRmYTc5ZjYyOWI2NjQyOTU0ZGQzZDG/LO1A: 00:27:12.732 16:08:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NDNkZmQ3NGE5ZjVjZDIwZGRjODZjZWQ4NjUwYTI3NDBSxEQU: ]] 00:27:12.732 16:08:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NDNkZmQ3NGE5ZjVjZDIwZGRjODZjZWQ4NjUwYTI3NDBSxEQU: 00:27:12.732 16:08:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:27:12.732 16:08:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:12.732 16:08:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:12.732 16:08:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:12.732 16:08:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:12.732 16:08:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:12.732 16:08:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 
--dhchap-dhgroups ffdhe8192 00:27:12.732 16:08:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:12.732 16:08:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:12.732 16:08:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:12.732 16:08:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:12.732 16:08:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:12.732 16:08:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:12.732 16:08:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:12.732 16:08:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:12.732 16:08:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:12.732 16:08:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:12.732 16:08:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:12.732 16:08:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:12.732 16:08:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:12.732 16:08:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:12.732 16:08:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:12.732 16:08:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:12.732 16:08:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:13.296 nvme0n1 00:27:13.296 16:08:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:13.296 16:08:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:13.296 16:08:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:13.296 16:08:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:13.296 16:08:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:13.296 16:08:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:13.554 16:08:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:13.554 16:08:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:13.554 16:08:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:13.554 16:08:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:13.554 16:08:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:13.554 16:08:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:13.554 16:08:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:27:13.554 16:08:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:13.554 16:08:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:13.554 16:08:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:13.554 16:08:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:13.554 16:08:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:ZTdhMjdjNDJmNzgxZTI3NzE0MzNjODllZjQ5YzY5MmZhMjQ3MGViOTAwYmQzN2Rmae8/mw==: 00:27:13.554 16:08:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NzNiZGMwNDU3YzUyNzVkNzkzNzQ4YjY4YTkxMTljMzlKbnty: 00:27:13.554 16:08:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:13.554 16:08:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:13.554 16:08:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZTdhMjdjNDJmNzgxZTI3NzE0MzNjODllZjQ5YzY5MmZhMjQ3MGViOTAwYmQzN2Rmae8/mw==: 00:27:13.554 16:08:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NzNiZGMwNDU3YzUyNzVkNzkzNzQ4YjY4YTkxMTljMzlKbnty: ]] 00:27:13.554 16:08:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NzNiZGMwNDU3YzUyNzVkNzkzNzQ4YjY4YTkxMTljMzlKbnty: 00:27:13.554 16:08:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:27:13.554 16:08:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:13.554 16:08:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:13.554 16:08:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:13.554 16:08:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:13.554 16:08:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:13.554 16:08:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:27:13.554 16:08:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:13.554 16:08:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:13.554 16:08:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:13.554 16:08:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:13.554 16:08:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:13.554 16:08:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:13.554 16:08:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:13.554 16:08:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:13.554 16:08:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:13.554 16:08:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:13.554 16:08:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:13.554 16:08:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:13.554 16:08:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:13.554 16:08:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:13.554 16:08:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:13.554 16:08:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:13.554 16:08:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:14.118 nvme0n1 00:27:14.118 16:08:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:14.118 16:08:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd 
bdev_nvme_get_controllers 00:27:14.118 16:08:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:14.118 16:08:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:14.118 16:08:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:14.118 16:08:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:14.118 16:08:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:14.118 16:08:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:14.118 16:08:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:14.118 16:08:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:14.118 16:08:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:14.118 16:08:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:14.118 16:08:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:27:14.118 16:08:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:14.118 16:08:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:14.118 16:08:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:14.118 16:08:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:14.118 16:08:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MDM1ZjhlMTI4Y2MyMDkzODk1OTRkYTMwMGY3M2Q2NmM0OWQ4Y2M5NTdmNGZkZjYzYTVhMjVmNmEwZGYzZGI4YSJezX8=: 00:27:14.118 16:08:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:14.118 16:08:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:14.118 16:08:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:14.118 16:08:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MDM1ZjhlMTI4Y2MyMDkzODk1OTRkYTMwMGY3M2Q2NmM0OWQ4Y2M5NTdmNGZkZjYzYTVhMjVmNmEwZGYzZGI4YSJezX8=: 00:27:14.118 16:08:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:14.118 16:08:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:27:14.118 16:08:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:14.118 16:08:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:14.118 16:08:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:14.118 16:08:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:14.118 16:08:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:14.119 16:08:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:27:14.119 16:08:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:14.119 16:08:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:14.119 16:08:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:14.119 16:08:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:14.119 16:08:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:14.119 16:08:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:14.119 16:08:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:14.119 16:08:42 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:14.119 16:08:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:14.119 16:08:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:14.119 16:08:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:14.119 16:08:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:14.119 16:08:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:14.119 16:08:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:14.119 16:08:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:14.119 16:08:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:14.119 16:08:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:14.683 nvme0n1 00:27:14.683 16:08:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:14.683 16:08:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:14.683 16:08:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:14.683 16:08:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:14.683 16:08:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:14.683 16:08:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:14.683 16:08:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:14.683 16:08:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:14.683 16:08:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:14.683 16:08:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:14.683 16:08:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:14.683 16:08:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:27:14.683 16:08:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:14.683 16:08:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:14.683 16:08:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:27:14.683 16:08:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:14.683 16:08:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:14.683 16:08:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:14.683 16:08:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:14.683 16:08:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MWVhOTM0NTE0MDFmNTNjMWZiNmVlY2IwMDdkNmU2M2QOY6Ga: 00:27:14.683 16:08:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OGU3NzkyYzU4ZmE1MGI4MjRkYTM3MDAzMzU4OTQ2MzgzZjA1MzU4MjQwMTdmZDRlYmMyNjc5MzUzOTI5MGYzMTobRkk=: 00:27:14.683 16:08:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:14.683 16:08:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:14.683 16:08:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
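DHHC-1:00:MWVhOTM0NTE0MDFmNTNjMWZiNmVlY2IwMDdkNmU2M2QOY6Ga:

Here the sha384 sweep is complete: the for digest in "${digests[@]}" line above shows the outer loop advancing to sha512, with the DH-group sweep restarting at ffdhe2048. The secrets themselves never change between rounds; they are the same DHHC-1 strings seen throughout the trace, in the DHHC-1:<hh>:<base64>: layout used by nvme-cli and the kernel, where <hh> records how the secret was transformed (00 = stored as-is, 01/02/03 = HMAC-SHA-256/384/512) and the base64 payload is the secret followed by a 4-byte CRC-32. A small self-contained check of the key echoed just above (only the key string is taken from the trace; the rest is plain bash plus coreutils):

    # Pull apart one DHHC-1 secret from this run (a sketch; nvme-cli's
    # gen-dhchap-key is the tool that produces strings in this layout).
    key='DHHC-1:00:MWVhOTM0NTE0MDFmNTNjMWZiNmVlY2IwMDdkNmU2M2QOY6Ga:'
    IFS=: read -r tag hh b64 _ <<< "$key"
    echo "$tag"                           # DHHC-1
    echo "$hh"                            # 00 -> secret not HMAC-transformed
    echo -n "$b64" | base64 -d | wc -c    # 36 = 32-byte secret + 4-byte CRC-32

The sha512 rounds that follow pair these same keys with hmac(sha512) on the target side and --dhchap-digests sha512 on the host side, exactly as the ffdhe2048 entries below show.
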
DHHC-1:00:MWVhOTM0NTE0MDFmNTNjMWZiNmVlY2IwMDdkNmU2M2QOY6Ga: 00:27:14.683 16:08:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OGU3NzkyYzU4ZmE1MGI4MjRkYTM3MDAzMzU4OTQ2MzgzZjA1MzU4MjQwMTdmZDRlYmMyNjc5MzUzOTI5MGYzMTobRkk=: ]] 00:27:14.683 16:08:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OGU3NzkyYzU4ZmE1MGI4MjRkYTM3MDAzMzU4OTQ2MzgzZjA1MzU4MjQwMTdmZDRlYmMyNjc5MzUzOTI5MGYzMTobRkk=: 00:27:14.683 16:08:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:27:14.683 16:08:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:14.683 16:08:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:14.683 16:08:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:14.683 16:08:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:14.683 16:08:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:14.683 16:08:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:27:14.683 16:08:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:14.683 16:08:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:14.683 16:08:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:14.683 16:08:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:14.683 16:08:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:14.683 16:08:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:14.683 16:08:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:14.683 16:08:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:14.683 16:08:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:14.683 16:08:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:14.683 16:08:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:14.683 16:08:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:14.683 16:08:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:14.683 16:08:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:14.683 16:08:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:14.683 16:08:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:14.683 16:08:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:14.941 nvme0n1 00:27:14.941 16:08:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:14.941 16:08:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:14.941 16:08:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:14.941 16:08:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:14.941 16:08:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:14.941 16:08:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:14.941 16:08:43 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:14.941 16:08:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:14.941 16:08:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:14.941 16:08:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:14.941 16:08:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:14.941 16:08:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:14.941 16:08:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:27:14.941 16:08:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:14.941 16:08:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:14.941 16:08:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:14.941 16:08:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:14.941 16:08:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzY2YTdmYTcyMzBlYmY3NDM1MGZlZmM5OTNmODI3MjQxNmE0MjJmMzY0YjY0YWM5bmI3Ng==: 00:27:14.941 16:08:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YmNlMTdiZjRmNTU1YzM4ZTY3OTFlNTExYzc4MjE2MWI0ZDI2ZmUyODc4OTcwMjBkWICT/g==: 00:27:14.941 16:08:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:14.941 16:08:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:14.941 16:08:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzY2YTdmYTcyMzBlYmY3NDM1MGZlZmM5OTNmODI3MjQxNmE0MjJmMzY0YjY0YWM5bmI3Ng==: 00:27:14.941 16:08:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YmNlMTdiZjRmNTU1YzM4ZTY3OTFlNTExYzc4MjE2MWI0ZDI2ZmUyODc4OTcwMjBkWICT/g==: ]] 00:27:14.941 16:08:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YmNlMTdiZjRmNTU1YzM4ZTY3OTFlNTExYzc4MjE2MWI0ZDI2ZmUyODc4OTcwMjBkWICT/g==: 00:27:14.941 16:08:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:27:14.941 16:08:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:14.941 16:08:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:14.941 16:08:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:14.941 16:08:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:14.941 16:08:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:14.941 16:08:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:27:14.941 16:08:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:14.941 16:08:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:14.941 16:08:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:14.941 16:08:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:14.941 16:08:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:14.941 16:08:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:14.941 16:08:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:14.941 16:08:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:14.941 16:08:43 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:14.941 16:08:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:14.941 16:08:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:14.941 16:08:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:14.941 16:08:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:14.941 16:08:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:14.941 16:08:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:14.941 16:08:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:14.941 16:08:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:15.198 nvme0n1 00:27:15.198 16:08:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:15.198 16:08:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:15.198 16:08:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:15.198 16:08:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:15.198 16:08:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:15.198 16:08:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:15.198 16:08:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:15.198 16:08:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:15.198 16:08:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:15.198 16:08:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:15.198 16:08:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:15.198 16:08:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:15.198 16:08:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:27:15.198 16:08:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:15.198 16:08:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:15.198 16:08:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:15.198 16:08:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:15.198 16:08:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MTA4MTgyMmZmYWRmYTc5ZjYyOWI2NjQyOTU0ZGQzZDG/LO1A: 00:27:15.198 16:08:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NDNkZmQ3NGE5ZjVjZDIwZGRjODZjZWQ4NjUwYTI3NDBSxEQU: 00:27:15.198 16:08:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:15.198 16:08:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:15.198 16:08:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MTA4MTgyMmZmYWRmYTc5ZjYyOWI2NjQyOTU0ZGQzZDG/LO1A: 00:27:15.198 16:08:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NDNkZmQ3NGE5ZjVjZDIwZGRjODZjZWQ4NjUwYTI3NDBSxEQU: ]] 00:27:15.198 16:08:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NDNkZmQ3NGE5ZjVjZDIwZGRjODZjZWQ4NjUwYTI3NDBSxEQU: 00:27:15.198 16:08:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # 
connect_authenticate sha512 ffdhe2048 2 00:27:15.198 16:08:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:15.198 16:08:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:15.198 16:08:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:15.198 16:08:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:15.198 16:08:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:15.198 16:08:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:27:15.198 16:08:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:15.198 16:08:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:15.198 16:08:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:15.198 16:08:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:15.198 16:08:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:15.198 16:08:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:15.198 16:08:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:15.198 16:08:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:15.198 16:08:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:15.198 16:08:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:15.198 16:08:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:15.198 16:08:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:15.198 16:08:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:15.198 16:08:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:15.198 16:08:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:15.198 16:08:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:15.198 16:08:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:15.455 nvme0n1 00:27:15.455 16:08:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:15.455 16:08:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:15.455 16:08:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:15.455 16:08:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:15.455 16:08:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:15.455 16:08:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:15.455 16:08:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:15.455 16:08:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:15.455 16:08:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:15.455 16:08:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:15.455 16:08:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:15.455 16:08:44 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:15.455 16:08:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:27:15.455 16:08:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:15.455 16:08:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:15.455 16:08:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:15.455 16:08:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:15.455 16:08:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZTdhMjdjNDJmNzgxZTI3NzE0MzNjODllZjQ5YzY5MmZhMjQ3MGViOTAwYmQzN2Rmae8/mw==: 00:27:15.455 16:08:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NzNiZGMwNDU3YzUyNzVkNzkzNzQ4YjY4YTkxMTljMzlKbnty: 00:27:15.455 16:08:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:15.455 16:08:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:15.455 16:08:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZTdhMjdjNDJmNzgxZTI3NzE0MzNjODllZjQ5YzY5MmZhMjQ3MGViOTAwYmQzN2Rmae8/mw==: 00:27:15.455 16:08:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NzNiZGMwNDU3YzUyNzVkNzkzNzQ4YjY4YTkxMTljMzlKbnty: ]] 00:27:15.455 16:08:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NzNiZGMwNDU3YzUyNzVkNzkzNzQ4YjY4YTkxMTljMzlKbnty: 00:27:15.455 16:08:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:27:15.455 16:08:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:15.455 16:08:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:15.455 16:08:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:15.455 16:08:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:15.455 16:08:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:15.455 16:08:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:27:15.455 16:08:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:15.455 16:08:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:15.455 16:08:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:15.455 16:08:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:15.455 16:08:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:15.455 16:08:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:15.455 16:08:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:15.455 16:08:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:15.455 16:08:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:15.455 16:08:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:15.455 16:08:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:15.455 16:08:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:15.455 16:08:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:15.455 16:08:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:15.456 16:08:44 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:15.456 16:08:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:15.456 16:08:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:15.712 nvme0n1 00:27:15.712 16:08:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:15.712 16:08:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:15.712 16:08:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:15.712 16:08:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:15.712 16:08:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:15.712 16:08:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:15.712 16:08:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:15.712 16:08:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:15.712 16:08:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:15.712 16:08:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:15.712 16:08:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:15.712 16:08:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:15.712 16:08:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:27:15.712 16:08:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:15.712 16:08:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:15.712 16:08:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:15.712 16:08:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:15.712 16:08:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MDM1ZjhlMTI4Y2MyMDkzODk1OTRkYTMwMGY3M2Q2NmM0OWQ4Y2M5NTdmNGZkZjYzYTVhMjVmNmEwZGYzZGI4YSJezX8=: 00:27:15.712 16:08:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:15.712 16:08:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:15.712 16:08:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:15.712 16:08:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MDM1ZjhlMTI4Y2MyMDkzODk1OTRkYTMwMGY3M2Q2NmM0OWQ4Y2M5NTdmNGZkZjYzYTVhMjVmNmEwZGYzZGI4YSJezX8=: 00:27:15.712 16:08:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:15.712 16:08:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:27:15.712 16:08:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:15.712 16:08:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:15.712 16:08:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:15.712 16:08:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:15.712 16:08:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:15.712 16:08:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:27:15.712 16:08:44 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:27:15.712 16:08:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:15.712 16:08:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:15.712 16:08:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:15.712 16:08:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:15.712 16:08:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:15.712 16:08:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:15.712 16:08:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:15.712 16:08:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:15.712 16:08:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:15.712 16:08:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:15.712 16:08:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:15.712 16:08:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:15.712 16:08:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:15.712 16:08:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:15.712 16:08:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:15.712 16:08:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:15.969 nvme0n1 00:27:15.969 16:08:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:15.969 16:08:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:15.969 16:08:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:15.969 16:08:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:15.969 16:08:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:15.969 16:08:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:15.969 16:08:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:15.969 16:08:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:15.969 16:08:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:15.969 16:08:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:15.969 16:08:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:15.969 16:08:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:15.969 16:08:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:15.969 16:08:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:27:15.969 16:08:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:15.969 16:08:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:15.969 16:08:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:15.969 16:08:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:15.969 16:08:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:MWVhOTM0NTE0MDFmNTNjMWZiNmVlY2IwMDdkNmU2M2QOY6Ga: 00:27:15.969 16:08:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OGU3NzkyYzU4ZmE1MGI4MjRkYTM3MDAzMzU4OTQ2MzgzZjA1MzU4MjQwMTdmZDRlYmMyNjc5MzUzOTI5MGYzMTobRkk=: 00:27:15.969 16:08:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:15.969 16:08:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:15.969 16:08:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MWVhOTM0NTE0MDFmNTNjMWZiNmVlY2IwMDdkNmU2M2QOY6Ga: 00:27:15.969 16:08:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OGU3NzkyYzU4ZmE1MGI4MjRkYTM3MDAzMzU4OTQ2MzgzZjA1MzU4MjQwMTdmZDRlYmMyNjc5MzUzOTI5MGYzMTobRkk=: ]] 00:27:15.970 16:08:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OGU3NzkyYzU4ZmE1MGI4MjRkYTM3MDAzMzU4OTQ2MzgzZjA1MzU4MjQwMTdmZDRlYmMyNjc5MzUzOTI5MGYzMTobRkk=: 00:27:15.970 16:08:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:27:15.970 16:08:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:15.970 16:08:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:15.970 16:08:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:15.970 16:08:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:15.970 16:08:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:15.970 16:08:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:27:15.970 16:08:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:15.970 16:08:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:15.970 16:08:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:15.970 16:08:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:15.970 16:08:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:15.970 16:08:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:15.970 16:08:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:15.970 16:08:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:15.970 16:08:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:15.970 16:08:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:15.970 16:08:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:15.970 16:08:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:15.970 16:08:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:15.970 16:08:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:15.970 16:08:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:15.970 16:08:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:15.970 16:08:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:16.226 nvme0n1 00:27:16.226 16:08:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:16.226 
16:08:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:16.226 16:08:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:16.226 16:08:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:16.226 16:08:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:16.226 16:08:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:16.226 16:08:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:16.226 16:08:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:16.226 16:08:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:16.226 16:08:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:16.226 16:08:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:16.226 16:08:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:16.226 16:08:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:27:16.226 16:08:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:16.226 16:08:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:16.226 16:08:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:16.226 16:08:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:16.226 16:08:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzY2YTdmYTcyMzBlYmY3NDM1MGZlZmM5OTNmODI3MjQxNmE0MjJmMzY0YjY0YWM5bmI3Ng==: 00:27:16.226 16:08:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YmNlMTdiZjRmNTU1YzM4ZTY3OTFlNTExYzc4MjE2MWI0ZDI2ZmUyODc4OTcwMjBkWICT/g==: 00:27:16.226 16:08:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:16.226 16:08:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:16.226 16:08:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzY2YTdmYTcyMzBlYmY3NDM1MGZlZmM5OTNmODI3MjQxNmE0MjJmMzY0YjY0YWM5bmI3Ng==: 00:27:16.226 16:08:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YmNlMTdiZjRmNTU1YzM4ZTY3OTFlNTExYzc4MjE2MWI0ZDI2ZmUyODc4OTcwMjBkWICT/g==: ]] 00:27:16.226 16:08:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YmNlMTdiZjRmNTU1YzM4ZTY3OTFlNTExYzc4MjE2MWI0ZDI2ZmUyODc4OTcwMjBkWICT/g==: 00:27:16.226 16:08:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:27:16.226 16:08:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:16.226 16:08:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:16.226 16:08:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:16.226 16:08:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:16.226 16:08:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:16.226 16:08:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:27:16.226 16:08:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:16.226 16:08:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:16.226 16:08:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:16.226 16:08:44 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:16.226 16:08:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:16.226 16:08:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:16.226 16:08:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:16.226 16:08:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:16.226 16:08:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:16.226 16:08:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:16.226 16:08:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:16.226 16:08:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:16.226 16:08:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:16.226 16:08:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:16.226 16:08:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:16.226 16:08:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:16.226 16:08:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:16.226 nvme0n1 00:27:16.226 16:08:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:16.483 16:08:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:16.483 16:08:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:16.483 16:08:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:16.483 16:08:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:16.483 16:08:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:16.483 16:08:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:16.483 16:08:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:16.483 16:08:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:16.483 16:08:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:16.483 16:08:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:16.483 16:08:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:16.483 16:08:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:27:16.483 16:08:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:16.483 16:08:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:16.483 16:08:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:16.483 16:08:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:16.483 16:08:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MTA4MTgyMmZmYWRmYTc5ZjYyOWI2NjQyOTU0ZGQzZDG/LO1A: 00:27:16.483 16:08:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NDNkZmQ3NGE5ZjVjZDIwZGRjODZjZWQ4NjUwYTI3NDBSxEQU: 00:27:16.483 16:08:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:16.483 16:08:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 
00:27:16.483 16:08:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MTA4MTgyMmZmYWRmYTc5ZjYyOWI2NjQyOTU0ZGQzZDG/LO1A: 00:27:16.484 16:08:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NDNkZmQ3NGE5ZjVjZDIwZGRjODZjZWQ4NjUwYTI3NDBSxEQU: ]] 00:27:16.484 16:08:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NDNkZmQ3NGE5ZjVjZDIwZGRjODZjZWQ4NjUwYTI3NDBSxEQU: 00:27:16.484 16:08:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:27:16.484 16:08:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:16.484 16:08:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:16.484 16:08:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:16.484 16:08:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:16.484 16:08:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:16.484 16:08:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:27:16.484 16:08:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:16.484 16:08:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:16.484 16:08:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:16.484 16:08:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:16.484 16:08:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:16.484 16:08:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:16.484 16:08:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:16.484 16:08:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:16.484 16:08:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:16.484 16:08:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:16.484 16:08:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:16.484 16:08:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:16.484 16:08:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:16.484 16:08:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:16.484 16:08:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:16.484 16:08:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:16.484 16:08:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:16.484 nvme0n1 00:27:16.484 16:08:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:16.484 16:08:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:16.484 16:08:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:16.484 16:08:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:16.484 16:08:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:16.484 16:08:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:16.741 16:08:45 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:16.741 16:08:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:16.741 16:08:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:16.741 16:08:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:16.741 16:08:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:16.741 16:08:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:16.741 16:08:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:27:16.741 16:08:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:16.741 16:08:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:16.741 16:08:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:16.741 16:08:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:16.741 16:08:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZTdhMjdjNDJmNzgxZTI3NzE0MzNjODllZjQ5YzY5MmZhMjQ3MGViOTAwYmQzN2Rmae8/mw==: 00:27:16.741 16:08:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NzNiZGMwNDU3YzUyNzVkNzkzNzQ4YjY4YTkxMTljMzlKbnty: 00:27:16.741 16:08:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:16.741 16:08:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:16.741 16:08:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZTdhMjdjNDJmNzgxZTI3NzE0MzNjODllZjQ5YzY5MmZhMjQ3MGViOTAwYmQzN2Rmae8/mw==: 00:27:16.741 16:08:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NzNiZGMwNDU3YzUyNzVkNzkzNzQ4YjY4YTkxMTljMzlKbnty: ]] 00:27:16.741 16:08:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NzNiZGMwNDU3YzUyNzVkNzkzNzQ4YjY4YTkxMTljMzlKbnty: 00:27:16.741 16:08:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:27:16.741 16:08:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:16.741 16:08:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:16.741 16:08:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:16.741 16:08:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:16.741 16:08:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:16.741 16:08:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:27:16.741 16:08:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:16.741 16:08:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:16.741 16:08:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:16.741 16:08:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:16.741 16:08:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:16.741 16:08:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:16.741 16:08:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:16.741 16:08:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:16.741 16:08:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 
00:27:16.741 16:08:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:16.741 16:08:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:16.741 16:08:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:16.741 16:08:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:16.741 16:08:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:16.741 16:08:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:16.741 16:08:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:16.741 16:08:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:16.741 nvme0n1 00:27:16.741 16:08:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:16.741 16:08:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:16.741 16:08:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:16.741 16:08:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:16.741 16:08:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:16.741 16:08:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:16.741 16:08:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:16.741 16:08:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:16.741 16:08:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:16.741 16:08:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:16.999 16:08:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:16.999 16:08:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:16.999 16:08:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:27:16.999 16:08:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:16.999 16:08:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:16.999 16:08:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:16.999 16:08:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:16.999 16:08:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MDM1ZjhlMTI4Y2MyMDkzODk1OTRkYTMwMGY3M2Q2NmM0OWQ4Y2M5NTdmNGZkZjYzYTVhMjVmNmEwZGYzZGI4YSJezX8=: 00:27:16.999 16:08:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:16.999 16:08:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:16.999 16:08:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:16.999 16:08:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MDM1ZjhlMTI4Y2MyMDkzODk1OTRkYTMwMGY3M2Q2NmM0OWQ4Y2M5NTdmNGZkZjYzYTVhMjVmNmEwZGYzZGI4YSJezX8=: 00:27:16.999 16:08:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:16.999 16:08:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:27:16.999 16:08:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:16.999 16:08:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:16.999 
16:08:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:16.999 16:08:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:16.999 16:08:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:16.999 16:08:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:27:16.999 16:08:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:16.999 16:08:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:16.999 16:08:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:16.999 16:08:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:16.999 16:08:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:16.999 16:08:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:16.999 16:08:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:16.999 16:08:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:16.999 16:08:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:16.999 16:08:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:16.999 16:08:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:16.999 16:08:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:16.999 16:08:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:16.999 16:08:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:16.999 16:08:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:16.999 16:08:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:16.999 16:08:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:16.999 nvme0n1 00:27:16.999 16:08:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:16.999 16:08:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:16.999 16:08:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:16.999 16:08:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:16.999 16:08:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:16.999 16:08:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:16.999 16:08:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:16.999 16:08:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:16.999 16:08:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:16.999 16:08:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:17.258 16:08:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:17.258 16:08:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:17.258 16:08:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:17.258 16:08:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key 
sha512 ffdhe4096 0 00:27:17.258 16:08:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:17.258 16:08:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:17.258 16:08:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:17.258 16:08:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:17.258 16:08:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MWVhOTM0NTE0MDFmNTNjMWZiNmVlY2IwMDdkNmU2M2QOY6Ga: 00:27:17.258 16:08:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OGU3NzkyYzU4ZmE1MGI4MjRkYTM3MDAzMzU4OTQ2MzgzZjA1MzU4MjQwMTdmZDRlYmMyNjc5MzUzOTI5MGYzMTobRkk=: 00:27:17.258 16:08:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:17.258 16:08:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:17.258 16:08:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MWVhOTM0NTE0MDFmNTNjMWZiNmVlY2IwMDdkNmU2M2QOY6Ga: 00:27:17.258 16:08:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OGU3NzkyYzU4ZmE1MGI4MjRkYTM3MDAzMzU4OTQ2MzgzZjA1MzU4MjQwMTdmZDRlYmMyNjc5MzUzOTI5MGYzMTobRkk=: ]] 00:27:17.258 16:08:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OGU3NzkyYzU4ZmE1MGI4MjRkYTM3MDAzMzU4OTQ2MzgzZjA1MzU4MjQwMTdmZDRlYmMyNjc5MzUzOTI5MGYzMTobRkk=: 00:27:17.258 16:08:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:27:17.258 16:08:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:17.258 16:08:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:17.258 16:08:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:17.258 16:08:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:17.258 16:08:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:17.258 16:08:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:27:17.258 16:08:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:17.258 16:08:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:17.258 16:08:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:17.258 16:08:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:17.258 16:08:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:17.258 16:08:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:17.258 16:08:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:17.258 16:08:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:17.258 16:08:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:17.258 16:08:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:17.258 16:08:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:17.258 16:08:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:17.258 16:08:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:17.258 16:08:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:17.258 16:08:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f 
ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:17.258 16:08:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:17.258 16:08:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:17.517 nvme0n1 00:27:17.517 16:08:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:17.517 16:08:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:17.517 16:08:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:17.517 16:08:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:17.517 16:08:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:17.517 16:08:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:17.517 16:08:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:17.517 16:08:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:17.517 16:08:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:17.517 16:08:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:17.517 16:08:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:17.517 16:08:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:17.517 16:08:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:27:17.517 16:08:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:17.517 16:08:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:17.517 16:08:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:17.517 16:08:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:17.517 16:08:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzY2YTdmYTcyMzBlYmY3NDM1MGZlZmM5OTNmODI3MjQxNmE0MjJmMzY0YjY0YWM5bmI3Ng==: 00:27:17.517 16:08:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YmNlMTdiZjRmNTU1YzM4ZTY3OTFlNTExYzc4MjE2MWI0ZDI2ZmUyODc4OTcwMjBkWICT/g==: 00:27:17.517 16:08:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:17.517 16:08:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:17.517 16:08:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzY2YTdmYTcyMzBlYmY3NDM1MGZlZmM5OTNmODI3MjQxNmE0MjJmMzY0YjY0YWM5bmI3Ng==: 00:27:17.517 16:08:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YmNlMTdiZjRmNTU1YzM4ZTY3OTFlNTExYzc4MjE2MWI0ZDI2ZmUyODc4OTcwMjBkWICT/g==: ]] 00:27:17.517 16:08:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YmNlMTdiZjRmNTU1YzM4ZTY3OTFlNTExYzc4MjE2MWI0ZDI2ZmUyODc4OTcwMjBkWICT/g==: 00:27:17.517 16:08:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:27:17.517 16:08:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:17.517 16:08:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:17.517 16:08:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:17.517 16:08:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:17.517 16:08:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:17.517 16:08:46 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:27:17.517 16:08:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:17.517 16:08:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:17.517 16:08:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:17.517 16:08:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:17.517 16:08:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:17.517 16:08:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:17.517 16:08:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:17.517 16:08:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:17.517 16:08:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:17.517 16:08:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:17.517 16:08:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:17.517 16:08:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:17.517 16:08:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:17.517 16:08:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:17.517 16:08:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:17.517 16:08:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:17.517 16:08:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:17.774 nvme0n1 00:27:17.774 16:08:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:17.774 16:08:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:17.774 16:08:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:17.774 16:08:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:17.774 16:08:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:17.774 16:08:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:17.774 16:08:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:17.774 16:08:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:17.774 16:08:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:17.774 16:08:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:17.774 16:08:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:17.774 16:08:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:17.774 16:08:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:27:17.774 16:08:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:17.774 16:08:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:17.774 16:08:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:17.774 16:08:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 
00:27:17.774 16:08:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MTA4MTgyMmZmYWRmYTc5ZjYyOWI2NjQyOTU0ZGQzZDG/LO1A: 00:27:17.774 16:08:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NDNkZmQ3NGE5ZjVjZDIwZGRjODZjZWQ4NjUwYTI3NDBSxEQU: 00:27:17.774 16:08:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:17.774 16:08:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:17.774 16:08:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MTA4MTgyMmZmYWRmYTc5ZjYyOWI2NjQyOTU0ZGQzZDG/LO1A: 00:27:17.774 16:08:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NDNkZmQ3NGE5ZjVjZDIwZGRjODZjZWQ4NjUwYTI3NDBSxEQU: ]] 00:27:17.774 16:08:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NDNkZmQ3NGE5ZjVjZDIwZGRjODZjZWQ4NjUwYTI3NDBSxEQU: 00:27:17.774 16:08:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:27:17.774 16:08:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:17.774 16:08:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:17.774 16:08:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:17.774 16:08:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:17.774 16:08:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:17.774 16:08:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:27:17.774 16:08:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:17.774 16:08:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:17.774 16:08:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:17.774 16:08:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:17.774 16:08:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:17.774 16:08:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:17.774 16:08:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:17.774 16:08:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:17.774 16:08:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:17.774 16:08:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:17.774 16:08:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:17.774 16:08:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:17.774 16:08:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:17.774 16:08:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:17.774 16:08:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:17.774 16:08:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:17.774 16:08:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:18.079 nvme0n1 00:27:18.079 16:08:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:18.079 16:08:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # 
rpc_cmd bdev_nvme_get_controllers 00:27:18.079 16:08:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:18.079 16:08:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:18.079 16:08:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:18.079 16:08:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:18.079 16:08:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:18.079 16:08:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:18.079 16:08:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:18.079 16:08:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:18.079 16:08:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:18.079 16:08:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:18.079 16:08:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:27:18.079 16:08:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:18.079 16:08:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:18.079 16:08:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:18.079 16:08:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:18.079 16:08:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZTdhMjdjNDJmNzgxZTI3NzE0MzNjODllZjQ5YzY5MmZhMjQ3MGViOTAwYmQzN2Rmae8/mw==: 00:27:18.079 16:08:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NzNiZGMwNDU3YzUyNzVkNzkzNzQ4YjY4YTkxMTljMzlKbnty: 00:27:18.079 16:08:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:18.079 16:08:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:18.079 16:08:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZTdhMjdjNDJmNzgxZTI3NzE0MzNjODllZjQ5YzY5MmZhMjQ3MGViOTAwYmQzN2Rmae8/mw==: 00:27:18.079 16:08:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NzNiZGMwNDU3YzUyNzVkNzkzNzQ4YjY4YTkxMTljMzlKbnty: ]] 00:27:18.079 16:08:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NzNiZGMwNDU3YzUyNzVkNzkzNzQ4YjY4YTkxMTljMzlKbnty: 00:27:18.079 16:08:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:27:18.079 16:08:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:18.079 16:08:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:18.079 16:08:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:18.079 16:08:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:18.079 16:08:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:18.079 16:08:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:27:18.079 16:08:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:18.079 16:08:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:18.079 16:08:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:18.079 16:08:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:18.079 16:08:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # 
local ip 00:27:18.079 16:08:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:18.079 16:08:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:18.079 16:08:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:18.079 16:08:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:18.079 16:08:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:18.079 16:08:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:18.079 16:08:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:18.079 16:08:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:18.079 16:08:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:18.079 16:08:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:18.079 16:08:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:18.079 16:08:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:18.337 nvme0n1 00:27:18.337 16:08:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:18.337 16:08:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:18.337 16:08:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:18.337 16:08:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:18.337 16:08:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:18.337 16:08:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:18.337 16:08:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:18.337 16:08:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:18.337 16:08:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:18.337 16:08:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:18.337 16:08:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:18.337 16:08:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:18.337 16:08:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:27:18.337 16:08:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:18.337 16:08:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:18.337 16:08:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:18.338 16:08:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:18.338 16:08:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MDM1ZjhlMTI4Y2MyMDkzODk1OTRkYTMwMGY3M2Q2NmM0OWQ4Y2M5NTdmNGZkZjYzYTVhMjVmNmEwZGYzZGI4YSJezX8=: 00:27:18.338 16:08:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:18.338 16:08:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:18.338 16:08:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:18.338 16:08:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:MDM1ZjhlMTI4Y2MyMDkzODk1OTRkYTMwMGY3M2Q2NmM0OWQ4Y2M5NTdmNGZkZjYzYTVhMjVmNmEwZGYzZGI4YSJezX8=: 00:27:18.338 16:08:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:18.338 16:08:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:27:18.338 16:08:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:18.338 16:08:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:18.338 16:08:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:18.338 16:08:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:18.338 16:08:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:18.338 16:08:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:27:18.338 16:08:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:18.338 16:08:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:18.338 16:08:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:18.338 16:08:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:18.338 16:08:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:18.338 16:08:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:18.338 16:08:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:18.338 16:08:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:18.338 16:08:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:18.338 16:08:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:18.338 16:08:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:18.338 16:08:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:18.338 16:08:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:18.338 16:08:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:18.338 16:08:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:18.338 16:08:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:18.338 16:08:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:18.596 nvme0n1 00:27:18.596 16:08:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:18.596 16:08:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:18.596 16:08:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:18.596 16:08:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:18.596 16:08:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:18.596 16:08:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:18.596 16:08:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:18.596 16:08:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:18.596 16:08:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- 
# xtrace_disable 00:27:18.596 16:08:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:18.853 16:08:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:18.853 16:08:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:18.853 16:08:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:18.853 16:08:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:27:18.853 16:08:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:18.853 16:08:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:18.853 16:08:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:18.853 16:08:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:18.853 16:08:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MWVhOTM0NTE0MDFmNTNjMWZiNmVlY2IwMDdkNmU2M2QOY6Ga: 00:27:18.853 16:08:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OGU3NzkyYzU4ZmE1MGI4MjRkYTM3MDAzMzU4OTQ2MzgzZjA1MzU4MjQwMTdmZDRlYmMyNjc5MzUzOTI5MGYzMTobRkk=: 00:27:18.853 16:08:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:18.853 16:08:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:18.853 16:08:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MWVhOTM0NTE0MDFmNTNjMWZiNmVlY2IwMDdkNmU2M2QOY6Ga: 00:27:18.853 16:08:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OGU3NzkyYzU4ZmE1MGI4MjRkYTM3MDAzMzU4OTQ2MzgzZjA1MzU4MjQwMTdmZDRlYmMyNjc5MzUzOTI5MGYzMTobRkk=: ]] 00:27:18.853 16:08:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OGU3NzkyYzU4ZmE1MGI4MjRkYTM3MDAzMzU4OTQ2MzgzZjA1MzU4MjQwMTdmZDRlYmMyNjc5MzUzOTI5MGYzMTobRkk=: 00:27:18.853 16:08:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:27:18.853 16:08:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:18.853 16:08:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:18.853 16:08:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:18.853 16:08:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:18.853 16:08:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:18.853 16:08:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:27:18.854 16:08:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:18.854 16:08:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:18.854 16:08:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:18.854 16:08:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:18.854 16:08:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:18.854 16:08:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:18.854 16:08:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:18.854 16:08:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:18.854 16:08:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:18.854 16:08:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
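get_main_ns_ip (nvmf/common.sh@741-755) supplies the address used by every attach in this log. The trace shows it mapping each transport to the name of an environment variable and then dereferencing it; the sketch below is reconstructed from the @742-755 entries (the two separate [[ -z ... ]] traces at @747 are consistent with a short-circuited compound test, and the indirect expansion is inferred from [[ -z 10.0.0.1 ]] at @750):

  # Reconstruction of nvmf/common.sh@741-755 from the xtrace entries above.
  get_main_ns_ip() {
      local ip
      local -A ip_candidates=()
      ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
      ip_candidates["tcp"]=NVMF_INITIATOR_IP

      [[ -z $TEST_TRANSPORT || -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1  # @747
      ip=${ip_candidates[$TEST_TRANSPORT]}                                          # @748
      [[ -z ${!ip} ]] && return 1    # @750: indirect expansion yields 10.0.0.1 here
      echo "${!ip}"                  # @755
  }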
00:27:18.854 16:08:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:18.854 16:08:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:18.854 16:08:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:18.854 16:08:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:18.854 16:08:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:18.854 16:08:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:18.854 16:08:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:19.112 nvme0n1 00:27:19.112 16:08:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:19.112 16:08:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:19.112 16:08:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:19.112 16:08:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:19.112 16:08:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:19.112 16:08:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:19.112 16:08:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:19.112 16:08:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:19.112 16:08:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:19.112 16:08:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:19.112 16:08:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:19.112 16:08:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:19.112 16:08:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:27:19.112 16:08:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:19.112 16:08:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:19.112 16:08:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:19.112 16:08:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:19.112 16:08:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzY2YTdmYTcyMzBlYmY3NDM1MGZlZmM5OTNmODI3MjQxNmE0MjJmMzY0YjY0YWM5bmI3Ng==: 00:27:19.112 16:08:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YmNlMTdiZjRmNTU1YzM4ZTY3OTFlNTExYzc4MjE2MWI0ZDI2ZmUyODc4OTcwMjBkWICT/g==: 00:27:19.112 16:08:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:19.112 16:08:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:19.112 16:08:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzY2YTdmYTcyMzBlYmY3NDM1MGZlZmM5OTNmODI3MjQxNmE0MjJmMzY0YjY0YWM5bmI3Ng==: 00:27:19.112 16:08:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YmNlMTdiZjRmNTU1YzM4ZTY3OTFlNTExYzc4MjE2MWI0ZDI2ZmUyODc4OTcwMjBkWICT/g==: ]] 00:27:19.112 16:08:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YmNlMTdiZjRmNTU1YzM4ZTY3OTFlNTExYzc4MjE2MWI0ZDI2ZmUyODc4OTcwMjBkWICT/g==: 00:27:19.112 16:08:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 
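Each connect_authenticate call (host/auth.sh@55-65) is the host-side half of an iteration: it narrows SPDK's bdev_nvme layer to the one digest/dhgroup pair under test, attaches to the target with the matching key material, verifies the controller came up, and detaches again. Reconstructed from the @55-@65 entries in this trace ($hostnqn and $subnqn are stand-ins for the NQNs spelled out in the log):

  # Host-side sequence traced at host/auth.sh@55-65.
  connect_authenticate() {
      local digest=$1 dhgroup=$2 keyid=$3 ckey
      # Pass --dhchap-ctrlr-key only when a controller key exists for this id (@58).
      ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})

      rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" \
          --dhchap-dhgroups "$dhgroup"                                      # @60
      rpc_cmd bdev_nvme_attach_controller -b nvme0 -t "$TEST_TRANSPORT" -f ipv4 \
          -a "$(get_main_ns_ip)" -s 4420 -q "$hostnqn" -n "$subnqn" \
          --dhchap-key "key${keyid}" "${ckey[@]}"                           # @61
      [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]  # @64
      rpc_cmd bdev_nvme_detach_controller nvme0                             # @65
  }

The interleaved "nvme0n1" entries are the namespace of the freshly attached controller surfacing between the attach at @61 and the name check at @64.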
00:27:19.112 16:08:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:19.112 16:08:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:19.112 16:08:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:19.112 16:08:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:19.112 16:08:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:19.112 16:08:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:27:19.112 16:08:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:19.112 16:08:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:19.112 16:08:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:19.112 16:08:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:19.112 16:08:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:19.112 16:08:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:19.112 16:08:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:19.112 16:08:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:19.112 16:08:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:19.112 16:08:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:19.112 16:08:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:19.112 16:08:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:19.112 16:08:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:19.112 16:08:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:19.112 16:08:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:19.112 16:08:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:19.112 16:08:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:19.677 nvme0n1 00:27:19.677 16:08:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:19.677 16:08:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:19.677 16:08:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:19.677 16:08:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:19.677 16:08:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:19.677 16:08:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:19.677 16:08:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:19.677 16:08:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:19.677 16:08:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:19.677 16:08:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:19.677 16:08:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:19.677 16:08:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 
-- # for keyid in "${!keys[@]}" 00:27:19.677 16:08:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:27:19.677 16:08:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:19.677 16:08:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:19.677 16:08:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:19.677 16:08:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:19.677 16:08:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MTA4MTgyMmZmYWRmYTc5ZjYyOWI2NjQyOTU0ZGQzZDG/LO1A: 00:27:19.677 16:08:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NDNkZmQ3NGE5ZjVjZDIwZGRjODZjZWQ4NjUwYTI3NDBSxEQU: 00:27:19.677 16:08:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:19.677 16:08:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:19.677 16:08:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MTA4MTgyMmZmYWRmYTc5ZjYyOWI2NjQyOTU0ZGQzZDG/LO1A: 00:27:19.677 16:08:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NDNkZmQ3NGE5ZjVjZDIwZGRjODZjZWQ4NjUwYTI3NDBSxEQU: ]] 00:27:19.677 16:08:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NDNkZmQ3NGE5ZjVjZDIwZGRjODZjZWQ4NjUwYTI3NDBSxEQU: 00:27:19.677 16:08:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:27:19.677 16:08:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:19.677 16:08:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:19.677 16:08:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:19.677 16:08:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:19.678 16:08:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:19.678 16:08:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:27:19.678 16:08:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:19.678 16:08:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:19.678 16:08:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:19.678 16:08:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:19.678 16:08:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:19.678 16:08:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:19.678 16:08:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:19.678 16:08:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:19.678 16:08:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:19.678 16:08:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:19.678 16:08:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:19.678 16:08:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:19.678 16:08:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:19.678 16:08:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:19.678 16:08:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:19.678 16:08:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:19.678 16:08:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:19.935 nvme0n1 00:27:19.935 16:08:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:19.935 16:08:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:19.935 16:08:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:19.935 16:08:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:19.935 16:08:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:19.935 16:08:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:19.935 16:08:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:19.935 16:08:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:19.935 16:08:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:19.935 16:08:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:19.935 16:08:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:19.935 16:08:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:19.935 16:08:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:27:19.935 16:08:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:19.935 16:08:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:19.935 16:08:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:19.935 16:08:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:19.935 16:08:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZTdhMjdjNDJmNzgxZTI3NzE0MzNjODllZjQ5YzY5MmZhMjQ3MGViOTAwYmQzN2Rmae8/mw==: 00:27:19.936 16:08:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NzNiZGMwNDU3YzUyNzVkNzkzNzQ4YjY4YTkxMTljMzlKbnty: 00:27:19.936 16:08:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:19.936 16:08:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:19.936 16:08:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZTdhMjdjNDJmNzgxZTI3NzE0MzNjODllZjQ5YzY5MmZhMjQ3MGViOTAwYmQzN2Rmae8/mw==: 00:27:19.936 16:08:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NzNiZGMwNDU3YzUyNzVkNzkzNzQ4YjY4YTkxMTljMzlKbnty: ]] 00:27:19.936 16:08:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NzNiZGMwNDU3YzUyNzVkNzkzNzQ4YjY4YTkxMTljMzlKbnty: 00:27:19.936 16:08:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:27:19.936 16:08:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:19.936 16:08:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:19.936 16:08:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:19.936 16:08:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:19.936 16:08:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:19.936 16:08:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:27:20.193 16:08:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:20.193 16:08:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:20.193 16:08:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:20.193 16:08:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:20.193 16:08:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:20.193 16:08:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:20.193 16:08:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:20.193 16:08:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:20.193 16:08:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:20.193 16:08:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:20.193 16:08:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:20.193 16:08:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:20.193 16:08:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:20.193 16:08:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:20.193 16:08:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:20.193 16:08:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:20.193 16:08:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:20.450 nvme0n1 00:27:20.450 16:08:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:20.450 16:08:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:20.450 16:08:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:20.450 16:08:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:20.450 16:08:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:20.450 16:08:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:20.450 16:08:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:20.450 16:08:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:20.450 16:08:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:20.450 16:08:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:20.450 16:08:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:20.450 16:08:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:20.450 16:08:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:27:20.450 16:08:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:20.450 16:08:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:20.450 16:08:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:20.450 16:08:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:20.450 16:08:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:MDM1ZjhlMTI4Y2MyMDkzODk1OTRkYTMwMGY3M2Q2NmM0OWQ4Y2M5NTdmNGZkZjYzYTVhMjVmNmEwZGYzZGI4YSJezX8=: 00:27:20.450 16:08:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:20.450 16:08:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:20.450 16:08:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:20.451 16:08:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MDM1ZjhlMTI4Y2MyMDkzODk1OTRkYTMwMGY3M2Q2NmM0OWQ4Y2M5NTdmNGZkZjYzYTVhMjVmNmEwZGYzZGI4YSJezX8=: 00:27:20.451 16:08:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:20.451 16:08:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:27:20.451 16:08:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:20.451 16:08:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:20.451 16:08:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:20.451 16:08:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:20.451 16:08:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:20.451 16:08:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:27:20.451 16:08:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:20.451 16:08:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:20.451 16:08:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:20.451 16:08:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:20.451 16:08:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:20.451 16:08:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:20.451 16:08:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:20.451 16:08:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:20.451 16:08:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:20.451 16:08:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:20.451 16:08:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:20.451 16:08:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:20.451 16:08:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:20.451 16:08:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:20.451 16:08:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:20.451 16:08:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:20.451 16:08:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:21.015 nvme0n1 00:27:21.015 16:08:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:21.015 16:08:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:21.015 16:08:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:21.015 16:08:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:21.015 16:08:49 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:21.015 16:08:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:21.015 16:08:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:21.015 16:08:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:21.015 16:08:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:21.015 16:08:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:21.015 16:08:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:21.015 16:08:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:21.015 16:08:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:21.015 16:08:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:27:21.015 16:08:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:21.015 16:08:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:21.015 16:08:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:21.015 16:08:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:21.015 16:08:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MWVhOTM0NTE0MDFmNTNjMWZiNmVlY2IwMDdkNmU2M2QOY6Ga: 00:27:21.015 16:08:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OGU3NzkyYzU4ZmE1MGI4MjRkYTM3MDAzMzU4OTQ2MzgzZjA1MzU4MjQwMTdmZDRlYmMyNjc5MzUzOTI5MGYzMTobRkk=: 00:27:21.015 16:08:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:21.015 16:08:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:21.015 16:08:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MWVhOTM0NTE0MDFmNTNjMWZiNmVlY2IwMDdkNmU2M2QOY6Ga: 00:27:21.015 16:08:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OGU3NzkyYzU4ZmE1MGI4MjRkYTM3MDAzMzU4OTQ2MzgzZjA1MzU4MjQwMTdmZDRlYmMyNjc5MzUzOTI5MGYzMTobRkk=: ]] 00:27:21.015 16:08:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OGU3NzkyYzU4ZmE1MGI4MjRkYTM3MDAzMzU4OTQ2MzgzZjA1MzU4MjQwMTdmZDRlYmMyNjc5MzUzOTI5MGYzMTobRkk=: 00:27:21.015 16:08:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:27:21.015 16:08:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:21.015 16:08:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:21.015 16:08:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:21.015 16:08:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:21.015 16:08:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:21.015 16:08:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:27:21.015 16:08:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:21.015 16:08:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:21.015 16:08:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:21.015 16:08:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:21.015 16:08:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:21.015 16:08:49 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:27:21.015 16:08:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:21.015 16:08:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:21.016 16:08:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:21.016 16:08:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:21.016 16:08:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:21.016 16:08:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:21.016 16:08:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:21.016 16:08:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:21.016 16:08:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:21.016 16:08:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:21.016 16:08:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:21.580 nvme0n1 00:27:21.580 16:08:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:21.580 16:08:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:21.580 16:08:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:21.580 16:08:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:21.580 16:08:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:21.580 16:08:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:21.580 16:08:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:21.580 16:08:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:21.580 16:08:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:21.580 16:08:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:21.580 16:08:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:21.580 16:08:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:21.580 16:08:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:27:21.580 16:08:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:21.580 16:08:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:21.580 16:08:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:21.580 16:08:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:21.580 16:08:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzY2YTdmYTcyMzBlYmY3NDM1MGZlZmM5OTNmODI3MjQxNmE0MjJmMzY0YjY0YWM5bmI3Ng==: 00:27:21.580 16:08:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YmNlMTdiZjRmNTU1YzM4ZTY3OTFlNTExYzc4MjE2MWI0ZDI2ZmUyODc4OTcwMjBkWICT/g==: 00:27:21.580 16:08:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:21.580 16:08:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:21.580 16:08:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:MzY2YTdmYTcyMzBlYmY3NDM1MGZlZmM5OTNmODI3MjQxNmE0MjJmMzY0YjY0YWM5bmI3Ng==: 00:27:21.580 16:08:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YmNlMTdiZjRmNTU1YzM4ZTY3OTFlNTExYzc4MjE2MWI0ZDI2ZmUyODc4OTcwMjBkWICT/g==: ]] 00:27:21.580 16:08:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YmNlMTdiZjRmNTU1YzM4ZTY3OTFlNTExYzc4MjE2MWI0ZDI2ZmUyODc4OTcwMjBkWICT/g==: 00:27:21.580 16:08:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:27:21.580 16:08:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:21.580 16:08:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:21.580 16:08:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:21.580 16:08:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:21.580 16:08:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:21.580 16:08:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:27:21.580 16:08:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:21.580 16:08:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:21.580 16:08:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:21.580 16:08:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:21.580 16:08:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:21.580 16:08:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:21.580 16:08:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:21.580 16:08:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:21.580 16:08:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:21.580 16:08:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:21.580 16:08:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:21.580 16:08:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:21.580 16:08:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:21.580 16:08:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:21.580 16:08:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:21.580 16:08:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:21.580 16:08:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:22.144 nvme0n1 00:27:22.144 16:08:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:22.144 16:08:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:22.144 16:08:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:22.144 16:08:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:22.144 16:08:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:22.144 16:08:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:22.144 16:08:51 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:22.144 16:08:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:22.144 16:08:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:22.144 16:08:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:22.144 16:08:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:22.144 16:08:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:22.144 16:08:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:27:22.144 16:08:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:22.144 16:08:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:22.144 16:08:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:22.144 16:08:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:22.144 16:08:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MTA4MTgyMmZmYWRmYTc5ZjYyOWI2NjQyOTU0ZGQzZDG/LO1A: 00:27:22.144 16:08:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NDNkZmQ3NGE5ZjVjZDIwZGRjODZjZWQ4NjUwYTI3NDBSxEQU: 00:27:22.144 16:08:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:22.144 16:08:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:22.144 16:08:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MTA4MTgyMmZmYWRmYTc5ZjYyOWI2NjQyOTU0ZGQzZDG/LO1A: 00:27:22.144 16:08:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NDNkZmQ3NGE5ZjVjZDIwZGRjODZjZWQ4NjUwYTI3NDBSxEQU: ]] 00:27:22.144 16:08:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NDNkZmQ3NGE5ZjVjZDIwZGRjODZjZWQ4NjUwYTI3NDBSxEQU: 00:27:22.144 16:08:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:27:22.144 16:08:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:22.144 16:08:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:22.144 16:08:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:22.144 16:08:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:22.144 16:08:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:22.144 16:08:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:27:22.144 16:08:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:22.144 16:08:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:22.144 16:08:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:22.144 16:08:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:22.144 16:08:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:22.144 16:08:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:22.144 16:08:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:22.144 16:08:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:22.144 16:08:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:22.144 16:08:51 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:22.144 16:08:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:22.144 16:08:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:22.144 16:08:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:22.144 16:08:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:22.144 16:08:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:22.144 16:08:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:22.144 16:08:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:23.074 nvme0n1 00:27:23.074 16:08:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:23.074 16:08:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:23.074 16:08:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:23.074 16:08:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:23.074 16:08:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:23.074 16:08:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:23.074 16:08:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:23.074 16:08:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:23.074 16:08:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:23.074 16:08:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:23.074 16:08:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:23.074 16:08:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:23.074 16:08:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:27:23.074 16:08:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:23.074 16:08:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:23.074 16:08:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:23.074 16:08:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:23.075 16:08:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZTdhMjdjNDJmNzgxZTI3NzE0MzNjODllZjQ5YzY5MmZhMjQ3MGViOTAwYmQzN2Rmae8/mw==: 00:27:23.075 16:08:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NzNiZGMwNDU3YzUyNzVkNzkzNzQ4YjY4YTkxMTljMzlKbnty: 00:27:23.075 16:08:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:23.075 16:08:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:23.075 16:08:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZTdhMjdjNDJmNzgxZTI3NzE0MzNjODllZjQ5YzY5MmZhMjQ3MGViOTAwYmQzN2Rmae8/mw==: 00:27:23.075 16:08:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NzNiZGMwNDU3YzUyNzVkNzkzNzQ4YjY4YTkxMTljMzlKbnty: ]] 00:27:23.075 16:08:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NzNiZGMwNDU3YzUyNzVkNzkzNzQ4YjY4YTkxMTljMzlKbnty: 00:27:23.075 16:08:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:27:23.075 16:08:51 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:23.075 16:08:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:23.075 16:08:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:23.075 16:08:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:23.075 16:08:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:23.075 16:08:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:27:23.075 16:08:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:23.075 16:08:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:23.075 16:08:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:23.075 16:08:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:23.075 16:08:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:23.075 16:08:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:23.075 16:08:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:23.075 16:08:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:23.075 16:08:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:23.075 16:08:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:23.075 16:08:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:23.075 16:08:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:23.075 16:08:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:23.075 16:08:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:23.075 16:08:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:23.075 16:08:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:23.075 16:08:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:23.639 nvme0n1 00:27:23.639 16:08:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:23.639 16:08:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:23.639 16:08:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:23.639 16:08:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:23.639 16:08:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:23.639 16:08:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:23.639 16:08:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:23.639 16:08:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:23.639 16:08:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:23.639 16:08:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:23.639 16:08:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:23.639 16:08:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:27:23.639 16:08:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:27:23.639 16:08:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:23.639 16:08:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:23.639 16:08:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:23.639 16:08:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:23.639 16:08:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MDM1ZjhlMTI4Y2MyMDkzODk1OTRkYTMwMGY3M2Q2NmM0OWQ4Y2M5NTdmNGZkZjYzYTVhMjVmNmEwZGYzZGI4YSJezX8=: 00:27:23.639 16:08:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:23.639 16:08:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:23.639 16:08:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:23.639 16:08:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MDM1ZjhlMTI4Y2MyMDkzODk1OTRkYTMwMGY3M2Q2NmM0OWQ4Y2M5NTdmNGZkZjYzYTVhMjVmNmEwZGYzZGI4YSJezX8=: 00:27:23.639 16:08:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:23.639 16:08:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:27:23.639 16:08:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:23.639 16:08:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:23.639 16:08:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:23.639 16:08:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:23.639 16:08:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:23.639 16:08:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:27:23.639 16:08:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:23.639 16:08:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:23.639 16:08:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:23.639 16:08:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:23.639 16:08:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:23.639 16:08:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:23.639 16:08:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:23.639 16:08:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:23.639 16:08:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:23.639 16:08:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:23.639 16:08:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:23.639 16:08:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:23.639 16:08:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:23.639 16:08:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:23.639 16:08:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:23.639 16:08:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:27:23.639 16:08:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:24.203 nvme0n1 00:27:24.203 16:08:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:24.203 16:08:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:24.203 16:08:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:24.203 16:08:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:24.203 16:08:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:24.203 16:08:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:24.203 16:08:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:24.203 16:08:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:24.203 16:08:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:24.203 16:08:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:24.203 16:08:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:24.203 16:08:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:27:24.203 16:08:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:24.203 16:08:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:24.203 16:08:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:24.203 16:08:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:24.203 16:08:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzY2YTdmYTcyMzBlYmY3NDM1MGZlZmM5OTNmODI3MjQxNmE0MjJmMzY0YjY0YWM5bmI3Ng==: 00:27:24.203 16:08:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YmNlMTdiZjRmNTU1YzM4ZTY3OTFlNTExYzc4MjE2MWI0ZDI2ZmUyODc4OTcwMjBkWICT/g==: 00:27:24.203 16:08:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:24.203 16:08:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:24.203 16:08:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzY2YTdmYTcyMzBlYmY3NDM1MGZlZmM5OTNmODI3MjQxNmE0MjJmMzY0YjY0YWM5bmI3Ng==: 00:27:24.203 16:08:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YmNlMTdiZjRmNTU1YzM4ZTY3OTFlNTExYzc4MjE2MWI0ZDI2ZmUyODc4OTcwMjBkWICT/g==: ]] 00:27:24.203 16:08:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YmNlMTdiZjRmNTU1YzM4ZTY3OTFlNTExYzc4MjE2MWI0ZDI2ZmUyODc4OTcwMjBkWICT/g==: 00:27:24.203 16:08:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:27:24.203 16:08:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:24.203 16:08:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:24.203 16:08:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:24.203 16:08:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:27:24.203 16:08:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:24.203 16:08:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:24.203 16:08:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:24.203 16:08:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:24.203 
16:08:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:27:24.203 16:08:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:27:24.203 16:08:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:27:24.203 16:08:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:27:24.203 16:08:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:27:24.203 16:08:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:27:24.203 16:08:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0
00:27:24.203 16:08:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0
00:27:24.204 16:08:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0
00:27:24.204 16:08:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd
00:27:24.204 16:08:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:27:24.204 16:08:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd
00:27:24.204 16:08:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:27:24.204 16:08:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0
00:27:24.204 16:08:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:27:24.204 16:08:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:24.204 request:
00:27:24.204 {
00:27:24.204   "name": "nvme0",
00:27:24.204   "trtype": "tcp",
00:27:24.204   "traddr": "10.0.0.1",
00:27:24.204   "adrfam": "ipv4",
00:27:24.204   "trsvcid": "4420",
00:27:24.204   "subnqn": "nqn.2024-02.io.spdk:cnode0",
00:27:24.204   "hostnqn": "nqn.2024-02.io.spdk:host0",
00:27:24.204   "prchk_reftag": false,
00:27:24.204   "prchk_guard": false,
00:27:24.204   "hdgst": false,
00:27:24.204   "ddgst": false,
00:27:24.204   "method": "bdev_nvme_attach_controller",
00:27:24.204   "req_id": 1
00:27:24.204 }
00:27:24.204 Got JSON-RPC error response
00:27:24.204 response:
00:27:24.204 {
00:27:24.204   "code": -5,
00:27:24.204   "message": "Input/output error"
00:27:24.204 }
00:27:24.204 16:08:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]]
00:27:24.204 16:08:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1
00:27:24.204 16:08:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 ))
00:27:24.204 16:08:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]]
00:27:24.204 16:08:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 ))
00:27:24.204 16:08:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers
00:27:24.204 16:08:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # jq length
00:27:24.204 16:08:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:27:24.204 16:08:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:24.204 16:08:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:27:24.204 16:08:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 ))
00:27:24.204 16:08:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip
00:27:24.204 16:08:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:27:24.204 16:08:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:27:24.204 16:08:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:27:24.204 16:08:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:27:24.204 16:08:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:27:24.204 16:08:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:27:24.204 16:08:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:27:24.204 16:08:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:27:24.204 16:08:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:27:24.204 16:08:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:27:24.204 16:08:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2
00:27:24.204 16:08:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0
00:27:24.204 16:08:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2
00:27:24.204 16:08:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd
00:27:24.204 16:08:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:27:24.204 16:08:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd
00:27:24.204 16:08:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:27:24.204 16:08:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2
00:27:24.204 16:08:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:27:24.204 16:08:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:24.462 request:
00:27:24.462 {
00:27:24.462   "name": "nvme0",
00:27:24.462   "trtype": "tcp",
00:27:24.462   "traddr": "10.0.0.1",
00:27:24.462   "adrfam": "ipv4",
00:27:24.462   "trsvcid": "4420",
00:27:24.462   "subnqn": "nqn.2024-02.io.spdk:cnode0",
00:27:24.462   "hostnqn": "nqn.2024-02.io.spdk:host0",
00:27:24.462   "prchk_reftag": false,
00:27:24.462   "prchk_guard": false,
00:27:24.462   "hdgst": false,
00:27:24.462   "ddgst": false,
00:27:24.462   "dhchap_key": "key2",
00:27:24.462   "method": "bdev_nvme_attach_controller",
00:27:24.462   "req_id": 1
00:27:24.462 }
00:27:24.462 Got JSON-RPC error response
00:27:24.462 response:
00:27:24.462 {
00:27:24.462   "code": -5,
00:27:24.462   "message": "Input/output error"
00:27:24.462 }
00:27:24.462 16:08:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]]
00:27:24.462 16:08:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1
00:27:24.462 16:08:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 ))
00:27:24.462 16:08:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]]
00:27:24.462 16:08:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 ))
00:27:24.462 16:08:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # jq length
00:27:24.462 16:08:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers
00:27:24.462 16:08:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:27:24.462 16:08:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:24.462 16:08:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:27:24.462 16:08:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 ))
00:27:24.462 16:08:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip
00:27:24.462 16:08:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:27:24.462 16:08:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:27:24.462 16:08:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:27:24.462 16:08:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:27:24.462 16:08:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:27:24.462 16:08:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:27:24.462 16:08:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:27:24.462 16:08:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:27:24.462 16:08:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:27:24.462 16:08:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:27:24.462 16:08:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2
00:27:24.462 16:08:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0
00:27:24.462 16:08:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2
00:27:24.462 16:08:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd
00:27:24.462 16:08:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:27:24.462 16:08:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd
00:27:24.462 16:08:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:27:24.462 16:08:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2
00:27:24.462 16:08:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:27:24.462 16:08:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:24.462 request:
00:27:24.462 {
00:27:24.462   "name": "nvme0",
00:27:24.462   "trtype": "tcp",
00:27:24.462   "traddr": "10.0.0.1",
00:27:24.462   "adrfam": "ipv4",
00:27:24.462   "trsvcid": "4420",
00:27:24.462   "subnqn": "nqn.2024-02.io.spdk:cnode0",
00:27:24.462   "hostnqn": "nqn.2024-02.io.spdk:host0",
00:27:24.462   "prchk_reftag": false,
00:27:24.462   "prchk_guard": false,
00:27:24.462   "hdgst": false,
00:27:24.462   "ddgst": false,
00:27:24.462   "dhchap_key": "key1",
00:27:24.462   "dhchap_ctrlr_key": "ckey2",
00:27:24.462   "method": "bdev_nvme_attach_controller",
00:27:24.462   "req_id": 1
00:27:24.462 }
00:27:24.462 Got JSON-RPC error response
00:27:24.462 response:
00:27:24.462 {
00:27:24.462   "code": -5,
00:27:24.462   "message": "Input/output error"
00:27:24.462 }
00:27:24.462 16:08:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]]
00:27:24.462 16:08:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1
00:27:24.462 16:08:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 ))
00:27:24.462 16:08:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]]
00:27:24.462 16:08:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 ))
00:27:24.462 16:08:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@127 -- # trap - SIGINT SIGTERM EXIT
00:27:24.462 16:08:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@128 -- # cleanup
00:27:24.462 16:08:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini
00:27:24.462 16:08:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@488 -- # nvmfcleanup
00:27:24.462 16:08:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@117 -- # sync
00:27:24.462 16:08:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:27:24.462 16:08:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@120 -- # set +e
00:27:24.462 16:08:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@121 -- # for i in {1..20}
00:27:24.462 16:08:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:27:24.462 rmmod nvme_tcp
00:27:24.462 rmmod nvme_fabrics
00:27:24.462 16:08:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:27:24.462 16:08:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@124 -- # set -e
00:27:24.462 16:08:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@125 -- # return 0
00:27:24.462 16:08:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@489 -- # '[' -n 3895906 ']'
00:27:24.462 16:08:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@490 -- # killprocess 3895906
00:27:24.462 16:08:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@948 -- # '[' -z 3895906 ']'
00:27:24.462 16:08:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@952 -- # kill -0 3895906
00:27:24.462 16:08:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@953 -- # uname
00:27:24.462 16:08:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:27:24.462 16:08:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3895906
00:27:24.720 16:08:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:27:24.720 16:08:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:27:24.720 16:08:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3895906'
00:27:24.720 killing process with pid 3895906
00:27:24.720 16:08:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@967 -- # kill 3895906
00:27:24.720 16:08:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@972 -- # wait 3895906
00:27:24.720 16:08:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@492 -- # '[' '' == iso
']' 00:27:24.720 16:08:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:24.720 16:08:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:24.720 16:08:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:24.720 16:08:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:24.720 16:08:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:24.720 16:08:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:24.720 16:08:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:27.282 16:08:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:27:27.283 16:08:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:27:27.283 16:08:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:27:27.283 16:08:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:27:27.283 16:08:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:27:27.283 16:08:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@686 -- # echo 0 00:27:27.283 16:08:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:27:27.283 16:08:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:27:27.283 16:08:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:27:27.283 16:08:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:27:27.283 16:08:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:27:27.283 16:08:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:27:27.283 16:08:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:27:29.806 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:27:29.806 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:27:29.806 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:27:29.806 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:27:29.806 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:27:29.806 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:27:29.806 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:27:29.806 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:27:29.806 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:27:29.806 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:27:29.806 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:27:29.806 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:27:29.806 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:27:29.806 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:27:29.806 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:27:29.806 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:27:30.373 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:27:30.631 16:08:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.aEo /tmp/spdk.key-null.v4D /tmp/spdk.key-sha256.4nB /tmp/spdk.key-sha384.zgV /tmp/spdk.key-sha512.mWI 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:27:30.631 16:08:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:27:33.162 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:27:33.162 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:27:33.162 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:27:33.162 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:27:33.162 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:27:33.162 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:27:33.162 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:27:33.162 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:27:33.162 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:27:33.162 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:27:33.162 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:27:33.162 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:27:33.162 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:27:33.162 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:27:33.162 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:27:33.162 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:27:33.162 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:27:33.162 00:27:33.162 real 0m48.543s 00:27:33.162 user 0m43.530s 00:27:33.162 sys 0m11.117s 00:27:33.162 16:09:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:33.162 16:09:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:33.162 ************************************ 00:27:33.162 END TEST nvmf_auth_host 00:27:33.162 ************************************ 00:27:33.162 16:09:01 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:27:33.162 16:09:01 nvmf_tcp -- nvmf/nvmf.sh@107 -- # [[ tcp == \t\c\p ]] 00:27:33.162 16:09:01 nvmf_tcp -- nvmf/nvmf.sh@108 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:27:33.162 16:09:01 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:27:33.162 16:09:01 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:33.162 16:09:01 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:33.162 ************************************ 00:27:33.162 START TEST nvmf_digest 00:27:33.162 ************************************ 00:27:33.162 16:09:01 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:27:33.163 * Looking for test storage... 
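The cleanup traced above dismantles the kernel-nvmet side of the auth test through configfs, entry by entry. For orientation, a rough bash sketch of the setup that those rm/rmdir calls undo is shown below. The configfs paths come straight from the log; the creation order and the mkdir/ln wiring are a reconstruction and are not shown in this excerpt.

    # Hypothetical reconstruction of the kernel target layout removed above.
    # Paths match the log; the setup ordering is an assumption.
    modprobe nvmet nvmet-tcp
    cd /sys/kernel/config/nvmet
    mkdir -p subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1
    mkdir hosts/nqn.2024-02.io.spdk:host0
    ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 \
        subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0
    mkdir ports/1
    ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 \
        ports/1/subsystems/nqn.2024-02.io.spdk:cnode0

The teardown in the log runs the same steps in reverse: unlink allowed_hosts, rmdir the host entry, disable the namespace with echo 0, remove the port's subsystem link, rmdir namespaces/1, ports/1 and the subsystem, then modprobe -r nvmet_tcp nvmet.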
00:27:33.163 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:33.163 16:09:01 nvmf_tcp.nvmf_digest -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:33.163 16:09:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:27:33.163 16:09:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:33.163 16:09:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:33.163 16:09:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:33.163 16:09:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:33.163 16:09:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:33.163 16:09:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:33.163 16:09:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:33.163 16:09:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:33.163 16:09:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:33.163 16:09:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:33.163 16:09:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:27:33.163 16:09:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:27:33.163 16:09:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:33.163 16:09:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:33.163 16:09:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:33.163 16:09:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:33.163 16:09:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:33.163 16:09:01 nvmf_tcp.nvmf_digest -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:33.163 16:09:01 nvmf_tcp.nvmf_digest -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:33.163 16:09:01 nvmf_tcp.nvmf_digest -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:33.163 16:09:01 nvmf_tcp.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:33.163 16:09:01 nvmf_tcp.nvmf_digest -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:33.163 16:09:01 nvmf_tcp.nvmf_digest -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:33.163 16:09:01 nvmf_tcp.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:27:33.163 16:09:01 nvmf_tcp.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:33.163 16:09:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@47 -- # : 0 00:27:33.163 16:09:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:33.163 16:09:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:33.163 16:09:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:33.163 16:09:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:33.163 16:09:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:33.163 16:09:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:33.163 16:09:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:33.163 16:09:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:33.163 16:09:01 nvmf_tcp.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:27:33.163 16:09:01 nvmf_tcp.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:27:33.163 16:09:01 nvmf_tcp.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:27:33.163 16:09:01 nvmf_tcp.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:27:33.163 16:09:01 nvmf_tcp.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:27:33.163 16:09:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:33.163 16:09:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:33.163 16:09:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:33.163 16:09:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:33.163 16:09:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:33.163 16:09:01 
nvmf_tcp.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:33.163 16:09:01 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:33.163 16:09:01 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:33.163 16:09:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:27:33.163 16:09:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:33.163 16:09:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@285 -- # xtrace_disable 00:27:33.163 16:09:01 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:27:38.429 16:09:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:38.429 16:09:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@291 -- # pci_devs=() 00:27:38.429 16:09:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:38.429 16:09:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:38.429 16:09:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:38.429 16:09:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:38.429 16:09:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:38.429 16:09:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@295 -- # net_devs=() 00:27:38.429 16:09:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:38.429 16:09:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@296 -- # e810=() 00:27:38.429 16:09:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@296 -- # local -ga e810 00:27:38.429 16:09:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@297 -- # x722=() 00:27:38.429 16:09:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@297 -- # local -ga x722 00:27:38.429 16:09:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@298 -- # mlx=() 00:27:38.429 16:09:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@298 -- # local -ga mlx 00:27:38.429 16:09:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:38.429 16:09:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:38.429 16:09:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:38.429 16:09:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:38.429 16:09:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:38.429 16:09:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:38.429 16:09:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:38.429 16:09:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:38.429 16:09:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:38.429 16:09:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:38.429 16:09:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:38.429 16:09:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:38.429 16:09:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:38.429 16:09:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:38.429 16:09:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@329 -- # [[ 
e810 == e810 ]] 00:27:38.429 16:09:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:38.429 16:09:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:38.429 16:09:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:38.429 16:09:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:27:38.429 Found 0000:86:00.0 (0x8086 - 0x159b) 00:27:38.429 16:09:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:38.429 16:09:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:38.429 16:09:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:38.429 16:09:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:38.429 16:09:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:38.429 16:09:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:38.429 16:09:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:27:38.429 Found 0000:86:00.1 (0x8086 - 0x159b) 00:27:38.429 16:09:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:38.429 16:09:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:38.429 16:09:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:38.429 16:09:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:38.429 16:09:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:38.429 16:09:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:38.429 16:09:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:38.429 16:09:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:38.429 16:09:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:38.429 16:09:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:38.429 16:09:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:38.429 16:09:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:38.429 16:09:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:38.429 16:09:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:38.429 16:09:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:38.429 16:09:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:27:38.429 Found net devices under 0000:86:00.0: cvl_0_0 00:27:38.429 16:09:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:38.429 16:09:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:38.429 16:09:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:38.429 16:09:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:38.429 16:09:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:38.429 16:09:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:38.429 16:09:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:38.429 16:09:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@399 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}")
00:27:38.429 16:09:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1'
00:27:38.429 Found net devices under 0000:86:00.1: cvl_0_1
00:27:38.429 16:09:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}")
00:27:38.429 16:09:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@404 -- # (( 2 == 0 ))
00:27:38.429 16:09:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # is_hw=yes
00:27:38.430 16:09:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@416 -- # [[ yes == yes ]]
00:27:38.430 16:09:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@417 -- # [[ tcp == tcp ]]
00:27:38.430 16:09:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@418 -- # nvmf_tcp_init
00:27:38.430 16:09:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1
00:27:38.430 16:09:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:27:38.430 16:09:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:27:38.430 16:09:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@234 -- # (( 2 > 1 ))
00:27:38.430 16:09:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:27:38.430 16:09:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:27:38.430 16:09:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP=
00:27:38.430 16:09:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:27:38.430 16:09:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:27:38.430 16:09:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0
00:27:38.430 16:09:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1
00:27:38.430 16:09:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk
00:27:38.430 16:09:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:27:38.430 16:09:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:27:38.430 16:09:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:27:38.430 16:09:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up
00:27:38.430 16:09:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:27:38.430 16:09:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:27:38.430 16:09:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:27:38.430 16:09:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:27:38.430 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:27:38.430 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.260 ms
00:27:38.430 
00:27:38.430 --- 10.0.0.2 ping statistics ---
00:27:38.430 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:27:38.430 rtt min/avg/max/mdev = 0.260/0.260/0.260/0.000 ms
00:27:38.430 16:09:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:27:38.430 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:27:38.430 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.222 ms
00:27:38.430 
00:27:38.430 --- 10.0.0.1 ping statistics ---
00:27:38.430 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:27:38.430 rtt min/avg/max/mdev = 0.222/0.222/0.222/0.000 ms
00:27:38.430 16:09:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:27:38.430 16:09:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@422 -- # return 0
00:27:38.430 16:09:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:27:38.430 16:09:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:27:38.430 16:09:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:27:38.430 16:09:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:27:38.430 16:09:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:27:38.430 16:09:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:27:38.430 16:09:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:27:38.430 16:09:07 nvmf_tcp.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT
00:27:38.430 16:09:07 nvmf_tcp.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]]
00:27:38.430 16:09:07 nvmf_tcp.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest
00:27:38.430 16:09:07 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:27:38.430 16:09:07 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1105 -- # xtrace_disable
00:27:38.430 16:09:07 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x
00:27:38.430 ************************************
00:27:38.430 START TEST nvmf_digest_clean
************************************
00:27:38.430 16:09:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1123 -- # run_digest
00:27:38.430 16:09:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator
00:27:38.430 16:09:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]]
00:27:38.430 16:09:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false
00:27:38.430 16:09:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc")
00:27:38.430 16:09:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc
00:27:38.430 16:09:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:27:38.430 16:09:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@722 -- # xtrace_disable
00:27:38.430 16:09:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x
00:27:38.430 16:09:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@481 -- # nvmfpid=3909071
00:27:38.430 16:09:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@482 -- # waitforlisten 3909071
00:27:38.430 16:09:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc
00:27:38.430 16:09:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 3909071 ']'
00:27:38.430 16:09:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:27:38.430 16:09:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100
00:27:38.430 16:09:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:27:38.430 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:27:38.430 16:09:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable
00:27:38.430 16:09:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x
00:27:38.430 [2024-07-15 16:09:07.252233] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization...
00:27:38.430 [2024-07-15 16:09:07.252275] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:27:38.430 EAL: No free 2048 kB hugepages reported on node 1
00:27:38.430 [2024-07-15 16:09:07.309611] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:27:38.688 [2024-07-15 16:09:07.390342] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:27:38.688 [2024-07-15 16:09:07.390376] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:27:38.688 [2024-07-15 16:09:07.390384] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:27:38.688 [2024-07-15 16:09:07.390390] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running.
00:27:38.688 [2024-07-15 16:09:07.390395] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
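The nvmf_tcp_init sequence traced above builds the point-to-point topology the digest tests run on: one port of the two-port NIC (cvl_0_0) is moved into a private network namespace and serves as the target side, its peer (cvl_0_1) stays in the root namespace as the initiator side, and the target application is launched inside the namespace. A minimal sketch of the same sequence, with every command taken from the trace and only the SPDK binary path shortened:

    # Target port lives in its own netns; initiator stays in the root netns.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2    # initiator-to-target reachability check
    # nvmf_tgt runs inside the namespace so it listens on the target port:
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc

Running the target in a namespace is what lets a single host exercise real NIC-to-NIC TCP traffic instead of loopback.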
00:27:38.688 [2024-07-15 16:09:07.390413] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:39.253 16:09:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:39.253 16:09:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:27:39.253 16:09:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:39.253 16:09:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@728 -- # xtrace_disable 00:27:39.253 16:09:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:27:39.253 16:09:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:39.253 16:09:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:27:39.253 16:09:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:27:39.253 16:09:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:27:39.253 16:09:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:39.253 16:09:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:27:39.253 null0 00:27:39.253 [2024-07-15 16:09:08.169957] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:39.511 [2024-07-15 16:09:08.194117] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:39.511 16:09:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:39.511 16:09:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:27:39.511 16:09:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:27:39.511 16:09:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:27:39.511 16:09:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:27:39.511 16:09:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:27:39.511 16:09:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:27:39.511 16:09:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:27:39.511 16:09:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=3909280 00:27:39.511 16:09:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 3909280 /var/tmp/bperf.sock 00:27:39.511 16:09:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:27:39.511 16:09:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 3909280 ']' 00:27:39.511 16:09:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:27:39.511 16:09:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:39.511 16:09:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 
00:27:39.511 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:27:39.511 16:09:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:39.512 16:09:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:27:39.512 [2024-07-15 16:09:08.243560] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 00:27:39.512 [2024-07-15 16:09:08.243609] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3909280 ] 00:27:39.512 EAL: No free 2048 kB hugepages reported on node 1 00:27:39.512 [2024-07-15 16:09:08.298093] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:39.512 [2024-07-15 16:09:08.377993] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:27:40.445 16:09:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:40.445 16:09:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:27:40.445 16:09:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:27:40.445 16:09:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:27:40.445 16:09:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:27:40.445 16:09:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:40.445 16:09:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:41.051 nvme0n1 00:27:41.051 16:09:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:27:41.051 16:09:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:27:41.051 Running I/O for 2 seconds... 
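While the run executes, note that the run_bperf helper driving it follows a fixed pattern that repeats for every workload in this suite: start bdevperf paused, initialize the framework over its private RPC socket, attach an NVMe-oF controller with the digest option under test (here --ddgst, TCP data digest), then kick off the I/O through the bdevperf.py helper. Condensed from the trace above, with arguments as logged and $SPDK standing in for the checkout root:

    # Launch bdevperf suspended so the bdev can be configured over RPC first.
    $SPDK/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock \
        -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc &
    $SPDK/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init
    # Data digest (--ddgst) forces a crc32c computation on every data PDU.
    $SPDK/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller \
        --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    $SPDK/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests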
00:27:42.957 00:27:42.957 Latency(us) 00:27:42.957 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:42.957 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:27:42.957 nvme0n1 : 2.00 26660.18 104.14 0.00 0.00 4796.08 2236.77 15272.74 00:27:42.957 =================================================================================================================== 00:27:42.957 Total : 26660.18 104.14 0.00 0.00 4796.08 2236.77 15272.74 00:27:42.957 0 00:27:42.957 16:09:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:27:42.957 16:09:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:27:42.957 16:09:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:27:42.957 16:09:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:27:42.957 | select(.opcode=="crc32c") 00:27:42.957 | "\(.module_name) \(.executed)"' 00:27:42.957 16:09:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:27:43.216 16:09:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:27:43.216 16:09:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:27:43.216 16:09:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:27:43.216 16:09:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:27:43.216 16:09:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 3909280 00:27:43.216 16:09:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 3909280 ']' 00:27:43.216 16:09:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 3909280 00:27:43.216 16:09:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:27:43.216 16:09:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:27:43.216 16:09:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3909280 00:27:43.216 16:09:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:27:43.216 16:09:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:27:43.216 16:09:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3909280' 00:27:43.216 killing process with pid 3909280 00:27:43.216 16:09:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 3909280 00:27:43.216 Received shutdown signal, test time was about 2.000000 seconds 00:27:43.216 00:27:43.216 Latency(us) 00:27:43.216 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:43.216 =================================================================================================================== 00:27:43.216 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:43.216 16:09:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 3909280 00:27:43.475 16:09:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:27:43.475 16:09:12 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:27:43.475 16:09:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:27:43.475 16:09:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:27:43.475 16:09:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:27:43.475 16:09:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:27:43.475 16:09:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:27:43.475 16:09:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=3910207 00:27:43.475 16:09:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 3910207 /var/tmp/bperf.sock 00:27:43.475 16:09:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:27:43.475 16:09:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 3910207 ']' 00:27:43.475 16:09:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:27:43.475 16:09:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:43.475 16:09:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:27:43.475 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:27:43.475 16:09:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:43.475 16:09:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:27:43.475 [2024-07-15 16:09:12.226519] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 00:27:43.475 [2024-07-15 16:09:12.226566] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3910207 ] 00:27:43.475 I/O size of 131072 is greater than zero copy threshold (65536). 00:27:43.475 Zero copy mechanism will not be used. 
00:27:43.475 EAL: No free 2048 kB hugepages reported on node 1 00:27:43.475 [2024-07-15 16:09:12.281432] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:43.475 [2024-07-15 16:09:12.354196] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:27:44.426 16:09:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:44.426 16:09:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:27:44.426 16:09:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:27:44.426 16:09:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:27:44.426 16:09:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:27:44.426 16:09:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:44.426 16:09:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:44.689 nvme0n1 00:27:44.689 16:09:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:27:44.689 16:09:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:27:44.947 I/O size of 131072 is greater than zero copy threshold (65536). 00:27:44.947 Zero copy mechanism will not be used. 00:27:44.947 Running I/O for 2 seconds... 
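Each of these runs ends the same way: once perform_tests finishes, the harness pulls accel framework counters off the bperf socket and checks that the crc32c digests were really computed, and in the expected module. A sketch of that check with an illustrative reply (the JSON below is abbreviated and the counter value is made up; the jq program is the one in the trace):

  # hypothetical accel_get_stats reply, trimmed to the fields the filter reads
  reply='{"operations":[{"opcode":"crc32c","executed":53451,"failed":0,"module_name":"software"}]}'
  echo "$reply" | jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"'
  # prints "software 53451"; read into acc_module/acc_executed, the test then
  # requires acc_executed > 0 and acc_module == software (scan_dsa=false, no DSA offload)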
00:27:46.850
00:27:46.850 Latency(us)
00:27:46.850 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:27:46.850 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072)
00:27:46.850 nvme0n1 : 2.00 5030.25 628.78 0.00 0.00 3177.91 947.42 5413.84
00:27:46.850 ===================================================================================================================
00:27:46.850 Total : 5030.25 628.78 0.00 0.00 3177.91 947.42 5413.84
00:27:46.850 0
00:27:46.850 16:09:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed
00:27:46.850 16:09:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats
00:27:46.850 16:09:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats
00:27:46.850 16:09:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[]
00:27:46.850 | select(.opcode=="crc32c")
00:27:46.850 | "\(.module_name) \(.executed)"'
00:27:46.850 16:09:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats
00:27:47.108 16:09:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false
00:27:47.108 16:09:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software
00:27:47.108 16:09:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 ))
00:27:47.108 16:09:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:27:47.108 16:09:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 3910207
00:27:47.108 16:09:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 3910207 ']'
00:27:47.108 16:09:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 3910207
00:27:47.108 16:09:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname
00:27:47.108 16:09:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:27:47.108 16:09:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3910207
00:27:47.108 16:09:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1
00:27:47.108 16:09:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
00:27:47.108 16:09:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3910207'
00:27:47.108 killing process with pid 3910207
00:27:47.108 16:09:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 3910207
00:27:47.108 Received shutdown signal, test time was about 2.000000 seconds
00:27:47.108
00:27:47.108 Latency(us)
00:27:47.108 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:27:47.108 ===================================================================================================================
00:27:47.108 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:27:47.108 16:09:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 3910207
00:27:47.367 16:09:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false
00:27:47.367 16:09:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa
00:27:47.367 16:09:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module
00:27:47.367 16:09:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite
00:27:47.367 16:09:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096
00:27:47.367 16:09:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128
00:27:47.367 16:09:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false
00:27:47.367 16:09:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=3910888
00:27:47.367 16:09:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 3910888 /var/tmp/bperf.sock
00:27:47.367 16:09:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc
00:27:47.367 16:09:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 3910888 ']'
00:27:47.367 16:09:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock
00:27:47.367 16:09:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100
00:27:47.367 16:09:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:27:47.367 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:27:47.367 16:09:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable
00:27:47.367 16:09:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x
00:27:47.367 [2024-07-15 16:09:16.150125] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization...
00:27:47.367 [2024-07-15 16:09:16.150172] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3910888 ] 00:27:47.367 EAL: No free 2048 kB hugepages reported on node 1 00:27:47.367 [2024-07-15 16:09:16.202863] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:47.367 [2024-07-15 16:09:16.270470] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:27:48.300 16:09:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:48.300 16:09:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:27:48.300 16:09:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:27:48.300 16:09:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:27:48.300 16:09:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:27:48.300 16:09:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:48.300 16:09:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:48.556 nvme0n1 00:27:48.556 16:09:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:27:48.556 16:09:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:27:48.813 Running I/O for 2 seconds... 
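Only the bdevperf arguments change between the four clean runs; a gloss of the invocation above (my reading of the flags, per SPDK's generic application options and bdevperf's usage text):

  #   -m 2              core mask 0x2: one reactor, pinned to core 1 (matches the reactor notice above)
  #   -r .../bperf.sock  private RPC socket, kept apart from the target's /var/tmp/spdk.sock
  #   -w randwrite      workload; -o 4096 I/O size in bytes; -t 2 run time in seconds; -q 128 queue depth
  #   -z                do not start I/O until bdevperf.py sends perform_tests
  #   --wait-for-rpc    do not finish subsystem init until framework_start_init arrives
  build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc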
00:27:50.710
00:27:50.710 Latency(us)
00:27:50.710 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:27:50.710 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:27:50.710 nvme0n1 : 2.00 28115.71 109.83 0.00 0.00 4548.59 2279.51 11967.44
00:27:50.710 ===================================================================================================================
00:27:50.710 Total : 28115.71 109.83 0.00 0.00 4548.59 2279.51 11967.44
00:27:50.710 0
00:27:50.710 16:09:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed
00:27:50.710 16:09:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats
00:27:50.710 16:09:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats
00:27:50.710 16:09:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[]
00:27:50.710 | select(.opcode=="crc32c")
00:27:50.710 | "\(.module_name) \(.executed)"'
00:27:50.710 16:09:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats
00:27:50.968 16:09:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false
00:27:50.968 16:09:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software
00:27:50.968 16:09:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 ))
00:27:50.968 16:09:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:27:50.968 16:09:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 3910888
00:27:50.968 16:09:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 3910888 ']'
00:27:50.968 16:09:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 3910888
00:27:50.968 16:09:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname
00:27:50.969 16:09:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:27:50.969 16:09:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3910888
00:27:50.969 16:09:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1
00:27:50.969 16:09:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
00:27:50.969 16:09:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3910888'
00:27:50.969 killing process with pid 3910888
00:27:50.969 16:09:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 3910888
00:27:50.969 Received shutdown signal, test time was about 2.000000 seconds
00:27:50.969
00:27:50.969 Latency(us)
00:27:50.969 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:27:50.969 ===================================================================================================================
00:27:50.969 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:27:50.969 16:09:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 3910888
00:27:51.227 16:09:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false
00:27:51.227 16:09:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa
00:27:51.227 16:09:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module
00:27:51.227 16:09:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite
00:27:51.227 16:09:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072
00:27:51.227 16:09:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16
00:27:51.227 16:09:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false
00:27:51.227 16:09:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=3911582
00:27:51.227 16:09:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 3911582 /var/tmp/bperf.sock
00:27:51.227 16:09:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc
00:27:51.227 16:09:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 3911582 ']'
00:27:51.227 16:09:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock
00:27:51.227 16:09:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100
00:27:51.227 16:09:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:27:51.227 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:27:51.227 16:09:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable
00:27:51.227 16:09:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x
00:27:51.227 [2024-07-15 16:09:19.998856] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization...
00:27:51.227 [2024-07-15 16:09:19.998900] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3911582 ]
00:27:51.227 I/O size of 131072 is greater than zero copy threshold (65536).
00:27:51.227 Zero copy mechanism will not be used.
00:27:51.227 EAL: No free 2048 kB hugepages reported on node 1 00:27:51.227 [2024-07-15 16:09:20.055288] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:51.227 [2024-07-15 16:09:20.128813] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:27:52.161 16:09:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:52.161 16:09:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:27:52.161 16:09:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:27:52.161 16:09:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:27:52.161 16:09:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:27:52.161 16:09:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:52.161 16:09:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:52.420 nvme0n1 00:27:52.420 16:09:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:27:52.420 16:09:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:27:52.679 I/O size of 131072 is greater than zero copy threshold (65536). 00:27:52.679 Zero copy mechanism will not be used. 00:27:52.679 Running I/O for 2 seconds... 
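The teardown that follows each run is autotest_common.sh's killprocess helper; a sketch reconstructed from the traced line numbers above (not the verbatim function; the sudo branch is elided):

  killprocess() {
    local pid=$1
    [ -z "$pid" ] && return 1                          # @948: require a pid
    kill -0 "$pid" || return                           # @952: only if still alive
    if [ "$(uname)" = Linux ]; then                    # @953
      process_name=$(ps --no-headers -o comm= "$pid")  # @954: reactor_1 for bperf
    fi
    if [ "$process_name" = sudo ]; then                # @958: sudo gets special
      : # handling in the real helper (kills the child process instead)
    fi
    echo "killing process with pid $pid"               # @966
    kill "$pid"                                        # @967: SIGTERM, then
    wait "$pid"                                        # @972: reap it
  }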
00:27:54.583
00:27:54.583 Latency(us)
00:27:54.583 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:27:54.583 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072)
00:27:54.583 nvme0n1 : 2.00 5572.01 696.50 0.00 0.00 2867.05 1816.49 7807.33
00:27:54.583 ===================================================================================================================
00:27:54.583 Total : 5572.01 696.50 0.00 0.00 2867.05 1816.49 7807.33
00:27:54.583 0
00:27:54.584 16:09:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed
00:27:54.584 16:09:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats
00:27:54.584 16:09:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats
00:27:54.584 16:09:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[]
00:27:54.584 | select(.opcode=="crc32c")
00:27:54.584 | "\(.module_name) \(.executed)"'
00:27:54.584 16:09:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats
00:27:54.848 16:09:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false
00:27:54.848 16:09:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software
00:27:54.848 16:09:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 ))
00:27:54.848 16:09:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:27:54.848 16:09:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 3911582
00:27:54.848 16:09:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 3911582 ']'
00:27:54.848 16:09:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 3911582
00:27:54.848 16:09:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname
00:27:54.848 16:09:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:27:54.848 16:09:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3911582
00:27:54.848 16:09:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1
00:27:54.848 16:09:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
00:27:54.848 16:09:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3911582'
00:27:54.848 killing process with pid 3911582
00:27:54.848 16:09:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 3911582
00:27:54.848 Received shutdown signal, test time was about 2.000000 seconds
00:27:54.848
00:27:54.848 Latency(us)
00:27:54.848 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:27:54.848 ===================================================================================================================
00:27:54.848 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:27:54.848 16:09:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 3911582
00:27:55.107 16:09:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 3909071
00:27:55.107 16:09:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 3909071 ']'
00:27:55.107 16:09:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 3909071
00:27:55.107 16:09:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname
00:27:55.107 16:09:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:27:55.107 16:09:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3909071
00:27:55.107 16:09:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:27:55.107 16:09:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:27:55.107 16:09:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3909071'
00:27:55.107 killing process with pid 3909071
00:27:55.107 16:09:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 3909071
00:27:55.107 16:09:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 3909071
00:27:55.366
00:27:55.366 real 0m16.868s
00:27:55.366 user 0m32.340s
00:27:55.366 sys 0m4.330s
00:27:55.366 16:09:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1124 -- # xtrace_disable
00:27:55.366 16:09:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x
00:27:55.366 ************************************
00:27:55.366 END TEST nvmf_digest_clean
00:27:55.366 ************************************
00:27:55.366 16:09:24 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1142 -- # return 0
00:27:55.366 16:09:24 nvmf_tcp.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error
00:27:55.366 16:09:24 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:27:55.366 16:09:24 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1105 -- # xtrace_disable
00:27:55.366 16:09:24 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x
00:27:55.366 ************************************
00:27:55.366 START TEST nvmf_digest_error
00:27:55.366 ************************************
00:27:55.366 16:09:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1123 -- # run_digest_error
00:27:55.366 16:09:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc
00:27:55.366 16:09:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:27:55.366 16:09:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@722 -- # xtrace_disable
00:27:55.366 16:09:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:27:55.366 16:09:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@481 -- # nvmfpid=3912305
00:27:55.366 16:09:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@482 -- # waitforlisten 3912305
00:27:55.366 16:09:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc
00:27:55.366 16:09:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 3912305 ']'
00:27:55.366 16:09:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local
rpc_addr=/var/tmp/spdk.sock 00:27:55.366 16:09:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:55.366 16:09:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:55.366 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:55.366 16:09:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:55.366 16:09:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:55.366 [2024-07-15 16:09:24.186237] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 00:27:55.366 [2024-07-15 16:09:24.186275] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:55.366 EAL: No free 2048 kB hugepages reported on node 1 00:27:55.366 [2024-07-15 16:09:24.243734] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:55.625 [2024-07-15 16:09:24.314764] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:55.625 [2024-07-15 16:09:24.314798] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:55.625 [2024-07-15 16:09:24.314805] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:55.625 [2024-07-15 16:09:24.314811] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:55.625 [2024-07-15 16:09:24.314816] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
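Unlike digest_clean, which only counted crc32c operations, digest_error makes them fail: the target reroutes the crc32c opcode into the accel error-injection module and arms it, and the bperf run that follows completes with data digest errors. In outline, these are the same three RPCs the trace below issues against the target's default /var/tmp/spdk.sock:

  # route every crc32c operation through the error-injection accel module
  scripts/rpc.py accel_assign_opc -o crc32c -m error
  # stay quiet while the bperf controller attaches with --ddgst...
  scripts/rpc.py accel_error_inject_error -o crc32c -t disable
  # ...then corrupt 256 crc32c results; the initiator logs them as
  # "data digest error" / COMMAND TRANSIENT TRANSPORT ERROR completions
  scripts/rpc.py accel_error_inject_error -o crc32c -t corrupt -i 256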
00:27:55.625 [2024-07-15 16:09:24.314833] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:56.194 16:09:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:56.194 16:09:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:27:56.194 16:09:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:56.194 16:09:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@728 -- # xtrace_disable 00:27:56.194 16:09:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:56.194 16:09:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:56.194 16:09:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:27:56.194 16:09:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:56.194 16:09:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:56.194 [2024-07-15 16:09:25.024890] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:27:56.194 16:09:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:56.194 16:09:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:27:56.194 16:09:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:27:56.194 16:09:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:56.194 16:09:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:56.194 null0 00:27:56.194 [2024-07-15 16:09:25.113858] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:56.454 [2024-07-15 16:09:25.138033] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:56.454 16:09:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:56.454 16:09:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:27:56.454 16:09:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:27:56.454 16:09:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:27:56.454 16:09:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:27:56.454 16:09:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:27:56.454 16:09:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=3912435 00:27:56.454 16:09:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 3912435 /var/tmp/bperf.sock 00:27:56.454 16:09:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:27:56.454 16:09:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 3912435 ']' 00:27:56.454 16:09:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:27:56.454 16:09:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local 
max_retries=100 00:27:56.454 16:09:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:27:56.454 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:27:56.454 16:09:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:56.454 16:09:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:56.454 [2024-07-15 16:09:25.188455] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 00:27:56.454 [2024-07-15 16:09:25.188498] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3912435 ] 00:27:56.454 EAL: No free 2048 kB hugepages reported on node 1 00:27:56.454 [2024-07-15 16:09:25.242845] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:56.454 [2024-07-15 16:09:25.322402] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:27:57.389 16:09:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:57.389 16:09:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:27:57.389 16:09:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:27:57.389 16:09:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:27:57.389 16:09:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:27:57.389 16:09:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:57.389 16:09:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:57.389 16:09:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:57.389 16:09:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:57.389 16:09:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:57.645 nvme0n1 00:27:57.645 16:09:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:27:57.645 16:09:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:57.645 16:09:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:57.646 16:09:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:57.646 16:09:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:27:57.646 16:09:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:27:57.904 Running I/O for 2 seconds... 00:27:57.904 [2024-07-15 16:09:26.662639] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1152f20) 00:27:57.904 [2024-07-15 16:09:26.662671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:25253 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.904 [2024-07-15 16:09:26.662682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:57.904 [2024-07-15 16:09:26.672601] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1152f20) 00:27:57.904 [2024-07-15 16:09:26.672626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:18490 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.904 [2024-07-15 16:09:26.672636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:57.904 [2024-07-15 16:09:26.683030] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1152f20) 00:27:57.904 [2024-07-15 16:09:26.683053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:8078 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.904 [2024-07-15 16:09:26.683062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:57.904 [2024-07-15 16:09:26.692612] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1152f20) 00:27:57.904 [2024-07-15 16:09:26.692633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:9623 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.904 [2024-07-15 16:09:26.692642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:57.904 [2024-07-15 16:09:26.701513] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1152f20) 00:27:57.904 [2024-07-15 16:09:26.701534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16270 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.904 [2024-07-15 16:09:26.701543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:57.904 [2024-07-15 16:09:26.711337] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1152f20) 00:27:57.904 [2024-07-15 16:09:26.711358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:5061 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.904 [2024-07-15 16:09:26.711368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:57.904 [2024-07-15 16:09:26.721660] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1152f20) 00:27:57.904 [2024-07-15 16:09:26.721681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:23069 len:1 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:27:57.904 [2024-07-15 16:09:26.721688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:57.904 [2024-07-15 16:09:26.731790] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1152f20) 00:27:57.904 [2024-07-15 16:09:26.731810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:18424 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.904 [2024-07-15 16:09:26.731818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:57.904 [2024-07-15 16:09:26.739781] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1152f20) 00:27:57.904 [2024-07-15 16:09:26.739802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:8831 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.904 [2024-07-15 16:09:26.739810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:57.904 [2024-07-15 16:09:26.750130] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1152f20) 00:27:57.904 [2024-07-15 16:09:26.750150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:4395 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.904 [2024-07-15 16:09:26.750158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:57.904 [2024-07-15 16:09:26.761140] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1152f20) 00:27:57.904 [2024-07-15 16:09:26.761162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:22889 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.904 [2024-07-15 16:09:26.761171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:57.904 [2024-07-15 16:09:26.769904] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1152f20) 00:27:57.904 [2024-07-15 16:09:26.769925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:4927 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.904 [2024-07-15 16:09:26.769933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:57.905 [2024-07-15 16:09:26.779971] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1152f20) 00:27:57.905 [2024-07-15 16:09:26.779993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:25021 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.905 [2024-07-15 16:09:26.780005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:57.905 [2024-07-15 16:09:26.789783] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1152f20) 00:27:57.905 [2024-07-15 16:09:26.789804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:65 nsid:1 lba:7483 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.905 [2024-07-15 16:09:26.789812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:57.905 [2024-07-15 16:09:26.798869] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1152f20) 00:27:57.905 [2024-07-15 16:09:26.798891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:6191 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.905 [2024-07-15 16:09:26.798900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:57.905 [2024-07-15 16:09:26.808965] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1152f20) 00:27:57.905 [2024-07-15 16:09:26.808985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:13940 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.905 [2024-07-15 16:09:26.808994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:57.905 [2024-07-15 16:09:26.818487] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1152f20) 00:27:57.905 [2024-07-15 16:09:26.818507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:15870 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.905 [2024-07-15 16:09:26.818516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:57.905 [2024-07-15 16:09:26.828382] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1152f20) 00:27:57.905 [2024-07-15 16:09:26.828402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15035 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.905 [2024-07-15 16:09:26.828410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:57.905 [2024-07-15 16:09:26.836608] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1152f20) 00:27:57.905 [2024-07-15 16:09:26.836629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:4764 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.905 [2024-07-15 16:09:26.836637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:58.164 [2024-07-15 16:09:26.847507] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1152f20) 00:27:58.164 [2024-07-15 16:09:26.847528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:2449 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.164 [2024-07-15 16:09:26.847536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:58.164 [2024-07-15 16:09:26.859908] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1152f20) 00:27:58.164 [2024-07-15 16:09:26.859930] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:3174 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.164 [2024-07-15 16:09:26.859937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:58.164 [2024-07-15 16:09:26.870019] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1152f20) 00:27:58.164 [2024-07-15 16:09:26.870043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:4184 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.164 [2024-07-15 16:09:26.870051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:58.164 [2024-07-15 16:09:26.877852] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1152f20) 00:27:58.164 [2024-07-15 16:09:26.877872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:5197 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.164 [2024-07-15 16:09:26.877880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:58.164 [2024-07-15 16:09:26.888158] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1152f20) 00:27:58.164 [2024-07-15 16:09:26.888178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:12160 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.164 [2024-07-15 16:09:26.888186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:58.164 [2024-07-15 16:09:26.898924] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1152f20) 00:27:58.164 [2024-07-15 16:09:26.898945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:24716 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.164 [2024-07-15 16:09:26.898954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:58.164 [2024-07-15 16:09:26.907275] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1152f20) 00:27:58.164 [2024-07-15 16:09:26.907296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10934 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.164 [2024-07-15 16:09:26.907305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:58.164 [2024-07-15 16:09:26.918331] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1152f20) 00:27:58.164 [2024-07-15 16:09:26.918353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:8785 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.164 [2024-07-15 16:09:26.918361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:58.164 [2024-07-15 16:09:26.927777] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1152f20) 
00:27:58.164 [2024-07-15 16:09:26.927798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:3098 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:58.164 [2024-07-15 16:09:26.927807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:58.164 [2024-07-15 16:09:26.937419] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1152f20)
00:27:58.164 [2024-07-15 16:09:26.937440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9845 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:58.164 [2024-07-15 16:09:26.937448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:58.164 [2024-07-15 16:09:26.947737] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1152f20)
00:27:58.164 [2024-07-15 16:09:26.947758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:21094 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:58.164 [2024-07-15 16:09:26.947765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:58.164 [2024-07-15 16:09:26.956658] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1152f20)
00:27:58.164 [2024-07-15 16:09:26.956678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:13604 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:58.165 [2024-07-15 16:09:26.956686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:58.165 [2024-07-15 16:09:26.966006] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1152f20)
00:27:58.165 [2024-07-15 16:09:26.966027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:3624 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:58.165 [2024-07-15 16:09:26.966035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:58.165 [2024-07-15 16:09:26.974769] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1152f20)
00:27:58.165 [2024-07-15 16:09:26.974789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:1174 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:58.165 [2024-07-15 16:09:26.974798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:58.165 [2024-07-15 16:09:26.985795] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1152f20)
00:27:58.165 [2024-07-15 16:09:26.985816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:16873 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:58.165 [2024-07-15 16:09:26.985824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:58.165 [2024-07-15 16:09:26.997890] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1152f20)
00:27:58.165 [2024-07-15 16:09:26.997911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:22197 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:58.165 [2024-07-15 16:09:26.997919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:58.165 [2024-07-15 16:09:27.006248] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1152f20)
00:27:58.165 [2024-07-15 16:09:27.006268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:4452 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:58.165 [2024-07-15 16:09:27.006277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:58.165 [2024-07-15 16:09:27.017164] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1152f20)
00:27:58.165 [2024-07-15 16:09:27.017184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:14805 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:58.165 [2024-07-15 16:09:27.017192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:58.165 [2024-07-15 16:09:27.026678] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1152f20)
00:27:58.165 [2024-07-15 16:09:27.026699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:17329 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:58.165 [2024-07-15 16:09:27.026707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:58.165 [2024-07-15 16:09:27.035818] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1152f20)
00:27:58.165 [2024-07-15 16:09:27.035842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:18529 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:58.165 [2024-07-15 16:09:27.035850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:58.165 [2024-07-15 16:09:27.045944] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1152f20)
00:27:58.165 [2024-07-15 16:09:27.045964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:14457 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:58.165 [2024-07-15 16:09:27.045972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:58.165 [2024-07-15 16:09:27.054879] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1152f20)
00:27:58.165 [2024-07-15 16:09:27.054899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:11144 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:58.165 [2024-07-15 16:09:27.054907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:58.165 [2024-07-15 16:09:27.063949] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1152f20)
00:27:58.165 [2024-07-15 16:09:27.063969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:9248 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:58.165 [2024-07-15 16:09:27.063977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:58.165 [2024-07-15 16:09:27.074662] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1152f20)
00:27:58.165 [2024-07-15 16:09:27.074682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:19700 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:58.165 [2024-07-15 16:09:27.074690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:58.165 [2024-07-15 16:09:27.083004] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1152f20)
00:27:58.165 [2024-07-15 16:09:27.083025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:7933 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:58.165 [2024-07-15 16:09:27.083033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:58.165 [2024-07-15 16:09:27.093588] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1152f20)
00:27:58.165 [2024-07-15 16:09:27.093609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:10100 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:58.165 [2024-07-15 16:09:27.093617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:58.424 [2024-07-15 16:09:27.102026] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1152f20)
00:27:58.424 [2024-07-15 16:09:27.102047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:2950 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:58.424 [2024-07-15 16:09:27.102055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:58.424 [2024-07-15 16:09:27.113927] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1152f20)
00:27:58.424 [2024-07-15 16:09:27.113948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:2733 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:58.424 [2024-07-15 16:09:27.113956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:58.424 [2024-07-15 16:09:27.126423] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1152f20)
00:27:58.424 [2024-07-15 16:09:27.126443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:893 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:58.424 [2024-07-15 16:09:27.126451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:58.424 [2024-07-15 16:09:27.138792] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1152f20)
00:27:58.424 [2024-07-15 16:09:27.138813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:16846 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:58.424 [2024-07-15 16:09:27.138821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:58.424 [2024-07-15 16:09:27.149945] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1152f20)
00:27:58.424 [2024-07-15 16:09:27.149965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:5158 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:58.424 [2024-07-15 16:09:27.149973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:58.424 [2024-07-15 16:09:27.158702] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1152f20)
00:27:58.424 [2024-07-15 16:09:27.158722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:3387 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:58.424 [2024-07-15 16:09:27.158730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:58.424 [2024-07-15 16:09:27.170514] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1152f20)
00:27:58.424 [2024-07-15 16:09:27.170535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:10609 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:58.424 [2024-07-15 16:09:27.170543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:58.424 [2024-07-15 16:09:27.181815] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1152f20)
00:27:58.424 [2024-07-15 16:09:27.181835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:12092 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:58.424 [2024-07-15 16:09:27.181844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:58.424 [2024-07-15 16:09:27.191638] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1152f20)
00:27:58.424 [2024-07-15 16:09:27.191658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:5168 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:58.424 [2024-07-15 16:09:27.191666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:58.424 [2024-07-15 16:09:27.200541] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1152f20)
00:27:58.424 [2024-07-15 16:09:27.200562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:18360 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:58.424 [2024-07-15 16:09:27.200570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:58.424 [2024-07-15 16:09:27.209823] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1152f20)
00:27:58.424 [2024-07-15 16:09:27.209843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:11074 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:58.424 [2024-07-15 16:09:27.209856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:58.424 [2024-07-15 16:09:27.219647] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1152f20)
00:27:58.424 [2024-07-15 16:09:27.219668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:8916 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:58.424 [2024-07-15 16:09:27.219676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:58.424 [2024-07-15 16:09:27.229624] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1152f20)
00:27:58.424 [2024-07-15 16:09:27.229644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:18375 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:58.424 [2024-07-15 16:09:27.229652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:58.424 [2024-07-15 16:09:27.237470] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1152f20)
00:27:58.424 [2024-07-15 16:09:27.237490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:15027 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:58.424 [2024-07-15 16:09:27.237498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:58.424 [2024-07-15 16:09:27.248316] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1152f20)
00:27:58.424 [2024-07-15 16:09:27.248336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:8267 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:58.424 [2024-07-15 16:09:27.248345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:58.424 [2024-07-15 16:09:27.257771] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1152f20)
00:27:58.424 [2024-07-15 16:09:27.257790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:14363 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:58.424 [2024-07-15 16:09:27.257798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:58.424 [2024-07-15 16:09:27.266748] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1152f20)
00:27:58.424 [2024-07-15 16:09:27.266769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:1788 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:58.424 [2024-07-15 16:09:27.266777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:58.424 [2024-07-15 16:09:27.275840] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1152f20)
00:27:58.424 [2024-07-15 16:09:27.275860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:6698 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:58.424 [2024-07-15 16:09:27.275868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:58.424 [2024-07-15 16:09:27.285872] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1152f20)
00:27:58.424 [2024-07-15 16:09:27.285892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:21205 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:58.424 [2024-07-15 16:09:27.285901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:58.424 [2024-07-15 16:09:27.294296] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1152f20)
00:27:58.424 [2024-07-15 16:09:27.294320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:20742 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:58.424 [2024-07-15 16:09:27.294329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:58.424 [2024-07-15 16:09:27.305306] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1152f20)
00:27:58.424 [2024-07-15 16:09:27.305327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:25552 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:58.424 [2024-07-15 16:09:27.305335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:58.424 [2024-07-15 16:09:27.314469] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1152f20)
00:27:58.424 [2024-07-15 16:09:27.314490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:9099 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:58.424 [2024-07-15 16:09:27.314498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:58.425 [2024-07-15 16:09:27.326986] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1152f20)
00:27:58.425 [2024-07-15 16:09:27.327006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17944 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:58.425 [2024-07-15 16:09:27.327014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:58.425 [2024-07-15 16:09:27.338007] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1152f20)
00:27:58.425 [2024-07-15 16:09:27.338028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:6139 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:58.425 [2024-07-15 16:09:27.338036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:58.425 [2024-07-15 16:09:27.346999] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1152f20)
00:27:58.425 [2024-07-15 16:09:27.347021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20533 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:58.425 [2024-07-15 16:09:27.347029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:58.683 [2024-07-15 16:09:27.358624] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1152f20)
00:27:58.683 [2024-07-15 16:09:27.358645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:25522 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:58.683 [2024-07-15 16:09:27.358653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:58.683 [2024-07-15 16:09:27.371187] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1152f20)
00:27:58.683 [2024-07-15 16:09:27.371207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:8255 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:58.683 [2024-07-15 16:09:27.371215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:58.683 [2024-07-15 16:09:27.379812] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1152f20)
00:27:58.683 [2024-07-15 16:09:27.379833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:14800 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:58.683 [2024-07-15 16:09:27.379841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:58.683 [2024-07-15 16:09:27.390102] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1152f20)
00:27:58.683 [2024-07-15 16:09:27.390123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:17421 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:58.683 [2024-07-15 16:09:27.390131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:58.683 [2024-07-15 16:09:27.400454] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1152f20)
00:27:58.683 [2024-07-15 16:09:27.400475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:14574 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:58.683 [2024-07-15 16:09:27.400483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:58.684 [2024-07-15 16:09:27.409235] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1152f20)
00:27:58.684 [2024-07-15 16:09:27.409255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:9315 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:58.684 [2024-07-15 16:09:27.409263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:58.684 [2024-07-15 16:09:27.422152] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1152f20)
00:27:58.684 [2024-07-15 16:09:27.422173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:1994 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:58.684 [2024-07-15 16:09:27.422181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:58.684 [2024-07-15 16:09:27.434236] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1152f20)
00:27:58.684 [2024-07-15 16:09:27.434257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:17192 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:58.684 [2024-07-15 16:09:27.434265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:58.684 [2024-07-15 16:09:27.446277] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1152f20)
00:27:58.684 [2024-07-15 16:09:27.446298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:6395 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:58.684 [2024-07-15 16:09:27.446306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:58.684 [2024-07-15 16:09:27.454744] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1152f20)
00:27:58.684 [2024-07-15 16:09:27.454765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:13453 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:58.684 [2024-07-15 16:09:27.454773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:58.684 [2024-07-15 16:09:27.465347] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1152f20)
00:27:58.684 [2024-07-15 16:09:27.465367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:24991 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:58.684 [2024-07-15 16:09:27.465375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:58.684 [2024-07-15 16:09:27.473443] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1152f20)
00:27:58.684 [2024-07-15 16:09:27.473463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:4936 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:58.684 [2024-07-15 16:09:27.473474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:58.684 [2024-07-15 16:09:27.484063] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1152f20)
00:27:58.684 [2024-07-15 16:09:27.484084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:18183 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:58.684 [2024-07-15 16:09:27.484092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:58.684 [2024-07-15 16:09:27.493753] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1152f20)
00:27:58.684 [2024-07-15 16:09:27.493772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:13240 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:58.684 [2024-07-15 16:09:27.493781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:58.684 [2024-07-15 16:09:27.502498] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1152f20)
00:27:58.684 [2024-07-15 16:09:27.502518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:16242 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:58.684 [2024-07-15 16:09:27.502526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:58.684 [2024-07-15 16:09:27.513861] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1152f20)
00:27:58.684 [2024-07-15 16:09:27.513881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:5664 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:58.684 [2024-07-15 16:09:27.513889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:58.684 [2024-07-15 16:09:27.523201] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1152f20)
00:27:58.684 [2024-07-15 16:09:27.523222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:432 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:58.684 [2024-07-15 16:09:27.523235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:58.684 [2024-07-15 16:09:27.533806] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1152f20)
00:27:58.684 [2024-07-15 16:09:27.533826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:22477 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:58.684 [2024-07-15 16:09:27.533834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:58.684 [2024-07-15 16:09:27.543067] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1152f20)
00:27:58.684 [2024-07-15 16:09:27.543087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:987 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:58.684 [2024-07-15 16:09:27.543096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:58.684 [2024-07-15 16:09:27.552927] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1152f20)
00:27:58.684 [2024-07-15 16:09:27.552947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:25423 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:58.684 [2024-07-15 16:09:27.552955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:58.684 [2024-07-15 16:09:27.561114] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1152f20)
00:27:58.684 [2024-07-15 16:09:27.561135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:1284 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:58.684 [2024-07-15 16:09:27.561143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:58.684 [2024-07-15 16:09:27.571087] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1152f20)
00:27:58.684 [2024-07-15 16:09:27.571108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:15269 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:58.684 [2024-07-15 16:09:27.571116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:58.684 [2024-07-15 16:09:27.580295] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1152f20)
00:27:58.684 [2024-07-15 16:09:27.580316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:10721 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:58.684 [2024-07-15 16:09:27.580324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:58.684 [2024-07-15 16:09:27.589953] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1152f20)
00:27:58.684 [2024-07-15 16:09:27.589972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:9833 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:58.684 [2024-07-15 16:09:27.589980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:58.684 [2024-07-15 16:09:27.598754] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1152f20)
00:27:58.684 [2024-07-15 16:09:27.598775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:127 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:58.684 [2024-07-15 16:09:27.598783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:58.684 [2024-07-15 16:09:27.609534] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1152f20)
00:27:58.684 [2024-07-15 16:09:27.609554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:1289 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:58.684 [2024-07-15 16:09:27.609562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:58.943 [2024-07-15 16:09:27.618210] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1152f20)
00:27:58.943 [2024-07-15 16:09:27.618254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:14070 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:58.943 [2024-07-15 16:09:27.618263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:58.943 [2024-07-15 16:09:27.629280] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1152f20)
00:27:58.943 [2024-07-15 16:09:27.629299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:15598 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:58.943 [2024-07-15 16:09:27.629307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:58.943 [2024-07-15 16:09:27.637835] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1152f20)
00:27:58.943 [2024-07-15 16:09:27.637855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:15632 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:58.943 [2024-07-15 16:09:27.637866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:58.943 [2024-07-15 16:09:27.647978] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1152f20)
00:27:58.943 [2024-07-15 16:09:27.647998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:15027 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:58.943 [2024-07-15 16:09:27.648006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:58.943 [2024-07-15 16:09:27.657818] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1152f20)
00:27:58.943 [2024-07-15 16:09:27.657838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:3328 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:58.943 [2024-07-15 16:09:27.657846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:58.943 [2024-07-15 16:09:27.667146] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1152f20)
00:27:58.943 [2024-07-15 16:09:27.667166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:6224 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:58.943 [2024-07-15 16:09:27.667174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:58.943 [2024-07-15 16:09:27.676394] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1152f20)
00:27:58.943 [2024-07-15 16:09:27.676414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:25363 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:58.943 [2024-07-15 16:09:27.676422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:58.943 [2024-07-15 16:09:27.686372] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1152f20)
00:27:58.943 [2024-07-15 16:09:27.686393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:21069 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:58.943 [2024-07-15 16:09:27.686402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:58.943 [2024-07-15 16:09:27.695832] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1152f20)
00:27:58.943 [2024-07-15 16:09:27.695853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:23639 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:58.943 [2024-07-15 16:09:27.695862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:58.943 [2024-07-15 16:09:27.705774] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1152f20)
00:27:58.943 [2024-07-15 16:09:27.705793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:2031 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:58.943 [2024-07-15 16:09:27.705801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:58.943 [2024-07-15 16:09:27.715145] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1152f20)
00:27:58.943 [2024-07-15 16:09:27.715166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:23638 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:58.943 [2024-07-15 16:09:27.715175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:58.943 [2024-07-15 16:09:27.723797] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1152f20)
00:27:58.943 [2024-07-15 16:09:27.723822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:24175 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:58.943 [2024-07-15 16:09:27.723830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:58.943 [2024-07-15 16:09:27.733309] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1152f20)
00:27:58.943 [2024-07-15 16:09:27.733331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:19903 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:58.943 [2024-07-15 16:09:27.733339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:58.943 [2024-07-15 16:09:27.744606] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1152f20)
00:27:58.943 [2024-07-15 16:09:27.744627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:10016 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:58.943 [2024-07-15 16:09:27.744636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:58.943 [2024-07-15 16:09:27.753444] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1152f20)
00:27:58.943 [2024-07-15 16:09:27.753464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:8728 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:58.943 [2024-07-15 16:09:27.753472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:58.943 [2024-07-15 16:09:27.764975] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1152f20)
00:27:58.943 [2024-07-15 16:09:27.764996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:11429 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:58.943 [2024-07-15 16:09:27.765004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:58.943 [2024-07-15 16:09:27.775970] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1152f20)
00:27:58.943 [2024-07-15 16:09:27.775992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:8301 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:58.943 [2024-07-15 16:09:27.776001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:58.943 [2024-07-15 16:09:27.787126] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1152f20)
00:27:58.943 [2024-07-15 16:09:27.787147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:11041 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:58.943 [2024-07-15 16:09:27.787156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:58.943 [2024-07-15 16:09:27.798834] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1152f20)
00:27:58.943 [2024-07-15 16:09:27.798856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:14083 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:58.943 [2024-07-15 16:09:27.798865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:58.943 [2024-07-15 16:09:27.808229] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1152f20)
00:27:58.943 [2024-07-15 16:09:27.808250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:16231 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:58.943 [2024-07-15 16:09:27.808258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:58.943 [2024-07-15 16:09:27.820219] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1152f20)
00:27:58.943 [2024-07-15 16:09:27.820247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:21511 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:58.943 [2024-07-15 16:09:27.820255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:58.943 [2024-07-15 16:09:27.832070] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1152f20)
00:27:58.943 [2024-07-15 16:09:27.832093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:21797 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:58.943 [2024-07-15 16:09:27.832101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:58.943 [2024-07-15 16:09:27.840000] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1152f20)
00:27:58.943 [2024-07-15 16:09:27.840021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21817 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:58.943 [2024-07-15 16:09:27.840029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:58.943 [2024-07-15 16:09:27.850519] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1152f20)
00:27:58.943 [2024-07-15 16:09:27.850541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:2078 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:58.943 [2024-07-15 16:09:27.850549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:58.943 [2024-07-15 16:09:27.859684] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1152f20)
00:27:58.943 [2024-07-15 16:09:27.859706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:21155 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:58.943 [2024-07-15 16:09:27.859714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:58.943 [2024-07-15 16:09:27.869864] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1152f20)
00:27:58.943 [2024-07-15 16:09:27.869885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:19535 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:58.943 [2024-07-15 16:09:27.869894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:59.203 [2024-07-15 16:09:27.881270] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1152f20)
00:27:59.203 [2024-07-15 16:09:27.881291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:5189 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:59.203 [2024-07-15 16:09:27.881299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:59.203 [2024-07-15 16:09:27.892443] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1152f20)
00:27:59.203 [2024-07-15 16:09:27.892464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:5413 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:59.203 [2024-07-15 16:09:27.892472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:59.203 [2024-07-15 16:09:27.899966] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1152f20)
00:27:59.203 [2024-07-15 16:09:27.899986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:8901 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:59.203 [2024-07-15 16:09:27.899997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:59.203 [2024-07-15 16:09:27.909902] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1152f20)
00:27:59.203 [2024-07-15 16:09:27.909923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:8108 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:59.203 [2024-07-15 16:09:27.909931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:59.203 [2024-07-15 16:09:27.919363] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1152f20)
00:27:59.203 [2024-07-15 16:09:27.919384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:10020 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:59.203 [2024-07-15 16:09:27.919392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:59.203 [2024-07-15 16:09:27.928825] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1152f20)
00:27:59.203 [2024-07-15 16:09:27.928847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:9155 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:59.203 [2024-07-15 16:09:27.928856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:59.203 [2024-07-15 16:09:27.939644] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1152f20)
00:27:59.203 [2024-07-15 16:09:27.939666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:17764 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:59.203 [2024-07-15 16:09:27.939674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:59.203 [2024-07-15 16:09:27.948603] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1152f20)
00:27:59.203 [2024-07-15 16:09:27.948623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:6211 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:59.203 [2024-07-15 16:09:27.948632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:59.203 [2024-07-15 16:09:27.958355] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1152f20)
00:27:59.203 [2024-07-15 16:09:27.958375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:3652 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:59.203 [2024-07-15 16:09:27.958384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:59.203 [2024-07-15 16:09:27.968270] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1152f20)
00:27:59.203 [2024-07-15 16:09:27.968291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:7732 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:59.203 [2024-07-15 16:09:27.968300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:59.203 [2024-07-15 16:09:27.977385] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1152f20)
00:27:59.203 [2024-07-15 16:09:27.977406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:14309 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:59.203 [2024-07-15 16:09:27.977414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:59.203 [2024-07-15 16:09:27.986497] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1152f20)
00:27:59.203 [2024-07-15 16:09:27.986517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:7937 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:59.204 [2024-07-15 16:09:27.986525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:59.204 [2024-07-15 16:09:27.995912] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1152f20)
00:27:59.204 [2024-07-15 16:09:27.995932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:9603 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:59.204 [2024-07-15 16:09:27.995940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:59.204 [2024-07-15 16:09:28.005106] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1152f20)
00:27:59.204 [2024-07-15 16:09:28.005126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:9633 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:59.204 [2024-07-15 16:09:28.005134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:59.204 [2024-07-15 16:09:28.014637] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1152f20)
00:27:59.204 [2024-07-15 16:09:28.014658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:1565 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:59.204 [2024-07-15 16:09:28.014665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:59.204 [2024-07-15 16:09:28.024755] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1152f20)
00:27:59.204 [2024-07-15 16:09:28.024776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:22219 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:59.204 [2024-07-15 16:09:28.024784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:59.204 [2024-07-15 16:09:28.032857] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1152f20)
00:27:59.204 [2024-07-15 16:09:28.032879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:1582 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:59.204 [2024-07-15 16:09:28.032887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:59.204 [2024-07-15 16:09:28.043620] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1152f20)
00:27:59.204 [2024-07-15 16:09:28.043641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17443 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:59.204 [2024-07-15 16:09:28.043649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:59.204 [2024-07-15 16:09:28.051704] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1152f20)
00:27:59.204 [2024-07-15 16:09:28.051725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:11347 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:59.204 [2024-07-15 16:09:28.051733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:59.204 [2024-07-15 16:09:28.062136] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1152f20)
00:27:59.204 [2024-07-15 16:09:28.062157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:24176 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:59.204 [2024-07-15 16:09:28.062168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:59.204 [2024-07-15 16:09:28.072140] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1152f20)
00:27:59.204 [2024-07-15 16:09:28.072161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16643 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:59.204 [2024-07-15 16:09:28.072169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:59.204 [2024-07-15 16:09:28.080596] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1152f20)
00:27:59.204 [2024-07-15 16:09:28.080615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:8900 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:59.204 [2024-07-15 16:09:28.080623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:59.204 [2024-07-15 16:09:28.092879] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1152f20)
00:27:59.204 [2024-07-15 16:09:28.092900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:17723 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:59.204 [2024-07-15 16:09:28.092908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:59.204 [2024-07-15 16:09:28.104271] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1152f20)
00:27:59.204 [2024-07-15 16:09:28.104293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:23218 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:59.204 [2024-07-15 16:09:28.104301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:59.204 [2024-07-15 16:09:28.112166] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1152f20)
00:27:59.204 [2024-07-15 16:09:28.112187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:6192 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:59.204 [2024-07-15 16:09:28.112195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:59.204 [2024-07-15 16:09:28.122902] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1152f20)
00:27:59.204 [2024-07-15 16:09:28.122923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:11132 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:59.204 [2024-07-15 16:09:28.122932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:59.204 [2024-07-15 16:09:28.134732] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1152f20)
00:27:59.204 [2024-07-15 16:09:28.134753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:17403 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:59.204 [2024-07-15 16:09:28.134762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:59.466 [2024-07-15 16:09:28.144089] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1152f20)
00:27:59.466 [2024-07-15 16:09:28.144110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:10307 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:59.466 [2024-07-15 16:09:28.144118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:59.466 [2024-07-15 16:09:28.152864] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1152f20)
00:27:59.466 [2024-07-15 16:09:28.152889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:24912 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:59.466 [2024-07-15 16:09:28.152898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:59.466 [2024-07-15 16:09:28.164253] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1152f20)
00:27:59.466 [2024-07-15 16:09:28.164275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:23350 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:59.466 [2024-07-15 16:09:28.164283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:59.466 [2024-07-15 16:09:28.175428] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1152f20)
00:27:59.466 [2024-07-15 16:09:28.175449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:9768 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:59.466 [2024-07-15 16:09:28.175457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:59.466 [2024-07-15 16:09:28.184301] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1152f20)
00:27:59.466 [2024-07-15 16:09:28.184323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:8644 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:59.466 [2024-07-15 16:09:28.184332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:59.466 [2024-07-15 16:09:28.195202] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1152f20)
00:27:59.466 [2024-07-15 16:09:28.195223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:14195 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:59.466 [2024-07-15 16:09:28.195237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:59.466 [2024-07-15 16:09:28.204441] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1152f20)
00:27:59.466 [2024-07-15 16:09:28.204461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:21452 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:59.466 [2024-07-15 16:09:28.204470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:59.466 [2024-07-15 16:09:28.213002] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1152f20)
00:27:59.466 [2024-07-15 16:09:28.213024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:6601 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:59.466 [2024-07-15 16:09:28.213032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:59.466 [2024-07-15 16:09:28.223875] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1152f20)
00:27:59.466 [2024-07-15 16:09:28.223897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:6600 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:59.466 [2024-07-15 16:09:28.223905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:59.466 [2024-07-15 16:09:28.232921] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1152f20)
00:27:59.466 [2024-07-15 16:09:28.232941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:9495 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:59.466 [2024-07-15 16:09:28.232949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:59.466 [2024-07-15 16:09:28.241342] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1152f20)
00:27:59.466 [2024-07-15 16:09:28.241363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:8709 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:59.466 [2024-07-15 16:09:28.241371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:59.466 [2024-07-15 16:09:28.251026] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1152f20)
00:27:59.466 [2024-07-15 16:09:28.251047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:849 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:59.466 [2024-07-15 16:09:28.251056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:59.466 [2024-07-15 16:09:28.262264] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1152f20)
00:27:59.466 [2024-07-15 16:09:28.262285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:16590 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:59.466 [2024-07-15 16:09:28.262294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:59.466 [2024-07-15 16:09:28.270920] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1152f20)
00:27:59.466 [2024-07-15 16:09:28.270940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:7719 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:59.466 [2024-07-15 16:09:28.270947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:59.466 [2024-07-15 16:09:28.283657] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1152f20)
00:27:59.466 [2024-07-15 16:09:28.283677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:14237 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:59.466 [2024-07-15 16:09:28.283685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:59.466 [2024-07-15 16:09:28.293814] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1152f20)
00:27:59.466 [2024-07-15 16:09:28.293834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:20393 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:59.466 [2024-07-15 16:09:28.293842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0
m:0 dnr:0 00:27:59.466 [2024-07-15 16:09:28.302525] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1152f20) 00:27:59.466 [2024-07-15 16:09:28.302546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:19274 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.466 [2024-07-15 16:09:28.302554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:59.466 [2024-07-15 16:09:28.312114] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1152f20) 00:27:59.466 [2024-07-15 16:09:28.312135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:20993 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.466 [2024-07-15 16:09:28.312143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:59.466 [2024-07-15 16:09:28.320584] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1152f20) 00:27:59.466 [2024-07-15 16:09:28.320604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:3389 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.466 [2024-07-15 16:09:28.320616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:59.466 [2024-07-15 16:09:28.331709] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1152f20) 00:27:59.466 [2024-07-15 16:09:28.331730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:1571 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.466 [2024-07-15 16:09:28.331738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:59.466 [2024-07-15 16:09:28.343013] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1152f20) 00:27:59.466 [2024-07-15 16:09:28.343034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:19356 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.466 [2024-07-15 16:09:28.343042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:59.466 [2024-07-15 16:09:28.352074] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1152f20) 00:27:59.466 [2024-07-15 16:09:28.352094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:13685 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.466 [2024-07-15 16:09:28.352103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:59.466 [2024-07-15 16:09:28.360408] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1152f20) 00:27:59.466 [2024-07-15 16:09:28.360429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:23677 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.466 [2024-07-15 16:09:28.360437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:59.466 [2024-07-15 16:09:28.372091] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1152f20) 00:27:59.466 [2024-07-15 16:09:28.372113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:9862 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.466 [2024-07-15 16:09:28.372121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:59.466 [2024-07-15 16:09:28.384638] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1152f20) 00:27:59.466 [2024-07-15 16:09:28.384659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:4118 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.466 [2024-07-15 16:09:28.384667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:59.466 [2024-07-15 16:09:28.397000] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1152f20) 00:27:59.466 [2024-07-15 16:09:28.397021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:6333 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.466 [2024-07-15 16:09:28.397030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:59.750 [2024-07-15 16:09:28.409533] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1152f20) 00:27:59.750 [2024-07-15 16:09:28.409553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:1513 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.750 [2024-07-15 16:09:28.409561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:59.750 [2024-07-15 16:09:28.418153] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1152f20) 00:27:59.750 [2024-07-15 16:09:28.418174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:7944 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.750 [2024-07-15 16:09:28.418182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:59.750 [2024-07-15 16:09:28.429673] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1152f20) 00:27:59.750 [2024-07-15 16:09:28.429695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:22777 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.750 [2024-07-15 16:09:28.429703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:59.751 [2024-07-15 16:09:28.437680] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1152f20) 00:27:59.751 [2024-07-15 16:09:28.437701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:20873 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.751 [2024-07-15 16:09:28.437709] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:59.751 [2024-07-15 16:09:28.449405] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1152f20) 00:27:59.751 [2024-07-15 16:09:28.449425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:18168 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.751 [2024-07-15 16:09:28.449434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:59.751 [2024-07-15 16:09:28.461819] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1152f20) 00:27:59.751 [2024-07-15 16:09:28.461840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:21070 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.751 [2024-07-15 16:09:28.461848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:59.751 [2024-07-15 16:09:28.473093] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1152f20) 00:27:59.751 [2024-07-15 16:09:28.473113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:3703 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.751 [2024-07-15 16:09:28.473121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:59.751 [2024-07-15 16:09:28.482593] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1152f20) 00:27:59.751 [2024-07-15 16:09:28.482614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1271 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.751 [2024-07-15 16:09:28.482622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:59.751 [2024-07-15 16:09:28.494161] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1152f20) 00:27:59.751 [2024-07-15 16:09:28.494181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:15382 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.751 [2024-07-15 16:09:28.494189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:59.751 [2024-07-15 16:09:28.506444] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1152f20) 00:27:59.751 [2024-07-15 16:09:28.506465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:6509 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.751 [2024-07-15 16:09:28.506476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:59.751 [2024-07-15 16:09:28.518655] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1152f20) 00:27:59.751 [2024-07-15 16:09:28.518676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:5809 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:27:59.751 [2024-07-15 16:09:28.518684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:59.751 [2024-07-15 16:09:28.526703] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1152f20) 00:27:59.751 [2024-07-15 16:09:28.526723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:10641 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.751 [2024-07-15 16:09:28.526731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:59.751 [2024-07-15 16:09:28.537278] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1152f20) 00:27:59.751 [2024-07-15 16:09:28.537299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:17049 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.751 [2024-07-15 16:09:28.537308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:59.751 [2024-07-15 16:09:28.547323] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1152f20) 00:27:59.751 [2024-07-15 16:09:28.547344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:5611 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.751 [2024-07-15 16:09:28.547352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:59.751 [2024-07-15 16:09:28.559941] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1152f20) 00:27:59.751 [2024-07-15 16:09:28.559962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:954 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.751 [2024-07-15 16:09:28.559970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:59.751 [2024-07-15 16:09:28.568363] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1152f20) 00:27:59.751 [2024-07-15 16:09:28.568383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:3417 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.751 [2024-07-15 16:09:28.568391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:59.751 [2024-07-15 16:09:28.578319] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1152f20) 00:27:59.751 [2024-07-15 16:09:28.578339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:2766 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.751 [2024-07-15 16:09:28.578347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:59.751 [2024-07-15 16:09:28.586755] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1152f20) 00:27:59.751 [2024-07-15 16:09:28.586775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 
lba:20414 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:59.751 [2024-07-15 16:09:28.586783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:59.751 [2024-07-15 16:09:28.598361] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1152f20)
00:27:59.751 [2024-07-15 16:09:28.598385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:20881 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:59.751 [2024-07-15 16:09:28.598393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:59.751 [2024-07-15 16:09:28.609622] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1152f20)
00:27:59.751 [2024-07-15 16:09:28.609643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:25528 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:59.751 [2024-07-15 16:09:28.609651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:59.751 [2024-07-15 16:09:28.618021] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1152f20)
00:27:59.751 [2024-07-15 16:09:28.618041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:19949 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:59.751 [2024-07-15 16:09:28.618049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:59.751 [2024-07-15 16:09:28.631470] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1152f20)
00:27:59.751 [2024-07-15 16:09:28.631491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:10059 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:59.751 [2024-07-15 16:09:28.631499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:59.751 [2024-07-15 16:09:28.639999] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1152f20)
00:27:59.751 [2024-07-15 16:09:28.640020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:9257 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:59.751 [2024-07-15 16:09:28.640028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:59.751 [2024-07-15 16:09:28.652181] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1152f20)
00:27:59.751 [2024-07-15 16:09:28.652202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:10128 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:59.751 [2024-07-15 16:09:28.652210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:59.751
00:27:59.751 Latency(us)
00:27:59.751 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:27:59.752 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096)
00:27:59.752 nvme0n1 : 2.00 25402.00 99.23 0.00 0.00 5034.36 2094.30 16526.47
00:27:59.752 ===================================================================================================================
00:27:59.752 Total : 25402.00 99.23 0.00 0.00 5034.36 2094.30 16526.47
00:27:59.752 0
00:28:00.024 16:09:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:28:00.024 16:09:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:28:00.024 16:09:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:28:00.024 | .driver_specific
00:28:00.024 | .nvme_error
00:28:00.024 | .status_code
00:28:00.024 | .command_transient_transport_error'
00:28:00.024 16:09:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:28:00.024 16:09:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 199 > 0 ))
00:28:00.024 16:09:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 3912435
00:28:00.024 16:09:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 3912435 ']'
00:28:00.024 16:09:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 3912435
00:28:00.024 16:09:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname
00:28:00.024 16:09:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:28:00.024 16:09:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3912435
00:28:00.024 16:09:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1
00:28:00.024 16:09:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
00:28:00.024 16:09:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3912435'
00:28:00.024 killing process with pid 3912435
00:28:00.024 16:09:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 3912435
00:28:00.024 Received shutdown signal, test time was about 2.000000 seconds
00:28:00.024
00:28:00.024 Latency(us)
00:28:00.024 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:28:00.024 ===================================================================================================================
00:28:00.024 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:28:00.024 16:09:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 3912435
00:28:00.282 16:09:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16
00:28:00.282 16:09:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:28:00.282 16:09:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread
00:28:00.282 16:09:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
00:28:00.282 16:09:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
00:28:00.282 16:09:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=3913041
00:28:00.282 16:09:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 3913041 /var/tmp/bperf.sock
00:28:00.282 16:09:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z
00:28:00.282 16:09:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 3913041 ']'
00:28:00.282 16:09:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock
00:28:00.282 16:09:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100
00:28:00.282 16:09:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:28:00.282 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:28:00.282 16:09:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable
00:28:00.282 16:09:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:28:00.282 [2024-07-15 16:09:29.128653] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization...
00:28:00.282 [2024-07-15 16:09:29.128703] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3913041 ]
00:28:00.282 I/O size of 131072 is greater than zero copy threshold (65536).
00:28:00.282 Zero copy mechanism will not be used.
00:28:00.282 EAL: No free 2048 kB hugepages reported on node 1
00:28:00.282 [2024-07-15 16:09:29.183372] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:28:00.540 [2024-07-15 16:09:29.252104] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:28:01.107 16:09:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:28:01.107 16:09:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0
00:28:01.107 16:09:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:28:01.107 16:09:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:28:01.366 16:09:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:28:01.366 16:09:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:28:01.366 16:09:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:28:01.366 16:09:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:28:01.366 16:09:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:28:01.366 16:09:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:28:01.624 nvme0n1
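With the controller reattached with --ddgst, the harness is set up for the second error run; the first run's transient-error count was 199, so its (( 199 > 0 )) check passed. A minimal bash sketch of the RPC flow traced above and immediately below, assuming an SPDK checkout and the socket/bdev names shown in this log (rpc.py path shortened from the full path in the trace):

  rpc='/path/to/spdk/scripts/rpc.py -s /var/tmp/bperf.sock'   # full path appears in the digest.sh@18 lines above
  $rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1   # keep per-status-code NVMe error counters; retry failed I/O indefinitely
  $rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0   # --ddgst enables the NVMe/TCP data digest (CRC-32C)
  $rpc accel_error_inject_error -o crc32c -t corrupt -i 32   # corrupt crc32c results, with the same flags as the trace below
  $rpc bdev_get_iostat -b nvme0n1 | jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error'   # after the timed run, read back the transient-error count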
00:28:01.624 16:09:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:28:01.624 16:09:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:01.624 16:09:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:01.883 16:09:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:01.883 16:09:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:28:01.883 16:09:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:01.883 I/O size of 131072 is greater than zero copy threshold (65536). 00:28:01.883 Zero copy mechanism will not be used. 00:28:01.883 Running I/O for 2 seconds... 00:28:01.883 [2024-07-15 16:09:30.661832] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aac0b0) 00:28:01.883 [2024-07-15 16:09:30.661868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.883 [2024-07-15 16:09:30.661878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:01.883 [2024-07-15 16:09:30.671263] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aac0b0) 00:28:01.883 [2024-07-15 16:09:30.671290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.883 [2024-07-15 16:09:30.671299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:01.883 [2024-07-15 16:09:30.680518] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aac0b0) 00:28:01.883 [2024-07-15 16:09:30.680542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.883 [2024-07-15 16:09:30.680551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:01.883 [2024-07-15 16:09:30.689725] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aac0b0) 00:28:01.883 [2024-07-15 16:09:30.689750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.883 [2024-07-15 16:09:30.689763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:01.883 [2024-07-15 16:09:30.699419] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aac0b0) 00:28:01.883 [2024-07-15 16:09:30.699443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.883 [2024-07-15 16:09:30.699452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 
00:28:01.883 [2024-07-15 16:09:30.708489] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aac0b0) 00:28:01.883 [2024-07-15 16:09:30.708514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.883 [2024-07-15 16:09:30.708523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:01.883 [2024-07-15 16:09:30.717726] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aac0b0) 00:28:01.883 [2024-07-15 16:09:30.717749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.883 [2024-07-15 16:09:30.717758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:01.883 [2024-07-15 16:09:30.725656] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aac0b0) 00:28:01.883 [2024-07-15 16:09:30.725680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.883 [2024-07-15 16:09:30.725689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:01.883 [2024-07-15 16:09:30.734527] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aac0b0) 00:28:01.883 [2024-07-15 16:09:30.734552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.883 [2024-07-15 16:09:30.734561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:01.883 [2024-07-15 16:09:30.744212] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aac0b0) 00:28:01.883 [2024-07-15 16:09:30.744242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.883 [2024-07-15 16:09:30.744250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:01.883 [2024-07-15 16:09:30.754109] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aac0b0) 00:28:01.883 [2024-07-15 16:09:30.754132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.883 [2024-07-15 16:09:30.754141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:01.884 [2024-07-15 16:09:30.764021] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aac0b0) 00:28:01.884 [2024-07-15 16:09:30.764045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.884 [2024-07-15 16:09:30.764053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:01.884 [2024-07-15 16:09:30.773906] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aac0b0) 00:28:01.884 [2024-07-15 16:09:30.773933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.884 [2024-07-15 16:09:30.773941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:01.884 [2024-07-15 16:09:30.784711] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aac0b0) 00:28:01.884 [2024-07-15 16:09:30.784735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.884 [2024-07-15 16:09:30.784744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:01.884 [2024-07-15 16:09:30.794297] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aac0b0) 00:28:01.884 [2024-07-15 16:09:30.794320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.884 [2024-07-15 16:09:30.794328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:01.884 [2024-07-15 16:09:30.803697] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aac0b0) 00:28:01.884 [2024-07-15 16:09:30.803720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.884 [2024-07-15 16:09:30.803729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:01.884 [2024-07-15 16:09:30.813605] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aac0b0) 00:28:01.884 [2024-07-15 16:09:30.813628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.884 [2024-07-15 16:09:30.813636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:02.143 [2024-07-15 16:09:30.823774] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aac0b0) 00:28:02.143 [2024-07-15 16:09:30.823797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.143 [2024-07-15 16:09:30.823805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:02.143 [2024-07-15 16:09:30.833687] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aac0b0) 00:28:02.143 [2024-07-15 16:09:30.833711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.143 [2024-07-15 16:09:30.833719] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:02.143 [2024-07-15 16:09:30.842968] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aac0b0) 00:28:02.143 [2024-07-15 16:09:30.842992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.143 [2024-07-15 16:09:30.843001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:02.143 [2024-07-15 16:09:30.852178] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aac0b0) 00:28:02.143 [2024-07-15 16:09:30.852200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.143 [2024-07-15 16:09:30.852209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:02.143 [2024-07-15 16:09:30.861900] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aac0b0) 00:28:02.143 [2024-07-15 16:09:30.861923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.143 [2024-07-15 16:09:30.861932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:02.143 [2024-07-15 16:09:30.871694] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aac0b0) 00:28:02.143 [2024-07-15 16:09:30.871717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.143 [2024-07-15 16:09:30.871726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:02.143 [2024-07-15 16:09:30.882137] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aac0b0) 00:28:02.143 [2024-07-15 16:09:30.882160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.143 [2024-07-15 16:09:30.882168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:02.143 [2024-07-15 16:09:30.891499] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aac0b0) 00:28:02.143 [2024-07-15 16:09:30.891523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.143 [2024-07-15 16:09:30.891531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:02.143 [2024-07-15 16:09:30.902312] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aac0b0) 00:28:02.144 [2024-07-15 16:09:30.902336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.144 [2024-07-15 16:09:30.902345] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:02.144 [2024-07-15 16:09:30.912151] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aac0b0) 00:28:02.144 [2024-07-15 16:09:30.912175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.144 [2024-07-15 16:09:30.912183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:02.144 [2024-07-15 16:09:30.921985] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aac0b0) 00:28:02.144 [2024-07-15 16:09:30.922008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.144 [2024-07-15 16:09:30.922017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:02.144 [2024-07-15 16:09:30.931812] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aac0b0) 00:28:02.144 [2024-07-15 16:09:30.931834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.144 [2024-07-15 16:09:30.931843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:02.144 [2024-07-15 16:09:30.941133] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aac0b0) 00:28:02.144 [2024-07-15 16:09:30.941158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.144 [2024-07-15 16:09:30.941170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:02.144 [2024-07-15 16:09:30.950752] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aac0b0) 00:28:02.144 [2024-07-15 16:09:30.950776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.144 [2024-07-15 16:09:30.950786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:02.144 [2024-07-15 16:09:30.960483] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aac0b0) 00:28:02.144 [2024-07-15 16:09:30.960507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.144 [2024-07-15 16:09:30.960515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:02.144 [2024-07-15 16:09:30.970316] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aac0b0) 00:28:02.144 [2024-07-15 16:09:30.970339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:28:02.144 [2024-07-15 16:09:30.970348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:02.144 [2024-07-15 16:09:30.979325] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aac0b0) 00:28:02.144 [2024-07-15 16:09:30.979349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.144 [2024-07-15 16:09:30.979357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:02.144 [2024-07-15 16:09:30.988641] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aac0b0) 00:28:02.144 [2024-07-15 16:09:30.988667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.144 [2024-07-15 16:09:30.988676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:02.144 [2024-07-15 16:09:30.998457] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aac0b0) 00:28:02.144 [2024-07-15 16:09:30.998481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.144 [2024-07-15 16:09:30.998489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:02.144 [2024-07-15 16:09:31.008365] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aac0b0) 00:28:02.144 [2024-07-15 16:09:31.008389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.144 [2024-07-15 16:09:31.008398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:02.144 [2024-07-15 16:09:31.017498] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aac0b0) 00:28:02.144 [2024-07-15 16:09:31.017522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.144 [2024-07-15 16:09:31.017531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:02.144 [2024-07-15 16:09:31.027903] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aac0b0) 00:28:02.144 [2024-07-15 16:09:31.027931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.144 [2024-07-15 16:09:31.027940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:02.144 [2024-07-15 16:09:31.038183] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aac0b0) 00:28:02.144 [2024-07-15 16:09:31.038207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 
lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.144 [2024-07-15 16:09:31.038215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:02.144 [2024-07-15 16:09:31.047861] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aac0b0) 00:28:02.144 [2024-07-15 16:09:31.047883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.144 [2024-07-15 16:09:31.047892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:02.144 [2024-07-15 16:09:31.057508] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aac0b0) 00:28:02.144 [2024-07-15 16:09:31.057531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.144 [2024-07-15 16:09:31.057539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:02.144 [2024-07-15 16:09:31.067323] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aac0b0) 00:28:02.144 [2024-07-15 16:09:31.067345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.144 [2024-07-15 16:09:31.067353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:02.144 [2024-07-15 16:09:31.076265] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aac0b0) 00:28:02.144 [2024-07-15 16:09:31.076288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.144 [2024-07-15 16:09:31.076297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:02.405 [2024-07-15 16:09:31.085711] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aac0b0) 00:28:02.405 [2024-07-15 16:09:31.085734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.405 [2024-07-15 16:09:31.085742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:02.405 [2024-07-15 16:09:31.095202] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aac0b0) 00:28:02.405 [2024-07-15 16:09:31.095232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.405 [2024-07-15 16:09:31.095241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:02.405 [2024-07-15 16:09:31.105062] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aac0b0) 00:28:02.405 [2024-07-15 16:09:31.105084] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.405 [2024-07-15 16:09:31.105093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:02.405 [2024-07-15 16:09:31.114513] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aac0b0) 00:28:02.405 [2024-07-15 16:09:31.114536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.405 [2024-07-15 16:09:31.114544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:02.405 [2024-07-15 16:09:31.123592] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aac0b0) 00:28:02.405 [2024-07-15 16:09:31.123615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.405 [2024-07-15 16:09:31.123623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:02.405 [2024-07-15 16:09:31.133266] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aac0b0) 00:28:02.405 [2024-07-15 16:09:31.133287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.405 [2024-07-15 16:09:31.133296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:02.405 [2024-07-15 16:09:31.143104] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aac0b0) 00:28:02.405 [2024-07-15 16:09:31.143127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.405 [2024-07-15 16:09:31.143136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:02.405 [2024-07-15 16:09:31.152989] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aac0b0) 00:28:02.405 [2024-07-15 16:09:31.153011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.405 [2024-07-15 16:09:31.153018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:02.405 [2024-07-15 16:09:31.162937] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aac0b0) 00:28:02.405 [2024-07-15 16:09:31.162959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.405 [2024-07-15 16:09:31.162967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:02.405 [2024-07-15 16:09:31.172557] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aac0b0) 
00:28:02.405 [2024-07-15 16:09:31.172586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:02.405 [2024-07-15 16:09:31.172594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:02.405 [2024-07-15 16:09:31.181071] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aac0b0)
00:28:02.405 [2024-07-15 16:09:31.181093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:02.405 [2024-07-15 16:09:31.181102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:02.405 [2024-07-15 16:09:31.189393] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aac0b0)
00:28:02.405 [2024-07-15 16:09:31.189415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:02.405 [2024-07-15 16:09:31.189427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:02.405 [2024-07-15 16:09:31.197375] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aac0b0)
00:28:02.405 [2024-07-15 16:09:31.197397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:02.405 [2024-07-15 16:09:31.197406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:02.405 [2024-07-15 16:09:31.204701] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aac0b0)
00:28:02.405 [2024-07-15 16:09:31.204723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:02.405 [2024-07-15 16:09:31.204731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:02.405 [2024-07-15 16:09:31.211634] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aac0b0)
00:28:02.405 [2024-07-15 16:09:31.211657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:02.406 [2024-07-15 16:09:31.211665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:02.406 [2024-07-15 16:09:31.219816] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aac0b0)
00:28:02.406 [2024-07-15 16:09:31.219839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:02.406 [2024-07-15 16:09:31.219848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:02.406 [2024-07-15 16:09:31.226988] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aac0b0)
00:28:02.406 [2024-07-15 16:09:31.227011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:02.406 [2024-07-15 16:09:31.227020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:02.406 [2024-07-15 16:09:31.233561] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aac0b0)
00:28:02.406 [2024-07-15 16:09:31.233583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:02.406 [2024-07-15 16:09:31.233591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:02.406 [2024-07-15 16:09:31.239688] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aac0b0)
00:28:02.406 [2024-07-15 16:09:31.239709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:02.406 [2024-07-15 16:09:31.239718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:02.406 [2024-07-15 16:09:31.245700] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aac0b0)
00:28:02.406 [2024-07-15 16:09:31.245722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:02.406 [2024-07-15 16:09:31.245730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:02.406 [2024-07-15 16:09:31.251571] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aac0b0)
00:28:02.406 [2024-07-15 16:09:31.251592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:02.406 [2024-07-15 16:09:31.251600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:02.406 [2024-07-15 16:09:31.258532] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aac0b0)
00:28:02.406 [2024-07-15 16:09:31.258553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:02.406 [2024-07-15 16:09:31.258562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:02.406 [2024-07-15 16:09:31.264407] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aac0b0)
00:28:02.406 [2024-07-15 16:09:31.264429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:02.406 [2024-07-15 16:09:31.264437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:02.406 [2024-07-15 16:09:31.270219] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aac0b0)
00:28:02.406 [2024-07-15 16:09:31.270245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:02.406 [2024-07-15 16:09:31.270253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:02.406 [2024-07-15 16:09:31.276058] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aac0b0)
00:28:02.406 [2024-07-15 16:09:31.276080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:02.406 [2024-07-15 16:09:31.276088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:02.406 [2024-07-15 16:09:31.281907] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aac0b0)
00:28:02.406 [2024-07-15 16:09:31.281929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:02.406 [2024-07-15 16:09:31.281937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:02.406 [2024-07-15 16:09:31.287763] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aac0b0)
00:28:02.406 [2024-07-15 16:09:31.287785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:02.406 [2024-07-15 16:09:31.287793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:02.406 [2024-07-15 16:09:31.293699] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aac0b0)
00:28:02.406 [2024-07-15 16:09:31.293720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:02.406 [2024-07-15 16:09:31.293728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:02.406 [2024-07-15 16:09:31.299398] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aac0b0)
00:28:02.406 [2024-07-15 16:09:31.299419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:02.406 [2024-07-15 16:09:31.299430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:02.406 [2024-07-15 16:09:31.305098] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aac0b0)
00:28:02.406 [2024-07-15 16:09:31.305120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:02.406 [2024-07-15 16:09:31.305128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:02.406 [2024-07-15 16:09:31.310851] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aac0b0)
00:28:02.406 [2024-07-15 16:09:31.310872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:02.406 [2024-07-15 16:09:31.310880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:02.406 [2024-07-15 16:09:31.316658] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aac0b0)
00:28:02.406 [2024-07-15 16:09:31.316679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:02.406 [2024-07-15 16:09:31.316687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:02.406 [2024-07-15 16:09:31.322413] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aac0b0)
00:28:02.406 [2024-07-15 16:09:31.322433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:02.406 [2024-07-15 16:09:31.322441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:02.406 [2024-07-15 16:09:31.327550] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aac0b0)
00:28:02.406 [2024-07-15 16:09:31.327572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:02.406 [2024-07-15 16:09:31.327581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:02.406 [2024-07-15 16:09:31.333117] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aac0b0)
00:28:02.406 [2024-07-15 16:09:31.333139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:02.406 [2024-07-15 16:09:31.333147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:02.406 [2024-07-15 16:09:31.338728] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aac0b0)
00:28:02.665 [2024-07-15 16:09:31.338748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:02.665 [2024-07-15 16:09:31.338757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:02.665 [2024-07-15 16:09:31.344676] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aac0b0)
00:28:02.665 [2024-07-15 16:09:31.344697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:02.665 [2024-07-15 16:09:31.344706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:02.665 [2024-07-15 16:09:31.350252] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aac0b0)
00:28:02.665 [2024-07-15 16:09:31.350277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:02.665 [2024-07-15 16:09:31.350284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:02.665 [2024-07-15 16:09:31.355858] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aac0b0)
00:28:02.665 [2024-07-15 16:09:31.355878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:02.665 [2024-07-15 16:09:31.355886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:02.665 [2024-07-15 16:09:31.361609] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aac0b0)
00:28:02.665 [2024-07-15 16:09:31.361631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:02.665 [2024-07-15 16:09:31.361639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:02.665 [2024-07-15 16:09:31.367325] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aac0b0)
00:28:02.665 [2024-07-15 16:09:31.367346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:02.665 [2024-07-15 16:09:31.367354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:02.665 [2024-07-15 16:09:31.373032] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aac0b0)
00:28:02.665 [2024-07-15 16:09:31.373054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:02.665 [2024-07-15 16:09:31.373062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:02.665 [2024-07-15 16:09:31.378711] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aac0b0)
00:28:02.665 [2024-07-15 16:09:31.378732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:02.665 [2024-07-15 16:09:31.378740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:02.665 [2024-07-15 16:09:31.384344] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aac0b0)
00:28:02.665 [2024-07-15 16:09:31.384365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:02.665 [2024-07-15 16:09:31.384373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:02.665 [2024-07-15 16:09:31.389975] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aac0b0)
00:28:02.665 [2024-07-15 16:09:31.389997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:02.665 [2024-07-15 16:09:31.390005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:02.665 [2024-07-15 16:09:31.395907] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aac0b0)
00:28:02.665 [2024-07-15 16:09:31.395928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:02.665 [2024-07-15 16:09:31.395936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:02.665 [2024-07-15 16:09:31.401725] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aac0b0)
00:28:02.665 [2024-07-15 16:09:31.401747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:02.665 [2024-07-15 16:09:31.401755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:02.665 [2024-07-15 16:09:31.407380] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aac0b0)
00:28:02.665 [2024-07-15 16:09:31.407401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:02.665 [2024-07-15 16:09:31.407409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:02.665 [2024-07-15 16:09:31.413020] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aac0b0)
00:28:02.665 [2024-07-15 16:09:31.413041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:02.665 [2024-07-15 16:09:31.413049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:02.665 [2024-07-15 16:09:31.418774] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aac0b0)
00:28:02.666 [2024-07-15 16:09:31.418796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:02.666 [2024-07-15 16:09:31.418804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:02.666 [2024-07-15 16:09:31.424520] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aac0b0)
00:28:02.666 [2024-07-15 16:09:31.424542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:02.666 [2024-07-15 16:09:31.424549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:02.666 [2024-07-15 16:09:31.430393] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aac0b0)
00:28:02.666 [2024-07-15 16:09:31.430414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:02.666 [2024-07-15 16:09:31.430422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:02.666 [2024-07-15 16:09:31.436248] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aac0b0)
00:28:02.666 [2024-07-15 16:09:31.436269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:02.666 [2024-07-15 16:09:31.436276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:02.666 [2024-07-15 16:09:31.441996] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aac0b0)
00:28:02.666 [2024-07-15 16:09:31.442017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:02.666 [2024-07-15 16:09:31.442025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:02.666 [2024-07-15 16:09:31.447770] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aac0b0)
00:28:02.666 [2024-07-15 16:09:31.447790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:02.666 [2024-07-15 16:09:31.447804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:02.666 [2024-07-15 16:09:31.453495] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aac0b0)
00:28:02.666 [2024-07-15 16:09:31.453516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:02.666 [2024-07-15 16:09:31.453524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:02.666 [2024-07-15 16:09:31.459212] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aac0b0)
00:28:02.666 [2024-07-15 16:09:31.459238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:02.666 [2024-07-15 16:09:31.459247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:02.666 [2024-07-15 16:09:31.464902] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aac0b0)
00:28:02.666 [2024-07-15 16:09:31.464923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:02.666 [2024-07-15 16:09:31.464931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:02.666 [2024-07-15 16:09:31.470757] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aac0b0)
00:28:02.666 [2024-07-15 16:09:31.470778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:02.666 [2024-07-15 16:09:31.470786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:02.666 [2024-07-15 16:09:31.476588] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aac0b0)
00:28:02.666 [2024-07-15 16:09:31.476609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:02.666 [2024-07-15 16:09:31.476617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:02.666 [2024-07-15 16:09:31.482354] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aac0b0)
00:28:02.666 [2024-07-15 16:09:31.482375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:02.666 [2024-07-15 16:09:31.482382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:02.666 [2024-07-15 16:09:31.488171] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aac0b0)
00:28:02.666 [2024-07-15 16:09:31.488191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:02.666 [2024-07-15 16:09:31.488199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:02.666 [2024-07-15 16:09:31.493902] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aac0b0)
00:28:02.666 [2024-07-15 16:09:31.493922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:02.666 [2024-07-15 16:09:31.493930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:02.666 [2024-07-15 16:09:31.499612] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aac0b0)
00:28:02.666 [2024-07-15 16:09:31.499636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:02.666 [2024-07-15 16:09:31.499645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:02.666 [2024-07-15 16:09:31.505221] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aac0b0)
00:28:02.666 [2024-07-15 16:09:31.505248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:02.666 [2024-07-15 16:09:31.505256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:02.666 [2024-07-15 16:09:31.510947] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aac0b0)
00:28:02.666 [2024-07-15 16:09:31.510968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:02.666 [2024-07-15 16:09:31.510975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:02.666 [2024-07-15 16:09:31.516620] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aac0b0)
00:28:02.666 [2024-07-15 16:09:31.516641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:02.666 [2024-07-15 16:09:31.516649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:02.666 [2024-07-15 16:09:31.522194] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aac0b0)
00:28:02.666 [2024-07-15 16:09:31.522215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:02.666 [2024-07-15 16:09:31.522223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:02.666 [2024-07-15 16:09:31.527873] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aac0b0)
00:28:02.666 [2024-07-15 16:09:31.527894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:02.666 [2024-07-15 16:09:31.527901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:02.666 [2024-07-15 16:09:31.533598] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aac0b0)
00:28:02.666 [2024-07-15 16:09:31.533619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:02.666 [2024-07-15 16:09:31.533627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:02.666 [2024-07-15 16:09:31.539295] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aac0b0)
00:28:02.666 [2024-07-15 16:09:31.539316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:02.666 [2024-07-15 16:09:31.539324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:02.666 [2024-07-15 16:09:31.544910] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aac0b0)
00:28:02.666 [2024-07-15 16:09:31.544931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:02.666 [2024-07-15 16:09:31.544939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:02.666 [2024-07-15 16:09:31.550618] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aac0b0)
00:28:02.666 [2024-07-15 16:09:31.550639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:02.666 [2024-07-15 16:09:31.550646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:02.666 [2024-07-15 16:09:31.556347] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aac0b0)
00:28:02.666 [2024-07-15 16:09:31.556367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:02.666 [2024-07-15 16:09:31.556375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:02.666 [2024-07-15 16:09:31.561807] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aac0b0)
00:28:02.666 [2024-07-15 16:09:31.561828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:02.666 [2024-07-15 16:09:31.561836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:02.666 [2024-07-15 16:09:31.567311] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aac0b0)
00:28:02.666 [2024-07-15 16:09:31.567332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:02.666 [2024-07-15 16:09:31.567341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:02.666 [2024-07-15 16:09:31.572878] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aac0b0)
00:28:02.666 [2024-07-15 16:09:31.572899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:02.666 [2024-07-15 16:09:31.572907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:02.666 [2024-07-15 16:09:31.578506] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aac0b0)
00:28:02.666 [2024-07-15 16:09:31.578528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:02.666 [2024-07-15 16:09:31.578537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:02.666 [2024-07-15 16:09:31.584205] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aac0b0)
00:28:02.666 [2024-07-15 16:09:31.584233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:02.666 [2024-07-15 16:09:31.584241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:02.666 [2024-07-15 16:09:31.589905] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aac0b0)
00:28:02.666 [2024-07-15 16:09:31.589926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:02.666 [2024-07-15 16:09:31.589934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:02.666 [2024-07-15 16:09:31.595578] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aac0b0)
00:28:02.666 [2024-07-15 16:09:31.595599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:02.666 [2024-07-15 16:09:31.595610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:02.926 [2024-07-15 16:09:31.601468] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aac0b0)
00:28:02.926 [2024-07-15 16:09:31.601500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:02.926 [2024-07-15 16:09:31.601509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:02.926 [2024-07-15 16:09:31.607379] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aac0b0)
00:28:02.926 [2024-07-15 16:09:31.607401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:02.926 [2024-07-15 16:09:31.607408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:02.926 [2024-07-15 16:09:31.613005] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aac0b0)
00:28:02.926 [2024-07-15 16:09:31.613026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:02.926 [2024-07-15 16:09:31.613034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:02.926 [2024-07-15 16:09:31.618599] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aac0b0)
00:28:02.926 [2024-07-15 16:09:31.618621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:02.926 [2024-07-15 16:09:31.618629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:02.926 [2024-07-15 16:09:31.624286] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aac0b0)
00:28:02.926 [2024-07-15 16:09:31.624307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:02.926 [2024-07-15 16:09:31.624315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:02.926 [2024-07-15 16:09:31.630016] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aac0b0)
00:28:02.926 [2024-07-15 16:09:31.630037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:02.926 [2024-07-15 16:09:31.630045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:02.926 [2024-07-15 16:09:31.635645] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aac0b0)
00:28:02.926 [2024-07-15 16:09:31.635667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:02.926 [2024-07-15 16:09:31.635674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:02.926 [2024-07-15 16:09:31.641347] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aac0b0)
00:28:02.926 [2024-07-15 16:09:31.641369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:02.926 [2024-07-15 16:09:31.641377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:02.926 [2024-07-15 16:09:31.647033] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aac0b0)
00:28:02.926 [2024-07-15 16:09:31.647054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:02.926 [2024-07-15 16:09:31.647062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:02.926 [2024-07-15 16:09:31.652718] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aac0b0)
00:28:02.926 [2024-07-15 16:09:31.652740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:02.926 [2024-07-15 16:09:31.652748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:02.926 [2024-07-15 16:09:31.658424] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aac0b0)
00:28:02.926 [2024-07-15 16:09:31.658445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:02.926 [2024-07-15 16:09:31.658453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:02.926 [2024-07-15 16:09:31.664130] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aac0b0)
00:28:02.926 [2024-07-15 16:09:31.664151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:02.926 [2024-07-15 16:09:31.664159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:02.926 [2024-07-15 16:09:31.669919] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aac0b0)
00:28:02.926 [2024-07-15 16:09:31.669940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:02.926 [2024-07-15 16:09:31.669949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:02.926 [2024-07-15 16:09:31.675677] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aac0b0)
00:28:02.926 [2024-07-15 16:09:31.675698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:02.926 [2024-07-15 16:09:31.675707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:02.926 [2024-07-15 16:09:31.681383] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aac0b0)
00:28:02.926 [2024-07-15 16:09:31.681405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:02.926 [2024-07-15 16:09:31.681413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:02.926 [2024-07-15 16:09:31.687000] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aac0b0)
00:28:02.926 [2024-07-15 16:09:31.687022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:02.926 [2024-07-15 16:09:31.687029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:02.926 [2024-07-15 16:09:31.692864] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aac0b0)
00:28:02.926 [2024-07-15 16:09:31.692886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:02.926 [2024-07-15 16:09:31.692897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:02.926 [2024-07-15 16:09:31.698765] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aac0b0)
00:28:02.926 [2024-07-15 16:09:31.698786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:02.926 [2024-07-15 16:09:31.698794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:02.926 [2024-07-15 16:09:31.704422] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aac0b0)
00:28:02.926 [2024-07-15 16:09:31.704451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:02.926 [2024-07-15 16:09:31.704459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:02.926 [2024-07-15 16:09:31.710107] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aac0b0)
00:28:02.926 [2024-07-15 16:09:31.710129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:02.926 [2024-07-15 16:09:31.710137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:02.926 [2024-07-15 16:09:31.715862] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aac0b0)
00:28:02.926 [2024-07-15 16:09:31.715884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:02.927 [2024-07-15 16:09:31.715892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:02.927 [2024-07-15 16:09:31.721444] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aac0b0)
00:28:02.927 [2024-07-15 16:09:31.721465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:02.927 [2024-07-15 16:09:31.721473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:02.927 [2024-07-15 16:09:31.726916] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aac0b0)
00:28:02.927 [2024-07-15 16:09:31.726938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:02.927 [2024-07-15 16:09:31.726946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:02.927 [2024-07-15 16:09:31.732514] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aac0b0)
00:28:02.927 [2024-07-15 16:09:31.732535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:02.927 [2024-07-15 16:09:31.732543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:02.927 [2024-07-15 16:09:31.738132] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aac0b0)
00:28:02.927 [2024-07-15 16:09:31.738152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:02.927 [2024-07-15 16:09:31.738160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:02.927 [2024-07-15 16:09:31.743751] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aac0b0)
00:28:02.927 [2024-07-15 16:09:31.743776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:02.927 [2024-07-15 16:09:31.743784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:02.927 [2024-07-15 16:09:31.749384] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aac0b0)
00:28:02.927 [2024-07-15 16:09:31.749404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:02.927 [2024-07-15 16:09:31.749413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:02.927 [2024-07-15 16:09:31.754973] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aac0b0)
00:28:02.927 [2024-07-15 16:09:31.754995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:02.927 [2024-07-15 16:09:31.755002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:02.927 [2024-07-15 16:09:31.760703] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aac0b0)
00:28:02.927 [2024-07-15 16:09:31.760725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:02.927 [2024-07-15 16:09:31.760733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:02.927 [2024-07-15 16:09:31.766446] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aac0b0)
00:28:02.927 [2024-07-15 16:09:31.766468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:02.927 [2024-07-15 16:09:31.766476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:02.927 [2024-07-15 16:09:31.772108] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aac0b0)
00:28:02.927 [2024-07-15 16:09:31.772129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:02.927 [2024-07-15 16:09:31.772137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:02.927 [2024-07-15 16:09:31.777707] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aac0b0)
00:28:02.927 [2024-07-15 16:09:31.777728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:02.927 [2024-07-15 16:09:31.777736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:02.927 [2024-07-15 16:09:31.783355] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aac0b0)
00:28:02.927 [2024-07-15 16:09:31.783377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:02.927 [2024-07-15 16:09:31.783385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:02.927 [2024-07-15 16:09:31.789116] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aac0b0)
00:28:02.927 [2024-07-15 16:09:31.789137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:02.927 [2024-07-15 16:09:31.789146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:02.927 [2024-07-15 16:09:31.794622] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aac0b0)
00:28:02.927 [2024-07-15 16:09:31.794644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:02.927 [2024-07-15 16:09:31.794652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:02.927 [2024-07-15 16:09:31.800106] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aac0b0)
00:28:02.927 [2024-07-15 16:09:31.800128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:02.927 [2024-07-15 16:09:31.800136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:02.927 [2024-07-15 16:09:31.805709] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aac0b0)
00:28:02.927 [2024-07-15 16:09:31.805731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:02.927 [2024-07-15 16:09:31.805739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:02.927 [2024-07-15 16:09:31.811314] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aac0b0)
00:28:02.927 [2024-07-15 16:09:31.811336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:02.927 [2024-07-15 16:09:31.811344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:02.927 [2024-07-15 16:09:31.816947] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aac0b0)
00:28:02.927 [2024-07-15 16:09:31.816968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:02.927 [2024-07-15 16:09:31.816976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:02.927 [2024-07-15 16:09:31.822451] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aac0b0)
00:28:02.927 [2024-07-15 16:09:31.822472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:02.927 [2024-07-15 16:09:31.822480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:02.927 [2024-07-15 16:09:31.827902] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aac0b0)
00:28:02.927 [2024-07-15 16:09:31.827924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:02.927 [2024-07-15 16:09:31.827932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:02.927 [2024-07-15 16:09:31.833491] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aac0b0)
00:28:02.927 [2024-07-15 16:09:31.833512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:02.927 [2024-07-15 16:09:31.833520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:02.927 [2024-07-15 16:09:31.839075] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aac0b0)
00:28:02.927 [2024-07-15 16:09:31.839096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:02.927 [2024-07-15 16:09:31.839107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:02.927 [2024-07-15 16:09:31.845157] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aac0b0)
00:28:02.927 [2024-07-15 16:09:31.845178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:02.927 [2024-07-15 16:09:31.845186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:02.927 [2024-07-15 16:09:31.852402] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aac0b0)
00:28:02.927 [2024-07-15 16:09:31.852424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:02.927 [2024-07-15 16:09:31.852432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:03.187 [2024-07-15 16:09:31.860040] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aac0b0)
00:28:03.187 [2024-07-15 16:09:31.860063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:03.187 [2024-07-15 16:09:31.860071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:03.187 [2024-07-15 16:09:31.867083] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aac0b0)
00:28:03.187 [2024-07-15 16:09:31.867104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:03.187 [2024-07-15 16:09:31.867113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:03.187 [2024-07-15 16:09:31.873951] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aac0b0)
00:28:03.187 [2024-07-15 16:09:31.873972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:03.187 [2024-07-15 16:09:31.873981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:03.187 [2024-07-15 16:09:31.880758] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aac0b0)
00:28:03.187 [2024-07-15 16:09:31.880780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:03.187 [2024-07-15 16:09:31.880788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:03.187 [2024-07-15 16:09:31.887652] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aac0b0)
00:28:03.187 [2024-07-15 16:09:31.887674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:03.187 [2024-07-15 16:09:31.887682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:03.187 [2024-07-15 16:09:31.895182] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aac0b0)
00:28:03.187 [2024-07-15 16:09:31.895205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:03.187 [2024-07-15 16:09:31.895213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:03.187 [2024-07-15 16:09:31.902844] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aac0b0)
00:28:03.187 [2024-07-15 16:09:31.902868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:03.187 [2024-07-15 16:09:31.902877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:03.187 [2024-07-15 16:09:31.909924] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aac0b0)
00:28:03.187 [2024-07-15 16:09:31.909947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:03.187 [2024-07-15 16:09:31.909955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:03.187 [2024-07-15 16:09:31.918130] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aac0b0)
00:28:03.187 [2024-07-15 16:09:31.918153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:03.187 [2024-07-15 16:09:31.918162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:03.187 [2024-07-15 16:09:31.926481] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aac0b0)
00:28:03.187 [2024-07-15 16:09:31.926503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:03.187 [2024-07-15 16:09:31.926511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:03.187 [2024-07-15 16:09:31.935527] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aac0b0)
00:28:03.187 [2024-07-15 16:09:31.935549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:03.187 [2024-07-15 16:09:31.935557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:03.187 [2024-07-15 16:09:31.944437] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aac0b0)
00:28:03.187 [2024-07-15 16:09:31.944459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:03.187 [2024-07-15 16:09:31.944467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:03.187 [2024-07-15 16:09:31.952980] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aac0b0)
00:28:03.187 [2024-07-15 16:09:31.953002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:03.187 [2024-07-15 16:09:31.953011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:03.187 [2024-07-15 16:09:31.961655] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aac0b0)
00:28:03.187 [2024-07-15 16:09:31.961677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:03.187 [2024-07-15 16:09:31.961686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:03.187 [2024-07-15 16:09:31.970216] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aac0b0)
00:28:03.187 [2024-07-15 16:09:31.970244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:03.187 [2024-07-15 16:09:31.970253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:03.187 [2024-07-15 16:09:31.978338] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aac0b0)
00:28:03.187 [2024-07-15 16:09:31.978360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:03.187 [2024-07-15 16:09:31.978368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:03.187 [2024-07-15 16:09:31.984558] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aac0b0)
00:28:03.187 [2024-07-15 16:09:31.984579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:03.187 [2024-07-15 16:09:31.984587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:03.187 [2024-07-15 16:09:31.991245] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aac0b0)
00:28:03.187 [2024-07-15 16:09:31.991266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:03.187 [2024-07-15 16:09:31.991273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:03.187 [2024-07-15 16:09:31.997275] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aac0b0)
00:28:03.187 [2024-07-15 16:09:31.997296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:03.187 [2024-07-15 16:09:31.997304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:03.187 [2024-07-15 16:09:32.000578] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aac0b0)
00:28:03.187 [2024-07-15 16:09:32.000600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:03.187 [2024-07-15 16:09:32.000609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:03.187 [2024-07-15 16:09:32.006580] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aac0b0)
00:28:03.187 [2024-07-15 16:09:32.006602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:03.187 [2024-07-15 16:09:32.006610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:03.187 [2024-07-15 16:09:32.012652] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aac0b0)
00:28:03.187 [2024-07-15 16:09:32.012674] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.188 [2024-07-15 16:09:32.012682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:03.188 [2024-07-15 16:09:32.018456] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aac0b0) 00:28:03.188 [2024-07-15 16:09:32.018480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.188 [2024-07-15 16:09:32.018489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:03.188 [2024-07-15 16:09:32.024404] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aac0b0) 00:28:03.188 [2024-07-15 16:09:32.024426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.188 [2024-07-15 16:09:32.024437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:03.188 [2024-07-15 16:09:32.029961] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aac0b0) 00:28:03.188 [2024-07-15 16:09:32.029983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.188 [2024-07-15 16:09:32.029991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:03.188 [2024-07-15 16:09:32.035739] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aac0b0) 00:28:03.188 [2024-07-15 16:09:32.035761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.188 [2024-07-15 16:09:32.035770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:03.188 [2024-07-15 16:09:32.041544] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aac0b0) 00:28:03.188 [2024-07-15 16:09:32.041565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.188 [2024-07-15 16:09:32.041574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:03.188 [2024-07-15 16:09:32.047261] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aac0b0) 00:28:03.188 [2024-07-15 16:09:32.047283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.188 [2024-07-15 16:09:32.047291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:03.188 [2024-07-15 16:09:32.052942] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aac0b0) 00:28:03.188 
[2024-07-15 16:09:32.052963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.188 [2024-07-15 16:09:32.052970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:03.188 [2024-07-15 16:09:32.058779] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aac0b0) 00:28:03.188 [2024-07-15 16:09:32.058801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.188 [2024-07-15 16:09:32.058809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:03.188 [2024-07-15 16:09:32.064564] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aac0b0) 00:28:03.188 [2024-07-15 16:09:32.064585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.188 [2024-07-15 16:09:32.064594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:03.188 [2024-07-15 16:09:32.070452] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aac0b0) 00:28:03.188 [2024-07-15 16:09:32.070474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.188 [2024-07-15 16:09:32.070482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:03.188 [2024-07-15 16:09:32.075917] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aac0b0) 00:28:03.188 [2024-07-15 16:09:32.075938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.188 [2024-07-15 16:09:32.075947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:03.188 [2024-07-15 16:09:32.081634] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aac0b0) 00:28:03.188 [2024-07-15 16:09:32.081656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.188 [2024-07-15 16:09:32.081663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:03.188 [2024-07-15 16:09:32.087164] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aac0b0) 00:28:03.188 [2024-07-15 16:09:32.087187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.188 [2024-07-15 16:09:32.087195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:03.188 [2024-07-15 16:09:32.092804] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0x1aac0b0) 00:28:03.188 [2024-07-15 16:09:32.092826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.188 [2024-07-15 16:09:32.092834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:03.188 [2024-07-15 16:09:32.098276] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aac0b0) 00:28:03.188 [2024-07-15 16:09:32.098298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.188 [2024-07-15 16:09:32.098306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:03.188 [2024-07-15 16:09:32.103806] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aac0b0) 00:28:03.188 [2024-07-15 16:09:32.103829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.188 [2024-07-15 16:09:32.103837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:03.188 [2024-07-15 16:09:32.109486] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aac0b0) 00:28:03.188 [2024-07-15 16:09:32.109509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.188 [2024-07-15 16:09:32.109518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:03.188 [2024-07-15 16:09:32.115279] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aac0b0) 00:28:03.188 [2024-07-15 16:09:32.115302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.188 [2024-07-15 16:09:32.115310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:03.447 [2024-07-15 16:09:32.120499] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aac0b0) 00:28:03.447 [2024-07-15 16:09:32.120523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.447 [2024-07-15 16:09:32.120534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:03.447 [2024-07-15 16:09:32.126030] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aac0b0) 00:28:03.447 [2024-07-15 16:09:32.126052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.447 [2024-07-15 16:09:32.126061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:03.447 [2024-07-15 16:09:32.131812] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aac0b0) 00:28:03.447 [2024-07-15 16:09:32.131836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.447 [2024-07-15 16:09:32.131844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:03.447 [2024-07-15 16:09:32.138350] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aac0b0) 00:28:03.447 [2024-07-15 16:09:32.138375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.447 [2024-07-15 16:09:32.138384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:03.447 [2024-07-15 16:09:32.145024] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aac0b0) 00:28:03.447 [2024-07-15 16:09:32.145048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.447 [2024-07-15 16:09:32.145057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:03.447 [2024-07-15 16:09:32.151996] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aac0b0) 00:28:03.447 [2024-07-15 16:09:32.152019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.447 [2024-07-15 16:09:32.152028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:03.447 [2024-07-15 16:09:32.158193] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aac0b0) 00:28:03.447 [2024-07-15 16:09:32.158215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.447 [2024-07-15 16:09:32.158223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:03.447 [2024-07-15 16:09:32.164392] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aac0b0) 00:28:03.447 [2024-07-15 16:09:32.164414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.447 [2024-07-15 16:09:32.164423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:03.447 [2024-07-15 16:09:32.170463] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aac0b0) 00:28:03.447 [2024-07-15 16:09:32.170486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.447 [2024-07-15 16:09:32.170494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 
00:28:03.447 [2024-07-15 16:09:32.176453] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aac0b0) 00:28:03.447 [2024-07-15 16:09:32.176479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.447 [2024-07-15 16:09:32.176487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:03.447 [2024-07-15 16:09:32.182369] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aac0b0) 00:28:03.447 [2024-07-15 16:09:32.182390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.447 [2024-07-15 16:09:32.182398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:03.448 [2024-07-15 16:09:32.188299] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aac0b0) 00:28:03.448 [2024-07-15 16:09:32.188321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.448 [2024-07-15 16:09:32.188329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:03.448 [2024-07-15 16:09:32.194240] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aac0b0) 00:28:03.448 [2024-07-15 16:09:32.194262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.448 [2024-07-15 16:09:32.194270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:03.448 [2024-07-15 16:09:32.199861] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aac0b0) 00:28:03.448 [2024-07-15 16:09:32.199883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.448 [2024-07-15 16:09:32.199891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:03.448 [2024-07-15 16:09:32.205609] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aac0b0) 00:28:03.448 [2024-07-15 16:09:32.205631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.448 [2024-07-15 16:09:32.205639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:03.448 [2024-07-15 16:09:32.211360] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aac0b0) 00:28:03.448 [2024-07-15 16:09:32.211382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.448 [2024-07-15 16:09:32.211389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:03.448 [2024-07-15 16:09:32.217043] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aac0b0) 00:28:03.448 [2024-07-15 16:09:32.217065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.448 [2024-07-15 16:09:32.217072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:03.448 [2024-07-15 16:09:32.222663] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aac0b0) 00:28:03.448 [2024-07-15 16:09:32.222685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.448 [2024-07-15 16:09:32.222693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:03.448 [2024-07-15 16:09:32.228333] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aac0b0) 00:28:03.448 [2024-07-15 16:09:32.228355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.448 [2024-07-15 16:09:32.228363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:03.448 [2024-07-15 16:09:32.234007] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aac0b0) 00:28:03.448 [2024-07-15 16:09:32.234028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.448 [2024-07-15 16:09:32.234036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:03.448 [2024-07-15 16:09:32.239672] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aac0b0) 00:28:03.448 [2024-07-15 16:09:32.239694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.448 [2024-07-15 16:09:32.239702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:03.448 [2024-07-15 16:09:32.245302] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aac0b0) 00:28:03.448 [2024-07-15 16:09:32.245324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.448 [2024-07-15 16:09:32.245332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:03.448 [2024-07-15 16:09:32.250919] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aac0b0) 00:28:03.448 [2024-07-15 16:09:32.250940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.448 [2024-07-15 16:09:32.250948] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:03.448 [2024-07-15 16:09:32.256433] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aac0b0) 00:28:03.448 [2024-07-15 16:09:32.256454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.448 [2024-07-15 16:09:32.256463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:03.448 [2024-07-15 16:09:32.259477] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aac0b0) 00:28:03.448 [2024-07-15 16:09:32.259498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.448 [2024-07-15 16:09:32.259506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:03.448 [2024-07-15 16:09:32.265237] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aac0b0) 00:28:03.448 [2024-07-15 16:09:32.265258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.448 [2024-07-15 16:09:32.265266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:03.448 [2024-07-15 16:09:32.270653] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aac0b0) 00:28:03.448 [2024-07-15 16:09:32.270674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.448 [2024-07-15 16:09:32.270684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:03.448 [2024-07-15 16:09:32.276112] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aac0b0) 00:28:03.448 [2024-07-15 16:09:32.276134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.448 [2024-07-15 16:09:32.276141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:03.448 [2024-07-15 16:09:32.281670] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aac0b0) 00:28:03.448 [2024-07-15 16:09:32.281691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.448 [2024-07-15 16:09:32.281699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:03.448 [2024-07-15 16:09:32.287107] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aac0b0) 00:28:03.448 [2024-07-15 16:09:32.287127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.448 [2024-07-15 
16:09:32.287135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:03.448 [2024-07-15 16:09:32.292570] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aac0b0) 00:28:03.448 [2024-07-15 16:09:32.292591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.448 [2024-07-15 16:09:32.292599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:03.448 [2024-07-15 16:09:32.297921] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aac0b0) 00:28:03.448 [2024-07-15 16:09:32.297943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.448 [2024-07-15 16:09:32.297951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:03.448 [2024-07-15 16:09:32.303202] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aac0b0) 00:28:03.448 [2024-07-15 16:09:32.303230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.448 [2024-07-15 16:09:32.303238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:03.448 [2024-07-15 16:09:32.308595] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aac0b0) 00:28:03.448 [2024-07-15 16:09:32.308616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.448 [2024-07-15 16:09:32.308625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:03.448 [2024-07-15 16:09:32.313999] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aac0b0) 00:28:03.448 [2024-07-15 16:09:32.314020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.448 [2024-07-15 16:09:32.314028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:03.448 [2024-07-15 16:09:32.319549] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aac0b0) 00:28:03.448 [2024-07-15 16:09:32.319574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.448 [2024-07-15 16:09:32.319582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:03.448 [2024-07-15 16:09:32.325179] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aac0b0) 00:28:03.448 [2024-07-15 16:09:32.325201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8320 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.448 [2024-07-15 16:09:32.325209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:03.448 [2024-07-15 16:09:32.330711] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aac0b0) 00:28:03.448 [2024-07-15 16:09:32.330732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.448 [2024-07-15 16:09:32.330740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:03.448 [2024-07-15 16:09:32.336234] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aac0b0) 00:28:03.448 [2024-07-15 16:09:32.336255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.448 [2024-07-15 16:09:32.336263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:03.448 [2024-07-15 16:09:32.341636] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aac0b0) 00:28:03.448 [2024-07-15 16:09:32.341657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.449 [2024-07-15 16:09:32.341665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:03.449 [2024-07-15 16:09:32.347026] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aac0b0) 00:28:03.449 [2024-07-15 16:09:32.347048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.449 [2024-07-15 16:09:32.347056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:03.449 [2024-07-15 16:09:32.352589] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aac0b0) 00:28:03.449 [2024-07-15 16:09:32.352611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.449 [2024-07-15 16:09:32.352619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:03.449 [2024-07-15 16:09:32.358222] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aac0b0) 00:28:03.449 [2024-07-15 16:09:32.358249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.449 [2024-07-15 16:09:32.358257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:03.449 [2024-07-15 16:09:32.363806] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aac0b0) 00:28:03.449 [2024-07-15 16:09:32.363827] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:11 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.449 [2024-07-15 16:09:32.363835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:03.449 [2024-07-15 16:09:32.369189] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aac0b0) 00:28:03.449 [2024-07-15 16:09:32.369211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.449 [2024-07-15 16:09:32.369218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:03.449 [2024-07-15 16:09:32.374757] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aac0b0) 00:28:03.449 [2024-07-15 16:09:32.374778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.449 [2024-07-15 16:09:32.374788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:03.449 [2024-07-15 16:09:32.380386] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aac0b0) 00:28:03.449 [2024-07-15 16:09:32.380409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.449 [2024-07-15 16:09:32.380417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:03.708 [2024-07-15 16:09:32.386058] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aac0b0) 00:28:03.708 [2024-07-15 16:09:32.386080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.708 [2024-07-15 16:09:32.386089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:03.708 [2024-07-15 16:09:32.391689] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aac0b0) 00:28:03.708 [2024-07-15 16:09:32.391711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.708 [2024-07-15 16:09:32.391719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:03.708 [2024-07-15 16:09:32.397157] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aac0b0) 00:28:03.708 [2024-07-15 16:09:32.397179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.708 [2024-07-15 16:09:32.397191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:03.708 [2024-07-15 16:09:32.402793] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aac0b0) 00:28:03.708 [2024-07-15 16:09:32.402815] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.708 [2024-07-15 16:09:32.402823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:03.708 [2024-07-15 16:09:32.408449] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aac0b0) 00:28:03.708 [2024-07-15 16:09:32.408471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.708 [2024-07-15 16:09:32.408479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:03.708 [2024-07-15 16:09:32.414239] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aac0b0) 00:28:03.708 [2024-07-15 16:09:32.414262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.708 [2024-07-15 16:09:32.414272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:03.708 [2024-07-15 16:09:32.420798] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aac0b0) 00:28:03.708 [2024-07-15 16:09:32.420820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.708 [2024-07-15 16:09:32.420828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:03.708 [2024-07-15 16:09:32.428365] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aac0b0) 00:28:03.708 [2024-07-15 16:09:32.428388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.708 [2024-07-15 16:09:32.428397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:03.708 [2024-07-15 16:09:32.435010] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aac0b0) 00:28:03.708 [2024-07-15 16:09:32.435032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.708 [2024-07-15 16:09:32.435041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:03.708 [2024-07-15 16:09:32.442640] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aac0b0) 00:28:03.708 [2024-07-15 16:09:32.442662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.708 [2024-07-15 16:09:32.442670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:03.708 [2024-07-15 16:09:32.450471] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1aac0b0) 00:28:03.708 [2024-07-15 16:09:32.450494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.708 [2024-07-15 16:09:32.450503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:03.708 [2024-07-15 16:09:32.458652] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aac0b0) 00:28:03.708 [2024-07-15 16:09:32.458675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.708 [2024-07-15 16:09:32.458683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:03.708 [2024-07-15 16:09:32.467189] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aac0b0) 00:28:03.708 [2024-07-15 16:09:32.467212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.708 [2024-07-15 16:09:32.467221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:03.708 [2024-07-15 16:09:32.475435] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aac0b0) 00:28:03.708 [2024-07-15 16:09:32.475458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.708 [2024-07-15 16:09:32.475467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:03.708 [2024-07-15 16:09:32.483931] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aac0b0) 00:28:03.708 [2024-07-15 16:09:32.483953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.708 [2024-07-15 16:09:32.483962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:03.708 [2024-07-15 16:09:32.492523] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aac0b0) 00:28:03.708 [2024-07-15 16:09:32.492546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.708 [2024-07-15 16:09:32.492555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:03.708 [2024-07-15 16:09:32.501277] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aac0b0) 00:28:03.708 [2024-07-15 16:09:32.501300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.708 [2024-07-15 16:09:32.501308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:03.708 [2024-07-15 16:09:32.510431] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aac0b0) 00:28:03.708 [2024-07-15 16:09:32.510455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.708 [2024-07-15 16:09:32.510463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:03.708 [2024-07-15 16:09:32.519108] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aac0b0) 00:28:03.708 [2024-07-15 16:09:32.519131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.708 [2024-07-15 16:09:32.519139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:03.709 [2024-07-15 16:09:32.527911] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aac0b0) 00:28:03.709 [2024-07-15 16:09:32.527934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.709 [2024-07-15 16:09:32.527942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:03.709 [2024-07-15 16:09:32.536551] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aac0b0) 00:28:03.709 [2024-07-15 16:09:32.536573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.709 [2024-07-15 16:09:32.536581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:03.709 [2024-07-15 16:09:32.545476] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aac0b0) 00:28:03.709 [2024-07-15 16:09:32.545499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.709 [2024-07-15 16:09:32.545507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:03.709 [2024-07-15 16:09:32.552864] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aac0b0) 00:28:03.709 [2024-07-15 16:09:32.552885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.709 [2024-07-15 16:09:32.552897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:03.709 [2024-07-15 16:09:32.559653] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aac0b0) 00:28:03.709 [2024-07-15 16:09:32.559675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.709 [2024-07-15 16:09:32.559683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 
dnr:0 00:28:03.709 [2024-07-15 16:09:32.566208] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aac0b0) 00:28:03.709 [2024-07-15 16:09:32.566236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.709 [2024-07-15 16:09:32.566245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:03.709 [2024-07-15 16:09:32.573000] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aac0b0) 00:28:03.709 [2024-07-15 16:09:32.573022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.709 [2024-07-15 16:09:32.573031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:03.709 [2024-07-15 16:09:32.578953] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aac0b0) 00:28:03.709 [2024-07-15 16:09:32.578974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.709 [2024-07-15 16:09:32.578982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:03.709 [2024-07-15 16:09:32.586912] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aac0b0) 00:28:03.709 [2024-07-15 16:09:32.586933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.709 [2024-07-15 16:09:32.586941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:03.709 [2024-07-15 16:09:32.594686] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aac0b0) 00:28:03.709 [2024-07-15 16:09:32.594707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.709 [2024-07-15 16:09:32.594715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:03.709 [2024-07-15 16:09:32.602799] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aac0b0) 00:28:03.709 [2024-07-15 16:09:32.602821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.709 [2024-07-15 16:09:32.602829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:03.709 [2024-07-15 16:09:32.611043] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aac0b0) 00:28:03.709 [2024-07-15 16:09:32.611064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.709 [2024-07-15 16:09:32.611072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:03.709 [2024-07-15 16:09:32.618294] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aac0b0) 00:28:03.709 [2024-07-15 16:09:32.618318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.709 [2024-07-15 16:09:32.618326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:03.709 [2024-07-15 16:09:32.625353] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aac0b0) 00:28:03.709 [2024-07-15 16:09:32.625374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.709 [2024-07-15 16:09:32.625382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:03.709 [2024-07-15 16:09:32.632848] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aac0b0) 00:28:03.709 [2024-07-15 16:09:32.632869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.709 [2024-07-15 16:09:32.632877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:03.709 [2024-07-15 16:09:32.640740] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aac0b0) 00:28:03.709 [2024-07-15 16:09:32.640763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.709 [2024-07-15 16:09:32.640772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:03.968 [2024-07-15 16:09:32.649096] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aac0b0) 00:28:03.968 [2024-07-15 16:09:32.649118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.968 [2024-07-15 16:09:32.649126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:03.968 [2024-07-15 16:09:32.656759] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aac0b0) 00:28:03.968 [2024-07-15 16:09:32.656781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.968 [2024-07-15 16:09:32.656789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:03.968 00:28:03.968 Latency(us) 00:28:03.968 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:03.968 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:28:03.968 nvme0n1 : 2.00 4558.29 569.79 0.00 0.00 3506.46 715.91 10713.71 00:28:03.968 
00:28:03.968 0
00:28:03.968 16:09:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:28:03.968 16:09:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:28:03.968 16:09:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:28:03.968 | .driver_specific
00:28:03.968 | .nvme_error
00:28:03.968 | .status_code
00:28:03.968 | .command_transient_transport_error'
00:28:03.968 16:09:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:28:03.968 16:09:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 294 > 0 ))
00:28:03.968 16:09:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 3913041
00:28:03.968 16:09:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 3913041 ']'
00:28:03.968 16:09:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 3913041
00:28:03.968 16:09:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname
00:28:03.968 16:09:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:28:03.968 16:09:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3913041
00:28:03.968 16:09:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1
00:28:03.968 16:09:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
00:28:04.227 16:09:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3913041'
killing process with pid 3913041
16:09:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 3913041
Received shutdown signal, test time was about 2.000000 seconds
00:28:04.227
00:28:04.227                                                    Latency(us)
00:28:04.227 Device Information : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:28:04.227 ===================================================================================================================
00:28:04.227 Total              :                  0.00       0.00       0.00       0.00       0.00       0.00       0.00
00:28:04.227 16:09:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 3913041
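The check traced above is easy to reproduce by hand against a live bdevperf instance. A minimal sketch, assuming the same socket path and bdev name as this run and an SPDK checkout as the working directory (the helper mirrors what the host/digest.sh trace shows; the nvme_error section is populated because the controller runs with --nvme-error-stat):

    # count completions that ended in NVMe 'command transient transport error'
    get_transient_errcount() {
        local bdev=$1
        scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b "$bdev" |
            jq -r '.bdevs[0]
                   | .driver_specific
                   | .nvme_error
                   | .status_code
                   | .command_transient_transport_error'
    }
    # the assertion the test makes; this run counted 294 such errors
    (( $(get_transient_errcount nvme0n1) > 0 ))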
00:28:04.227 16:09:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128
00:28:04.227 16:09:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:28:04.227 16:09:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
00:28:04.227 16:09:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096
00:28:04.227 16:09:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128
00:28:04.227 16:09:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=3913731
00:28:04.227 16:09:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 3913731 /var/tmp/bperf.sock
00:28:04.227 16:09:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z
00:28:04.227 16:09:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 3913731 ']'
00:28:04.227 16:09:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock
00:28:04.227 16:09:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100
00:28:04.227 16:09:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
16:09:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable
16:09:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:28:04.227 [2024-07-15 16:09:33.132495] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization...
[2024-07-15 16:09:33.132540] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3913731 ]
00:28:04.227 EAL: No free 2048 kB hugepages reported on node 1
00:28:04.495 [2024-07-15 16:09:33.187314] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:28:04.495 [2024-07-15 16:09:33.260141] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:28:05.064 16:09:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 ))
16:09:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0
16:09:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
16:09:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:28:05.322 16:09:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
16:09:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
16:09:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
16:09:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
16:09:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
16:09:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:28:05.591 nvme0n1
00:28:05.591 16:09:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256
16:09:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:28:05.591 16:09:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:28:05.591 16:09:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:28:05.591 16:09:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:28:05.591 16:09:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
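Condensed, the setup just traced is the following RPC sequence. This is a sketch only, with socket paths, address, and NQN taken verbatim from the log; bperf_rpc demonstrably expands to rpc.py -s /var/tmp/bperf.sock, while rpc_cmd presumably talks to the nvmf target app on its default socket, so the digest corruption is injected on the target side:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    # bdevperf side: keep per-controller NVMe error statistics, retry failed IO forever
    $rpc -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
    # target side: clear any crc32c error injection left over from the previous pass
    $rpc accel_error_inject_error -o crc32c -t disable
    # bdevperf side: attach the controller with TCP data digest (--ddgst) enabled
    $rpc -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp \
        -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    # target side: corrupt every 256th crc32c the accel framework computes
    $rpc accel_error_inject_error -o crc32c -t corrupt -i 256

With the digest corrupted at that interval, a steady fraction of the WRITEs fail their data digest check and complete with a transient transport error, which is exactly the flood of completions that follows.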
00:28:05.591 Running I/O for 2 seconds...
00:28:05.591 [2024-07-15 16:09:34.509337] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17bd4d0) with pdu=0x2000190ed920
00:28:05.591 [2024-07-15 16:09:34.510202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:4252 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:05.591 [2024-07-15 16:09:34.510238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:004b p:0 m:0 dnr:0
00:28:05.591 [2024-07-15 16:09:34.518767] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17bd4d0) with pdu=0x2000190eea00
00:28:05.591 [2024-07-15 16:09:34.519620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:9778 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:05.591 [2024-07-15 16:09:34.519645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:005b p:0 m:0 dnr:0
00:28:05.850 [2024-07-15 16:09:34.528146] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17bd4d0) with pdu=0x2000190efae0
00:28:05.850 [2024-07-15 16:09:34.529072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:2992 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:05.850 [2024-07-15 16:09:34.529094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:005b p:0 m:0 dnr:0
00:28:05.850 [2024-07-15 16:09:34.537448] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17bd4d0) with pdu=0x2000190f0bc0
00:28:05.850 [2024-07-15 16:09:34.538358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:5424 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:05.850 [2024-07-15 16:09:34.538378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:005b p:0 m:0 dnr:0
00:28:05.850 [2024-07-15 16:09:34.546606] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17bd4d0) with pdu=0x2000190f1ca0
00:28:05.850 [2024-07-15 16:09:34.547503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:18075 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:05.850 [2024-07-15 16:09:34.547522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:005b p:0 m:0 dnr:0
00:28:05.850 [2024-07-15 16:09:34.555770] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17bd4d0) with pdu=0x2000190f2d80
00:28:05.850 [2024-07-15 16:09:34.556646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:24985 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:05.850 [2024-07-15 16:09:34.556665]
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:28:05.850 [2024-07-15 16:09:34.564974] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17bd4d0) with pdu=0x2000190f3e60 00:28:05.850 [2024-07-15 16:09:34.565889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:20517 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:05.850 [2024-07-15 16:09:34.565908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:28:05.850 [2024-07-15 16:09:34.574093] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17bd4d0) with pdu=0x2000190e73e0 00:28:05.850 [2024-07-15 16:09:34.574981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:3116 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:05.850 [2024-07-15 16:09:34.575001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:28:05.850 [2024-07-15 16:09:34.583263] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17bd4d0) with pdu=0x2000190e6300 00:28:05.850 [2024-07-15 16:09:34.584116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:16287 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:05.850 [2024-07-15 16:09:34.584135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:28:05.850 [2024-07-15 16:09:34.592393] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17bd4d0) with pdu=0x2000190df988 00:28:05.850 [2024-07-15 16:09:34.593267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:15177 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:05.850 [2024-07-15 16:09:34.593286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:28:05.850 [2024-07-15 16:09:34.601496] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17bd4d0) with pdu=0x2000190e0a68 00:28:05.850 [2024-07-15 16:09:34.602398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:12760 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:05.850 [2024-07-15 16:09:34.602417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:28:05.850 [2024-07-15 16:09:34.610608] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17bd4d0) with pdu=0x2000190e1b48 00:28:05.850 [2024-07-15 16:09:34.611509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:22376 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:05.850 [2024-07-15 16:09:34.611527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:28:05.850 [2024-07-15 16:09:34.619769] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17bd4d0) with pdu=0x2000190e2c28 00:28:05.850 [2024-07-15 16:09:34.620630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:5384 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:05.850 
[2024-07-15 16:09:34.620648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:28:05.850 [2024-07-15 16:09:34.628946] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17bd4d0) with pdu=0x2000190e3d08 00:28:05.850 [2024-07-15 16:09:34.629829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:3710 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:05.851 [2024-07-15 16:09:34.629849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:28:05.851 [2024-07-15 16:09:34.638346] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17bd4d0) with pdu=0x2000190e4de8 00:28:05.851 [2024-07-15 16:09:34.639230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:21441 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:05.851 [2024-07-15 16:09:34.639249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:28:05.851 [2024-07-15 16:09:34.647506] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17bd4d0) with pdu=0x2000190ec408 00:28:05.851 [2024-07-15 16:09:34.648410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:10872 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:05.851 [2024-07-15 16:09:34.648430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:28:05.851 [2024-07-15 16:09:34.656619] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17bd4d0) with pdu=0x2000190ed4e8 00:28:05.851 [2024-07-15 16:09:34.657534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:18149 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:05.851 [2024-07-15 16:09:34.657553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:28:05.851 [2024-07-15 16:09:34.665741] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17bd4d0) with pdu=0x2000190ee5c8 00:28:05.851 [2024-07-15 16:09:34.666650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:21154 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:05.851 [2024-07-15 16:09:34.666669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:28:05.851 [2024-07-15 16:09:34.674830] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17bd4d0) with pdu=0x2000190ef6a8 00:28:05.851 [2024-07-15 16:09:34.675743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:5769 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:05.851 [2024-07-15 16:09:34.675762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:28:05.851 [2024-07-15 16:09:34.683953] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17bd4d0) with pdu=0x2000190f0788 00:28:05.851 [2024-07-15 16:09:34.684857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:24606 len:1 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:28:05.851 [2024-07-15 16:09:34.684876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:28:05.851 [2024-07-15 16:09:34.693167] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17bd4d0) with pdu=0x2000190f1868 00:28:05.851 [2024-07-15 16:09:34.694029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:3572 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:05.851 [2024-07-15 16:09:34.694048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:28:05.851 [2024-07-15 16:09:34.702292] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17bd4d0) with pdu=0x2000190f2948 00:28:05.851 [2024-07-15 16:09:34.703193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:24913 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:05.851 [2024-07-15 16:09:34.703215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:28:05.851 [2024-07-15 16:09:34.711484] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17bd4d0) with pdu=0x2000190f3a28 00:28:05.851 [2024-07-15 16:09:34.712319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:20101 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:05.851 [2024-07-15 16:09:34.712338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:28:05.851 [2024-07-15 16:09:34.720571] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17bd4d0) with pdu=0x2000190f4b08 00:28:05.851 [2024-07-15 16:09:34.721489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:11345 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:05.851 [2024-07-15 16:09:34.721508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:28:05.851 [2024-07-15 16:09:34.729704] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17bd4d0) with pdu=0x2000190e6fa8 00:28:05.851 [2024-07-15 16:09:34.730610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:10664 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:05.851 [2024-07-15 16:09:34.730629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:28:05.851 [2024-07-15 16:09:34.738884] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17bd4d0) with pdu=0x2000190e5ec8 00:28:05.851 [2024-07-15 16:09:34.739766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:8278 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:05.851 [2024-07-15 16:09:34.739786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:28:05.851 [2024-07-15 16:09:34.747978] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17bd4d0) with pdu=0x2000190dfdc0 00:28:05.851 [2024-07-15 16:09:34.748884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 
lba:20928 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:05.851 [2024-07-15 16:09:34.748903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:28:05.851 [2024-07-15 16:09:34.757102] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17bd4d0) with pdu=0x2000190e0ea0 00:28:05.851 [2024-07-15 16:09:34.758014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22064 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:05.851 [2024-07-15 16:09:34.758033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:28:05.851 [2024-07-15 16:09:34.766300] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17bd4d0) with pdu=0x2000190e1f80 00:28:05.851 [2024-07-15 16:09:34.767182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:12830 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:05.851 [2024-07-15 16:09:34.767201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:28:05.851 [2024-07-15 16:09:34.775521] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17bd4d0) with pdu=0x2000190e3060 00:28:05.851 [2024-07-15 16:09:34.776418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:20397 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:05.851 [2024-07-15 16:09:34.776437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:28:06.110 [2024-07-15 16:09:34.784830] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17bd4d0) with pdu=0x2000190e4140 00:28:06.110 [2024-07-15 16:09:34.785750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:1517 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:06.110 [2024-07-15 16:09:34.785769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:28:06.110 [2024-07-15 16:09:34.794122] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17bd4d0) with pdu=0x2000190e5220 00:28:06.110 [2024-07-15 16:09:34.795012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:1553 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:06.110 [2024-07-15 16:09:34.795031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:28:06.110 [2024-07-15 16:09:34.803217] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17bd4d0) with pdu=0x2000190ec840 00:28:06.110 [2024-07-15 16:09:34.804125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:18981 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:06.110 [2024-07-15 16:09:34.804143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:28:06.110 [2024-07-15 16:09:34.812331] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17bd4d0) with pdu=0x2000190ed920 00:28:06.110 [2024-07-15 16:09:34.813213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:26 nsid:1 lba:17282 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:06.110 [2024-07-15 16:09:34.813234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:28:06.110 [2024-07-15 16:09:34.821682] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17bd4d0) with pdu=0x2000190eea00 00:28:06.110 [2024-07-15 16:09:34.822589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:10871 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:06.110 [2024-07-15 16:09:34.822608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:28:06.110 [2024-07-15 16:09:34.830860] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17bd4d0) with pdu=0x2000190efae0 00:28:06.111 [2024-07-15 16:09:34.831782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:12152 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:06.111 [2024-07-15 16:09:34.831801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:28:06.111 [2024-07-15 16:09:34.840035] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17bd4d0) with pdu=0x2000190f0bc0 00:28:06.111 [2024-07-15 16:09:34.840943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:19775 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:06.111 [2024-07-15 16:09:34.840961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:28:06.111 [2024-07-15 16:09:34.849147] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17bd4d0) with pdu=0x2000190f1ca0 00:28:06.111 [2024-07-15 16:09:34.850057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:5679 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:06.111 [2024-07-15 16:09:34.850076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:28:06.111 [2024-07-15 16:09:34.858311] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17bd4d0) with pdu=0x2000190f2d80 00:28:06.111 [2024-07-15 16:09:34.859164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:9844 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:06.111 [2024-07-15 16:09:34.859183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:28:06.111 [2024-07-15 16:09:34.867478] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17bd4d0) with pdu=0x2000190f3e60 00:28:06.111 [2024-07-15 16:09:34.868357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:10007 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:06.111 [2024-07-15 16:09:34.868376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:28:06.111 [2024-07-15 16:09:34.876563] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17bd4d0) with pdu=0x2000190e73e0 00:28:06.111 [2024-07-15 16:09:34.877464] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:12697 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:06.111 [2024-07-15 16:09:34.877483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:28:06.111 [2024-07-15 16:09:34.885680] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17bd4d0) with pdu=0x2000190e6300 00:28:06.111 [2024-07-15 16:09:34.886587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:3650 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:06.111 [2024-07-15 16:09:34.886606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:28:06.111 [2024-07-15 16:09:34.894746] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17bd4d0) with pdu=0x2000190df988 00:28:06.111 [2024-07-15 16:09:34.895607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:22187 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:06.111 [2024-07-15 16:09:34.895626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:28:06.111 [2024-07-15 16:09:34.903949] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17bd4d0) with pdu=0x2000190e0a68 00:28:06.111 [2024-07-15 16:09:34.904849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:15571 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:06.111 [2024-07-15 16:09:34.904867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:28:06.111 [2024-07-15 16:09:34.913066] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17bd4d0) with pdu=0x2000190e1b48 00:28:06.111 [2024-07-15 16:09:34.913956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:1556 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:06.111 [2024-07-15 16:09:34.913975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:28:06.111 [2024-07-15 16:09:34.922178] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17bd4d0) with pdu=0x2000190e2c28 00:28:06.111 [2024-07-15 16:09:34.923087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:20596 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:06.111 [2024-07-15 16:09:34.923106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:28:06.111 [2024-07-15 16:09:34.931340] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17bd4d0) with pdu=0x2000190e3d08 00:28:06.111 [2024-07-15 16:09:34.932243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:18326 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:06.111 [2024-07-15 16:09:34.932262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:28:06.111 [2024-07-15 16:09:34.940581] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17bd4d0) with pdu=0x2000190e4de8 00:28:06.111 [2024-07-15 16:09:34.941462] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:10686 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:06.111 [2024-07-15 16:09:34.941486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:28:06.111 [2024-07-15 16:09:34.949687] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17bd4d0) with pdu=0x2000190ec408 00:28:06.111 [2024-07-15 16:09:34.950566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:7230 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:06.111 [2024-07-15 16:09:34.950585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:28:06.111 [2024-07-15 16:09:34.958839] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17bd4d0) with pdu=0x2000190ed4e8 00:28:06.111 [2024-07-15 16:09:34.959718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:6576 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:06.111 [2024-07-15 16:09:34.959738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:28:06.111 [2024-07-15 16:09:34.968042] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17bd4d0) with pdu=0x2000190ee5c8 00:28:06.111 [2024-07-15 16:09:34.968944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:13145 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:06.111 [2024-07-15 16:09:34.968963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:28:06.111 [2024-07-15 16:09:34.977163] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17bd4d0) with pdu=0x2000190ef6a8 00:28:06.111 [2024-07-15 16:09:34.978044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:8733 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:06.111 [2024-07-15 16:09:34.978064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:28:06.111 [2024-07-15 16:09:34.986316] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17bd4d0) with pdu=0x2000190f0788 00:28:06.111 [2024-07-15 16:09:34.987186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:10841 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:06.111 [2024-07-15 16:09:34.987206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:28:06.111 [2024-07-15 16:09:34.995439] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17bd4d0) with pdu=0x2000190f1868 00:28:06.111 [2024-07-15 16:09:34.996317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:6421 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:06.111 [2024-07-15 16:09:34.996337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:28:06.111 [2024-07-15 16:09:35.004803] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17bd4d0) with pdu=0x2000190f2948 00:28:06.111 [2024-07-15 
16:09:35.005735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:25550 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:06.112 [2024-07-15 16:09:35.005753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:28:06.112 [2024-07-15 16:09:35.013980] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17bd4d0) with pdu=0x2000190f3a28 00:28:06.112 [2024-07-15 16:09:35.014849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:1889 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:06.112 [2024-07-15 16:09:35.014868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:28:06.112 [2024-07-15 16:09:35.023161] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17bd4d0) with pdu=0x2000190f4b08 00:28:06.112 [2024-07-15 16:09:35.024042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:18771 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:06.112 [2024-07-15 16:09:35.024062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:28:06.112 [2024-07-15 16:09:35.032433] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17bd4d0) with pdu=0x2000190e6fa8 00:28:06.112 [2024-07-15 16:09:35.033227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:6635 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:06.112 [2024-07-15 16:09:35.033247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:28:06.112 [2024-07-15 16:09:35.041668] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17bd4d0) with pdu=0x2000190e5ec8 00:28:06.112 [2024-07-15 16:09:35.042541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:11388 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:06.112 [2024-07-15 16:09:35.042560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:28:06.371 [2024-07-15 16:09:35.050967] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17bd4d0) with pdu=0x2000190dfdc0 00:28:06.371 [2024-07-15 16:09:35.051827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:2608 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:06.371 [2024-07-15 16:09:35.051847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:28:06.371 [2024-07-15 16:09:35.060129] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17bd4d0) with pdu=0x2000190e0ea0 00:28:06.371 [2024-07-15 16:09:35.060995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:16075 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:06.371 [2024-07-15 16:09:35.061015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:28:06.371 [2024-07-15 16:09:35.069375] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17bd4d0) with 
pdu=0x2000190e1f80 00:28:06.371 [2024-07-15 16:09:35.070215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:3504 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:06.371 [2024-07-15 16:09:35.070238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:28:06.371 [2024-07-15 16:09:35.078602] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17bd4d0) with pdu=0x2000190e3060 00:28:06.371 [2024-07-15 16:09:35.079381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:11419 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:06.371 [2024-07-15 16:09:35.079401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:28:06.371 [2024-07-15 16:09:35.087796] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17bd4d0) with pdu=0x2000190e4140 00:28:06.371 [2024-07-15 16:09:35.088573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:13405 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:06.371 [2024-07-15 16:09:35.088592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:28:06.371 [2024-07-15 16:09:35.096924] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17bd4d0) with pdu=0x2000190e5220 00:28:06.371 [2024-07-15 16:09:35.097735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:11025 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:06.371 [2024-07-15 16:09:35.097754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:28:06.371 [2024-07-15 16:09:35.106064] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17bd4d0) with pdu=0x2000190ec840 00:28:06.371 [2024-07-15 16:09:35.106957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:22030 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:06.371 [2024-07-15 16:09:35.106976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:28:06.371 [2024-07-15 16:09:35.115153] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17bd4d0) with pdu=0x2000190ed920 00:28:06.371 [2024-07-15 16:09:35.116079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:17681 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:06.371 [2024-07-15 16:09:35.116098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:28:06.371 [2024-07-15 16:09:35.124271] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17bd4d0) with pdu=0x2000190eea00 00:28:06.371 [2024-07-15 16:09:35.125041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:15083 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:06.371 [2024-07-15 16:09:35.125059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:28:06.371 [2024-07-15 16:09:35.133490] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0x17bd4d0) with pdu=0x2000190efae0 00:28:06.371 [2024-07-15 16:09:35.134374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6948 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:06.371 [2024-07-15 16:09:35.134393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:28:06.371 [2024-07-15 16:09:35.142623] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17bd4d0) with pdu=0x2000190f0bc0 00:28:06.371 [2024-07-15 16:09:35.143555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:6881 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:06.371 [2024-07-15 16:09:35.143575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:28:06.371 [2024-07-15 16:09:35.151708] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17bd4d0) with pdu=0x2000190f1ca0 00:28:06.371 [2024-07-15 16:09:35.152569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:7798 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:06.371 [2024-07-15 16:09:35.152588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:28:06.371 [2024-07-15 16:09:35.160846] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17bd4d0) with pdu=0x2000190f2d80 00:28:06.371 [2024-07-15 16:09:35.161704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:18446 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:06.371 [2024-07-15 16:09:35.161723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:28:06.371 [2024-07-15 16:09:35.170155] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17bd4d0) with pdu=0x2000190f3e60 00:28:06.371 [2024-07-15 16:09:35.170954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:15030 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:06.371 [2024-07-15 16:09:35.170973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:28:06.371 [2024-07-15 16:09:35.179296] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17bd4d0) with pdu=0x2000190e73e0 00:28:06.371 [2024-07-15 16:09:35.180067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:13365 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:06.371 [2024-07-15 16:09:35.180092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:28:06.371 [2024-07-15 16:09:35.188418] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17bd4d0) with pdu=0x2000190e6300 00:28:06.371 [2024-07-15 16:09:35.189196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:6930 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:06.371 [2024-07-15 16:09:35.189215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:28:06.371 [2024-07-15 16:09:35.197521] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0x17bd4d0) with pdu=0x2000190df988 00:28:06.371 [2024-07-15 16:09:35.198387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:22824 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:06.371 [2024-07-15 16:09:35.198406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:28:06.371 [2024-07-15 16:09:35.206846] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17bd4d0) with pdu=0x2000190e0a68 00:28:06.371 [2024-07-15 16:09:35.207713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:3619 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:06.371 [2024-07-15 16:09:35.207733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:28:06.371 [2024-07-15 16:09:35.215952] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17bd4d0) with pdu=0x2000190e1b48 00:28:06.371 [2024-07-15 16:09:35.216863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:2116 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:06.371 [2024-07-15 16:09:35.216882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:28:06.371 [2024-07-15 16:09:35.225065] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17bd4d0) with pdu=0x2000190e2c28 00:28:06.372 [2024-07-15 16:09:35.225954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:22749 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:06.372 [2024-07-15 16:09:35.225973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:28:06.372 [2024-07-15 16:09:35.233617] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17bd4d0) with pdu=0x2000190f9f68 00:28:06.372 [2024-07-15 16:09:35.234385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:2255 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:06.372 [2024-07-15 16:09:35.234403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:28:06.372 [2024-07-15 16:09:35.243205] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17bd4d0) with pdu=0x2000190e5ec8 00:28:06.372 [2024-07-15 16:09:35.244158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:4220 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:06.372 [2024-07-15 16:09:35.244177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:28:06.372 [2024-07-15 16:09:35.253348] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17bd4d0) with pdu=0x2000190e4578 00:28:06.372 [2024-07-15 16:09:35.254438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:6523 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:06.372 [2024-07-15 16:09:35.254458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:28:06.372 [2024-07-15 16:09:35.262493] tcp.c:2081:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x17bd4d0) with pdu=0x2000190e5658 00:28:06.372 [2024-07-15 16:09:35.263519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:17277 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:06.372 [2024-07-15 16:09:35.263538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:28:06.372 [2024-07-15 16:09:35.271747] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17bd4d0) with pdu=0x2000190efae0 00:28:06.372 [2024-07-15 16:09:35.272772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:9414 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:06.372 [2024-07-15 16:09:35.272792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:28:06.372 [2024-07-15 16:09:35.280033] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17bd4d0) with pdu=0x2000190f3e60 00:28:06.372 [2024-07-15 16:09:35.281512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:17793 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:06.372 [2024-07-15 16:09:35.281532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:28:06.372 [2024-07-15 16:09:35.288193] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17bd4d0) with pdu=0x2000190eee38 00:28:06.372 [2024-07-15 16:09:35.288830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:23668 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:06.372 [2024-07-15 16:09:35.288849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:28:06.372 [2024-07-15 16:09:35.297930] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17bd4d0) with pdu=0x2000190e0a68 00:28:06.372 [2024-07-15 16:09:35.298698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:17994 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:06.372 [2024-07-15 16:09:35.298718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:28:06.631 [2024-07-15 16:09:35.308503] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17bd4d0) with pdu=0x2000190e2c28 00:28:06.631 [2024-07-15 16:09:35.309495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:22525 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:06.631 [2024-07-15 16:09:35.309515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:28:06.631 [2024-07-15 16:09:35.317795] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17bd4d0) with pdu=0x2000190e3d08 00:28:06.631 [2024-07-15 16:09:35.318687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:17313 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:06.631 [2024-07-15 16:09:35.318706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:28:06.631 [2024-07-15 16:09:35.326841] 
tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17bd4d0) with pdu=0x2000190fb480 00:28:06.631 [2024-07-15 16:09:35.327728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:7675 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:06.631 [2024-07-15 16:09:35.327747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:28:06.631 [2024-07-15 16:09:35.335973] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17bd4d0) with pdu=0x2000190fa3a0 00:28:06.631 [2024-07-15 16:09:35.336949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:14924 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:06.631 [2024-07-15 16:09:35.336968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:28:06.631 [2024-07-15 16:09:35.345084] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17bd4d0) with pdu=0x2000190f92c0 00:28:06.631 [2024-07-15 16:09:35.346050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:12444 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:06.631 [2024-07-15 16:09:35.346069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:28:06.631 [2024-07-15 16:09:35.354196] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17bd4d0) with pdu=0x2000190f81e0 00:28:06.631 [2024-07-15 16:09:35.355178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:3580 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:06.631 [2024-07-15 16:09:35.355198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:28:06.631 [2024-07-15 16:09:35.363266] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17bd4d0) with pdu=0x2000190f7100 00:28:06.631 [2024-07-15 16:09:35.364240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:6414 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:06.631 [2024-07-15 16:09:35.364260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:28:06.631 [2024-07-15 16:09:35.372354] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17bd4d0) with pdu=0x2000190f6020 00:28:06.631 [2024-07-15 16:09:35.373331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:12347 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:06.631 [2024-07-15 16:09:35.373349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:28:06.631 [2024-07-15 16:09:35.381496] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17bd4d0) with pdu=0x2000190f4f40 00:28:06.631 [2024-07-15 16:09:35.382502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:1524 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:06.631 [2024-07-15 16:09:35.382520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:28:06.631 
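For readers decoding these completions: in a line such as COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:006a p:0 m:0 dnr:0, the pair (00/22) is NVMe status code type 0x0 (generic command status) with status code 0x22 (Command Transient Transport Error); qid and cid identify the queue pair and command, cdw0 is completion dword 0, sqhd is the submission queue head reported in the completion, p is the phase tag, and m and dnr are the More and Do Not Retry bits. dnr:0 marks each failure as retriable, which is what the bdev_nvme_set_options --bdev-retry-count -1 call earlier in the run relies on to keep resubmitting the affected commands.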
[2024-07-15 16:09:35.390651] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17bd4d0) with pdu=0x2000190e5220 00:28:06.631 [2024-07-15 16:09:35.391659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:15968 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:06.631 [2024-07-15 16:09:35.391677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:28:06.631 [2024-07-15 16:09:35.399805] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17bd4d0) with pdu=0x2000190eff18 00:28:06.631 [2024-07-15 16:09:35.400766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:213 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:06.631 [2024-07-15 16:09:35.400785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:28:06.631 [2024-07-15 16:09:35.408924] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17bd4d0) with pdu=0x2000190df988 00:28:06.631 [2024-07-15 16:09:35.409897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:17021 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:06.631 [2024-07-15 16:09:35.409916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:28:06.631 [2024-07-15 16:09:35.418093] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17bd4d0) with pdu=0x2000190e0a68 00:28:06.631 [2024-07-15 16:09:35.419069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:23177 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:06.631 [2024-07-15 16:09:35.419090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:28:06.631 [2024-07-15 16:09:35.427175] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17bd4d0) with pdu=0x2000190e1b48 00:28:06.631 [2024-07-15 16:09:35.428178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:3919 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:06.631 [2024-07-15 16:09:35.428197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:28:06.631 [2024-07-15 16:09:35.436342] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17bd4d0) with pdu=0x2000190fbcf0 00:28:06.631 [2024-07-15 16:09:35.437235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:20142 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:06.631 [2024-07-15 16:09:35.437254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:28:06.631 [2024-07-15 16:09:35.445506] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17bd4d0) with pdu=0x2000190de470 00:28:06.632 [2024-07-15 16:09:35.446487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:22315 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:06.632 [2024-07-15 16:09:35.446507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:006a p:0 m:0 
dnr:0 00:28:06.632 [2024-07-15 16:09:35.454657] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17bd4d0) with pdu=0x2000190e1f80 00:28:06.632 [2024-07-15 16:09:35.455658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:8720 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:06.632 [2024-07-15 16:09:35.455677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:28:06.632 [2024-07-15 16:09:35.463769] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17bd4d0) with pdu=0x2000190e3060 00:28:06.632 [2024-07-15 16:09:35.464726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:13813 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:06.632 [2024-07-15 16:09:35.464746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:28:06.632 [2024-07-15 16:09:35.472904] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17bd4d0) with pdu=0x2000190e4140 00:28:06.632 [2024-07-15 16:09:35.473890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:19490 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:06.632 [2024-07-15 16:09:35.473909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:28:06.632 [2024-07-15 16:09:35.482048] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17bd4d0) with pdu=0x2000190fb048 00:28:06.632 [2024-07-15 16:09:35.483005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:7691 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:06.632 [2024-07-15 16:09:35.483025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:28:06.632 [2024-07-15 16:09:35.491169] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17bd4d0) with pdu=0x2000190f9f68 00:28:06.632 [2024-07-15 16:09:35.492164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:18282 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:06.632 [2024-07-15 16:09:35.492183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:28:06.632 [2024-07-15 16:09:35.500361] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17bd4d0) with pdu=0x2000190f8e88 00:28:06.632 [2024-07-15 16:09:35.501357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:17461 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:06.632 [2024-07-15 16:09:35.501376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:28:06.632 [2024-07-15 16:09:35.509457] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17bd4d0) with pdu=0x2000190f7da8 00:28:06.632 [2024-07-15 16:09:35.510458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:20719 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:06.632 [2024-07-15 16:09:35.510477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 
cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:28:06.632 [2024-07-15 16:09:35.518642] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17bd4d0) with pdu=0x2000190f6cc8 00:28:06.632 [2024-07-15 16:09:35.519622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:16022 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:06.632 [2024-07-15 16:09:35.519642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:28:06.632 [2024-07-15 16:09:35.527730] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17bd4d0) with pdu=0x2000190f5be8 00:28:06.632 [2024-07-15 16:09:35.528726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:7852 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:06.632 [2024-07-15 16:09:35.528745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:28:06.632 [2024-07-15 16:09:35.536919] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17bd4d0) with pdu=0x2000190ec408 00:28:06.632 [2024-07-15 16:09:35.537964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:10909 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:06.632 [2024-07-15 16:09:35.537983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:28:06.632 [2024-07-15 16:09:35.546204] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17bd4d0) with pdu=0x2000190f0350 00:28:06.632 [2024-07-15 16:09:35.547191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:7423 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:06.632 [2024-07-15 16:09:35.547209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:28:06.632 [2024-07-15 16:09:35.555341] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17bd4d0) with pdu=0x2000190e5ec8 00:28:06.632 [2024-07-15 16:09:35.556309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:16882 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:06.632 [2024-07-15 16:09:35.556327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:28:06.632 [2024-07-15 16:09:35.563985] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17bd4d0) with pdu=0x2000190ee190 00:28:06.632 [2024-07-15 16:09:35.564966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:1817 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:06.632 [2024-07-15 16:09:35.564985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:28:06.891 [2024-07-15 16:09:35.573707] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17bd4d0) with pdu=0x2000190ed4e8 00:28:06.891 [2024-07-15 16:09:35.574830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:10860 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:06.891 [2024-07-15 16:09:35.574848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:51 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:28:06.891 [2024-07-15 16:09:35.583272] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17bd4d0) with pdu=0x2000190e2c28 00:28:06.891 [2024-07-15 16:09:35.584481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:7790 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:06.891 [2024-07-15 16:09:35.584500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:28:06.891 [2024-07-15 16:09:35.591725] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17bd4d0) with pdu=0x2000190f6020 00:28:06.891 [2024-07-15 16:09:35.592487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:18680 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:06.891 [2024-07-15 16:09:35.592506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:28:06.891 [2024-07-15 16:09:35.601916] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17bd4d0) with pdu=0x2000190f7100 00:28:06.891 [2024-07-15 16:09:35.603243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:9508 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:06.891 [2024-07-15 16:09:35.603262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:28:06.891 [2024-07-15 16:09:35.610376] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17bd4d0) with pdu=0x2000190ea248 00:28:06.891 [2024-07-15 16:09:35.611250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:2962 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:06.891 [2024-07-15 16:09:35.611269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:28:06.891 [2024-07-15 16:09:35.619692] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17bd4d0) with pdu=0x2000190e3060 00:28:06.891 [2024-07-15 16:09:35.620450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:903 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:06.891 [2024-07-15 16:09:35.620469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:28:06.891 [2024-07-15 16:09:35.630158] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17bd4d0) with pdu=0x2000190e73e0 00:28:06.891 [2024-07-15 16:09:35.631719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:20015 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:06.891 [2024-07-15 16:09:35.631738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:28:06.891 [2024-07-15 16:09:35.636828] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17bd4d0) with pdu=0x2000190de038 00:28:06.891 [2024-07-15 16:09:35.637562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:18546 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:06.891 [2024-07-15 16:09:35.637581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:28:06.891 [2024-07-15 16:09:35.645394] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17bd4d0) with pdu=0x2000190e6fa8 00:28:06.891 [2024-07-15 16:09:35.646115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:4783 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:06.891 [2024-07-15 16:09:35.646134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:28:06.891 [2024-07-15 16:09:35.655541] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17bd4d0) with pdu=0x2000190ff3c8 00:28:06.891 [2024-07-15 16:09:35.656312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:18630 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:06.891 [2024-07-15 16:09:35.656339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:28:06.891 [2024-07-15 16:09:35.664970] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17bd4d0) with pdu=0x2000190f31b8 00:28:06.891 [2024-07-15 16:09:35.665970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:16334 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:06.891 [2024-07-15 16:09:35.665990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:28:06.891 [2024-07-15 16:09:35.673644] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17bd4d0) with pdu=0x2000190fe2e8 00:28:06.891 [2024-07-15 16:09:35.674633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6614 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:06.891 [2024-07-15 16:09:35.674652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:28:06.891 [2024-07-15 16:09:35.683205] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17bd4d0) with pdu=0x2000190ed0b0 00:28:06.891 [2024-07-15 16:09:35.684279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:4882 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:06.891 [2024-07-15 16:09:35.684298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:28:06.891 [2024-07-15 16:09:35.691645] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17bd4d0) with pdu=0x2000190fb8b8 00:28:06.891 [2024-07-15 16:09:35.692275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:23255 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:06.891 [2024-07-15 16:09:35.692293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:28:06.891 [2024-07-15 16:09:35.700911] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17bd4d0) with pdu=0x2000190f2948 00:28:06.891 [2024-07-15 16:09:35.701427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:5565 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:06.891 [2024-07-15 16:09:35.701446] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:28:06.891 [2024-07-15 16:09:35.711464] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17bd4d0) with pdu=0x2000190df988 00:28:06.891 [2024-07-15 16:09:35.712826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:2386 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:06.891 [2024-07-15 16:09:35.712845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:28:06.891 [2024-07-15 16:09:35.721151] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17bd4d0) with pdu=0x2000190e9e10 00:28:06.891 [2024-07-15 16:09:35.722518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:21691 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:06.891 [2024-07-15 16:09:35.722536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:28:06.891 [2024-07-15 16:09:35.729606] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17bd4d0) with pdu=0x2000190fda78 00:28:06.891 [2024-07-15 16:09:35.730603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:1172 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:06.891 [2024-07-15 16:09:35.730621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:28:06.891 [2024-07-15 16:09:35.738026] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17bd4d0) with pdu=0x2000190f3a28 00:28:06.891 [2024-07-15 16:09:35.739331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:18323 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:06.891 [2024-07-15 16:09:35.739350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:28:06.891 [2024-07-15 16:09:35.746442] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17bd4d0) with pdu=0x2000190dece0 00:28:06.891 [2024-07-15 16:09:35.747081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:11293 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:06.891 [2024-07-15 16:09:35.747100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:28:06.891 [2024-07-15 16:09:35.755874] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17bd4d0) with pdu=0x2000190e7c50 00:28:06.892 [2024-07-15 16:09:35.756729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:19032 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:06.892 [2024-07-15 16:09:35.756748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:28:06.892 [2024-07-15 16:09:35.764529] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17bd4d0) with pdu=0x2000190f2d80 00:28:06.892 [2024-07-15 16:09:35.765368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:24843 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:06.892 [2024-07-15 16:09:35.765386] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:28:06.892 [2024-07-15 16:09:35.774085] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17bd4d0) with pdu=0x2000190fc998 00:28:06.892 [2024-07-15 16:09:35.775076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:19096 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:06.892 [2024-07-15 16:09:35.775095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:28:06.892 [2024-07-15 16:09:35.784305] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17bd4d0) with pdu=0x2000190f4f40 00:28:06.892 [2024-07-15 16:09:35.785318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:21142 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:06.892 [2024-07-15 16:09:35.785338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:28:06.892 [2024-07-15 16:09:35.793716] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17bd4d0) with pdu=0x2000190df118 00:28:06.892 [2024-07-15 16:09:35.794958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:13130 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:06.892 [2024-07-15 16:09:35.794978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:28:06.892 [2024-07-15 16:09:35.801316] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17bd4d0) with pdu=0x2000190ef270 00:28:06.892 [2024-07-15 16:09:35.801944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:7540 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:06.892 [2024-07-15 16:09:35.801962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:28:06.892 [2024-07-15 16:09:35.810697] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17bd4d0) with pdu=0x2000190e3060 00:28:06.892 [2024-07-15 16:09:35.811548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:3011 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:06.892 [2024-07-15 16:09:35.811567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:28:06.892 [2024-07-15 16:09:35.819378] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17bd4d0) with pdu=0x2000190e6fa8 00:28:06.892 [2024-07-15 16:09:35.820213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:25171 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:06.892 [2024-07-15 16:09:35.820235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:28:07.150 [2024-07-15 16:09:35.829804] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17bd4d0) with pdu=0x2000190ebfd0 00:28:07.150 [2024-07-15 16:09:35.830702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:4512 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.150 [2024-07-15 
16:09:35.830721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:28:07.150 [2024-07-15 16:09:35.839330] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17bd4d0) with pdu=0x2000190f6890 00:28:07.150 [2024-07-15 16:09:35.840333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:830 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.150 [2024-07-15 16:09:35.840352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:28:07.150 [2024-07-15 16:09:35.848831] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17bd4d0) with pdu=0x2000190e5220 00:28:07.150 [2024-07-15 16:09:35.850045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:23996 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.150 [2024-07-15 16:09:35.850064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:07.150 [2024-07-15 16:09:35.857486] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17bd4d0) with pdu=0x2000190e6300 00:28:07.150 [2024-07-15 16:09:35.858695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:21528 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.150 [2024-07-15 16:09:35.858713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:28:07.150 [2024-07-15 16:09:35.867047] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17bd4d0) with pdu=0x2000190fe720 00:28:07.150 [2024-07-15 16:09:35.868379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:3098 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.150 [2024-07-15 16:09:35.868398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:28:07.150 [2024-07-15 16:09:35.875496] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17bd4d0) with pdu=0x2000190f8e88 00:28:07.150 [2024-07-15 16:09:35.876366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:4611 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.151 [2024-07-15 16:09:35.876384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:28:07.151 [2024-07-15 16:09:35.884786] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17bd4d0) with pdu=0x2000190f1430 00:28:07.151 [2024-07-15 16:09:35.885532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:14043 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.151 [2024-07-15 16:09:35.885551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:28:07.151 [2024-07-15 16:09:35.895317] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17bd4d0) with pdu=0x2000190f4f40 00:28:07.151 [2024-07-15 16:09:35.896883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:21974 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:28:07.151 [2024-07-15 16:09:35.896905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:28:07.151 [2024-07-15 16:09:35.901805] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17bd4d0) with pdu=0x2000190e9e10 00:28:07.151 [2024-07-15 16:09:35.902528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:1063 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.151 [2024-07-15 16:09:35.902547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:28:07.151 [2024-07-15 16:09:35.910514] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17bd4d0) with pdu=0x2000190f0788 00:28:07.151 [2024-07-15 16:09:35.911233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:6541 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.151 [2024-07-15 16:09:35.911252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:28:07.151 [2024-07-15 16:09:35.920688] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17bd4d0) with pdu=0x2000190f1ca0 00:28:07.151 [2024-07-15 16:09:35.921450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:8997 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.151 [2024-07-15 16:09:35.921469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:28:07.151 [2024-07-15 16:09:35.930121] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17bd4d0) with pdu=0x2000190ea248 00:28:07.151 [2024-07-15 16:09:35.931153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:2227 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.151 [2024-07-15 16:09:35.931172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:28:07.151 [2024-07-15 16:09:35.939072] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17bd4d0) with pdu=0x2000190f20d8 00:28:07.151 [2024-07-15 16:09:35.939699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:7142 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.151 [2024-07-15 16:09:35.939718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:28:07.151 [2024-07-15 16:09:35.948632] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17bd4d0) with pdu=0x2000190f7970 00:28:07.151 [2024-07-15 16:09:35.949373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:9178 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.151 [2024-07-15 16:09:35.949393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:07.151 [2024-07-15 16:09:35.957261] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17bd4d0) with pdu=0x2000190ebfd0 00:28:07.151 [2024-07-15 16:09:35.958575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:20052 len:1 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:28:07.151 [2024-07-15 16:09:35.958594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:28:07.151 [2024-07-15 16:09:35.965705] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17bd4d0) with pdu=0x2000190f6458 00:28:07.151 [2024-07-15 16:09:35.966336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:22274 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.151 [2024-07-15 16:09:35.966355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:28:07.151 [2024-07-15 16:09:35.975172] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17bd4d0) with pdu=0x2000190f0ff8 00:28:07.151 [2024-07-15 16:09:35.975921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:4135 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.151 [2024-07-15 16:09:35.975942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:28:07.151 [2024-07-15 16:09:35.984721] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17bd4d0) with pdu=0x2000190f7100 00:28:07.151 [2024-07-15 16:09:35.985701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:8980 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.151 [2024-07-15 16:09:35.985720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:28:07.151 [2024-07-15 16:09:35.993980] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17bd4d0) with pdu=0x2000190e0a68 00:28:07.151 [2024-07-15 16:09:35.994870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:19920 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.151 [2024-07-15 16:09:35.994889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:28:07.151 [2024-07-15 16:09:36.003425] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17bd4d0) with pdu=0x2000190f8e88 00:28:07.151 [2024-07-15 16:09:36.004503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:5703 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.151 [2024-07-15 16:09:36.004521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:28:07.151 [2024-07-15 16:09:36.012066] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17bd4d0) with pdu=0x2000190f1ca0 00:28:07.151 [2024-07-15 16:09:36.013119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25194 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.151 [2024-07-15 16:09:36.013137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:28:07.151 [2024-07-15 16:09:36.020514] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17bd4d0) with pdu=0x2000190f2d80 00:28:07.151 [2024-07-15 16:09:36.021129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:7063 
len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.151 [2024-07-15 16:09:36.021148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:28:07.151 [2024-07-15 16:09:36.029825] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17bd4d0) with pdu=0x2000190fa3a0 00:28:07.151 [2024-07-15 16:09:36.030323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13998 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.151 [2024-07-15 16:09:36.030342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:28:07.151 [2024-07-15 16:09:36.040346] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17bd4d0) with pdu=0x2000190f0788 00:28:07.151 [2024-07-15 16:09:36.041664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:18740 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.151 [2024-07-15 16:09:36.041683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:28:07.151 [2024-07-15 16:09:36.048781] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17bd4d0) with pdu=0x2000190ed0b0 00:28:07.151 [2024-07-15 16:09:36.049647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:22238 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.151 [2024-07-15 16:09:36.049666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:28:07.151 [2024-07-15 16:09:36.058232] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17bd4d0) with pdu=0x2000190f7da8 00:28:07.151 [2024-07-15 16:09:36.058966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:11758 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.151 [2024-07-15 16:09:36.058985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:07.151 [2024-07-15 16:09:36.066658] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17bd4d0) with pdu=0x2000190de8a8 00:28:07.151 [2024-07-15 16:09:36.068157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:951 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.151 [2024-07-15 16:09:36.068176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:28:07.151 [2024-07-15 16:09:36.074708] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17bd4d0) with pdu=0x2000190e12d8 00:28:07.151 [2024-07-15 16:09:36.075415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:19885 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.151 [2024-07-15 16:09:36.075434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:28:07.410 [2024-07-15 16:09:36.084411] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17bd4d0) with pdu=0x2000190fb480 00:28:07.410 [2024-07-15 16:09:36.085240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:27 nsid:1 lba:5730 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.410 [2024-07-15 16:09:36.085260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:28:07.410 [2024-07-15 16:09:36.094138] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17bd4d0) with pdu=0x2000190f9f68 00:28:07.410 [2024-07-15 16:09:36.095067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:23766 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.410 [2024-07-15 16:09:36.095085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:28:07.410 [2024-07-15 16:09:36.103789] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17bd4d0) with pdu=0x2000190f9b30 00:28:07.410 [2024-07-15 16:09:36.104864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:13155 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.410 [2024-07-15 16:09:36.104883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:28:07.410 [2024-07-15 16:09:36.114642] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17bd4d0) with pdu=0x2000190f5be8 00:28:07.410 [2024-07-15 16:09:36.116180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:23237 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.410 [2024-07-15 16:09:36.116199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:28:07.410 [2024-07-15 16:09:36.121100] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17bd4d0) with pdu=0x2000190e6b70 00:28:07.410 [2024-07-15 16:09:36.121807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:20615 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.410 [2024-07-15 16:09:36.121825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:28:07.410 [2024-07-15 16:09:36.131591] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17bd4d0) with pdu=0x2000190f4b08 00:28:07.410 [2024-07-15 16:09:36.132348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:25401 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.410 [2024-07-15 16:09:36.132367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:28:07.410 [2024-07-15 16:09:36.139800] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17bd4d0) with pdu=0x2000190eaab8 00:28:07.410 [2024-07-15 16:09:36.140772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:17880 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.410 [2024-07-15 16:09:36.140791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:28:07.410 [2024-07-15 16:09:36.149964] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17bd4d0) with pdu=0x2000190fd208 00:28:07.410 [2024-07-15 16:09:36.150953] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:25281 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.410 [2024-07-15 16:09:36.150973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:28:07.410 [2024-07-15 16:09:36.159403] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17bd4d0) with pdu=0x2000190f9b30 00:28:07.410 [2024-07-15 16:09:36.160593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:1419 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.410 [2024-07-15 16:09:36.160612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:28:07.410 [2024-07-15 16:09:36.168016] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17bd4d0) with pdu=0x2000190e4578 00:28:07.410 [2024-07-15 16:09:36.169243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:7730 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.410 [2024-07-15 16:09:36.169262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:28:07.410 [2024-07-15 16:09:36.176116] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17bd4d0) with pdu=0x2000190fb480 00:28:07.410 [2024-07-15 16:09:36.176643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:15087 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.410 [2024-07-15 16:09:36.176663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:28:07.410 [2024-07-15 16:09:36.185414] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17bd4d0) with pdu=0x2000190de8a8 00:28:07.410 [2024-07-15 16:09:36.186233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:2324 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.410 [2024-07-15 16:09:36.186252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:28:07.410 [2024-07-15 16:09:36.194003] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17bd4d0) with pdu=0x2000190f8618 00:28:07.410 [2024-07-15 16:09:36.194821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:13406 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.410 [2024-07-15 16:09:36.194840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:28:07.410 [2024-07-15 16:09:36.204158] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17bd4d0) with pdu=0x2000190eaef0 00:28:07.410 [2024-07-15 16:09:36.205018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1806 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.410 [2024-07-15 16:09:36.205036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:28:07.410 [2024-07-15 16:09:36.213665] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17bd4d0) with pdu=0x2000190e84c0 00:28:07.410 [2024-07-15 
16:09:36.214738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:20526 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.410 [2024-07-15 16:09:36.214761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:28:07.410 [2024-07-15 16:09:36.222368] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17bd4d0) with pdu=0x2000190f5378 00:28:07.410 [2024-07-15 16:09:36.223424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:22624 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.410 [2024-07-15 16:09:36.223443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:28:07.410 [2024-07-15 16:09:36.230809] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17bd4d0) with pdu=0x2000190fdeb0 00:28:07.410 [2024-07-15 16:09:36.231417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:1183 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.410 [2024-07-15 16:09:36.231437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:28:07.410 [2024-07-15 16:09:36.240113] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17bd4d0) with pdu=0x2000190f7538 00:28:07.410 [2024-07-15 16:09:36.240599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:2376 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.410 [2024-07-15 16:09:36.240618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:28:07.411 [2024-07-15 16:09:36.248317] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17bd4d0) with pdu=0x2000190fd640 00:28:07.411 [2024-07-15 16:09:36.248996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:9934 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.411 [2024-07-15 16:09:36.249014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:28:07.411 [2024-07-15 16:09:36.258467] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17bd4d0) with pdu=0x2000190f0350 00:28:07.411 [2024-07-15 16:09:36.259189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:8314 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.411 [2024-07-15 16:09:36.259208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:28:07.411 [2024-07-15 16:09:36.267819] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17bd4d0) with pdu=0x2000190f6cc8 00:28:07.411 [2024-07-15 16:09:36.268804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14495 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.411 [2024-07-15 16:09:36.268823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:28:07.411 [2024-07-15 16:09:36.276876] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17bd4d0) with pdu=0x2000190dece0 
00:28:07.411 [2024-07-15 16:09:36.277463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:9356 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.411 [2024-07-15 16:09:36.277482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:28:07.411 [2024-07-15 16:09:36.286465] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17bd4d0) with pdu=0x2000190f8618 00:28:07.411 [2024-07-15 16:09:36.287173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:11701 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.411 [2024-07-15 16:09:36.287193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:28:07.411 [2024-07-15 16:09:36.296977] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17bd4d0) with pdu=0x2000190e49b0 00:28:07.411 [2024-07-15 16:09:36.298514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:20583 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.411 [2024-07-15 16:09:36.298533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:28:07.411 [2024-07-15 16:09:36.303421] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17bd4d0) with pdu=0x2000190e3498 00:28:07.411 [2024-07-15 16:09:36.304093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:10077 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.411 [2024-07-15 16:09:36.304111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:28:07.411 [2024-07-15 16:09:36.312330] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17bd4d0) with pdu=0x2000190fb480 00:28:07.411 [2024-07-15 16:09:36.313016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:24244 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.411 [2024-07-15 16:09:36.313036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:28:07.411 [2024-07-15 16:09:36.322073] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17bd4d0) with pdu=0x2000190fd208 00:28:07.411 [2024-07-15 16:09:36.322869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:11133 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.411 [2024-07-15 16:09:36.322887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:28:07.411 [2024-07-15 16:09:36.331784] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17bd4d0) with pdu=0x2000190df550 00:28:07.411 [2024-07-15 16:09:36.332706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:10326 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.411 [2024-07-15 16:09:36.332726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:28:07.411 [2024-07-15 16:09:36.341540] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x17bd4d0) with pdu=0x2000190e27f0 00:28:07.411 [2024-07-15 16:09:36.342587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:13959 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.411 [2024-07-15 16:09:36.342606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:28:07.669 [2024-07-15 16:09:36.351307] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17bd4d0) with pdu=0x2000190f4298 00:28:07.669 [2024-07-15 16:09:36.352488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:9677 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.669 [2024-07-15 16:09:36.352507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:28:07.669 [2024-07-15 16:09:36.360866] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17bd4d0) with pdu=0x2000190fd208 00:28:07.669 [2024-07-15 16:09:36.362132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:18538 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.669 [2024-07-15 16:09:36.362151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:28:07.669 [2024-07-15 16:09:36.369323] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17bd4d0) with pdu=0x2000190fdeb0 00:28:07.669 [2024-07-15 16:09:36.370167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:2991 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.669 [2024-07-15 16:09:36.370185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:28:07.669 [2024-07-15 16:09:36.378488] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17bd4d0) with pdu=0x2000190eaab8 00:28:07.669 [2024-07-15 16:09:36.379329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:24702 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.669 [2024-07-15 16:09:36.379348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:28:07.669 [2024-07-15 16:09:36.387938] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17bd4d0) with pdu=0x2000190f96f8 00:28:07.669 [2024-07-15 16:09:36.388658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:17938 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.669 [2024-07-15 16:09:36.388677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:28:07.669 [2024-07-15 16:09:36.396524] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17bd4d0) with pdu=0x2000190edd58 00:28:07.669 [2024-07-15 16:09:36.397930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:24694 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.669 [2024-07-15 16:09:36.397950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:28:07.669 [2024-07-15 16:09:36.404610] tcp.c:2081:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x17bd4d0) with pdu=0x2000190f3a28 00:28:07.669 [2024-07-15 16:09:36.405296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:20227 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.670 [2024-07-15 16:09:36.405315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:28:07.670 [2024-07-15 16:09:36.414312] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17bd4d0) with pdu=0x2000190f8e88 00:28:07.670 [2024-07-15 16:09:36.415112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:1616 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.670 [2024-07-15 16:09:36.415131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:28:07.670 [2024-07-15 16:09:36.424036] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17bd4d0) with pdu=0x2000190df988 00:28:07.670 [2024-07-15 16:09:36.424971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:21486 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.670 [2024-07-15 16:09:36.424990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:28:07.670 [2024-07-15 16:09:36.433890] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17bd4d0) with pdu=0x2000190e6300 00:28:07.670 [2024-07-15 16:09:36.434892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:18760 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.670 [2024-07-15 16:09:36.434911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:28:07.670 [2024-07-15 16:09:36.443802] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17bd4d0) with pdu=0x2000190fbcf0 00:28:07.670 [2024-07-15 16:09:36.444886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:3606 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.670 [2024-07-15 16:09:36.444905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:28:07.670 [2024-07-15 16:09:36.453449] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17bd4d0) with pdu=0x2000190f8e88 00:28:07.670 [2024-07-15 16:09:36.454714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:8639 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.670 [2024-07-15 16:09:36.454737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:28:07.670 [2024-07-15 16:09:36.462989] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17bd4d0) with pdu=0x2000190f46d0 00:28:07.670 [2024-07-15 16:09:36.464382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:14846 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.670 [2024-07-15 16:09:36.464401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:28:07.670 [2024-07-15 16:09:36.472547] 
tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17bd4d0) with pdu=0x2000190feb58 00:28:07.670 [2024-07-15 16:09:36.474065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:10294 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.670 [2024-07-15 16:09:36.474083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:28:07.670 [2024-07-15 16:09:36.479003] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17bd4d0) with pdu=0x2000190e4140 00:28:07.670 [2024-07-15 16:09:36.479688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:19931 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.670 [2024-07-15 16:09:36.479707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:28:07.670 [2024-07-15 16:09:36.487650] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17bd4d0) with pdu=0x2000190f0788 00:28:07.670 [2024-07-15 16:09:36.488321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:20281 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.670 [2024-07-15 16:09:36.488340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:28:07.670 [2024-07-15 16:09:36.497928] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17bd4d0) with pdu=0x2000190f46d0 00:28:07.670 [2024-07-15 16:09:36.498653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:4875 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.670 [2024-07-15 16:09:36.498672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:28:07.670 00:28:07.670 Latency(us) 00:28:07.670 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:07.670 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:28:07.670 nvme0n1 : 2.00 27841.51 108.76 0.00 0.00 4593.75 1809.36 11454.55 00:28:07.670 =================================================================================================================== 00:28:07.670 Total : 27841.51 108.76 0.00 0.00 4593.75 1809.36 11454.55 00:28:07.670 0 00:28:07.670 16:09:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:28:07.670 16:09:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:28:07.670 16:09:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:28:07.670 | .driver_specific 00:28:07.670 | .nvme_error 00:28:07.670 | .status_code 00:28:07.670 | .command_transient_transport_error' 00:28:07.670 16:09:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:28:07.929 16:09:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 218 > 0 )) 00:28:07.929 16:09:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 3913731 00:28:07.929 16:09:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 3913731 ']' 00:28:07.929 16:09:36 
00:28:07.929 16:09:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 3913731
00:28:07.929 16:09:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname
00:28:07.929 16:09:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:28:07.929 16:09:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3913731
00:28:07.929 16:09:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1
00:28:07.929 16:09:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
00:28:07.929 16:09:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3913731'
00:28:07.929 killing process with pid 3913731
00:28:07.929 16:09:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 3913731
00:28:07.929 Received shutdown signal, test time was about 2.000000 seconds
00:28:07.929
00:28:07.929 Latency(us)
00:28:07.929 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:28:07.929 ===================================================================================================================
00:28:07.929 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:28:07.929 16:09:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 3913731
00:28:08.187 16:09:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16
00:28:08.187 16:09:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:28:08.187 16:09:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
00:28:08.187 16:09:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
00:28:08.187 16:09:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
00:28:08.187 16:09:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=3914424
00:28:08.187 16:09:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 3914424 /var/tmp/bperf.sock
00:28:08.187 16:09:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z
00:28:08.187 16:09:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 3914424 ']'
00:28:08.187 16:09:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock
00:28:08.187 16:09:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100
00:28:08.187 16:09:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:28:08.187 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:28:08.187 16:09:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable
00:28:08.187 16:09:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:28:08.187 [2024-07-15 16:09:36.984312] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization...
00:28:08.187 [2024-07-15 16:09:36.984360] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3914424 ]
00:28:08.187 I/O size of 131072 is greater than zero copy threshold (65536).
00:28:08.187 Zero copy mechanism will not be used.
00:28:08.187 EAL: No free 2048 kB hugepages reported on node 1
00:28:08.187 [2024-07-15 16:09:37.038372] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:28:08.187 [2024-07-15 16:09:37.109829] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:28:09.121 16:09:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:28:09.121 16:09:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0
00:28:09.121 16:09:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:28:09.121 16:09:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:28:09.121 16:09:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:28:09.121 16:09:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:28:09.121 16:09:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:28:09.121 16:09:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:28:09.121 16:09:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:28:09.121 16:09:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:28:09.380 nvme0n1
00:28:09.380 16:09:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
00:28:09.380 16:09:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:28:09.380 16:09:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:28:09.380 16:09:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:28:09.380 16:09:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:28:09.380 16:09:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:28:09.639 I/O size of 131072 is greater than zero copy threshold (65536).
00:28:09.639 Zero copy mechanism will not be used.
00:28:09.639 Running I/O for 2 seconds...
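[For readers skimming the trace: the digest.sh steps above (@61, @64, @67, @69 and the @27/@28 error count) amount to the sequence below. This is a minimal sketch, not the test script itself: the rpc.py/bdevperf.py paths, subcommands, flags and the jq path are taken verbatim from the trace, while the target-side RPC socket (rpc_cmd's default, shown here as /var/tmp/spdk.sock) and the errcount variable are assumptions for illustration.]

    # Inject crc32c corruption on the target side, drive I/O through a --ddgst
    # connection via bdevperf, then count commands that completed with
    # TRANSIENT TRANSPORT ERROR in bdevperf's per-bdev NVMe error stats.
    bperf="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock"
    $bperf bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
    $bperf bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    # rpc_cmd in the trace talks to the nvmf target's default socket (assumed path)
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk.sock \
        accel_error_inject_error -o crc32c -t corrupt -i 32
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py \
        -s /var/tmp/bperf.sock perform_tests
    errcount=$($bperf bdev_get_iostat -b nvme0n1 \
        | jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error')
    (( errcount > 0 ))   # test passes only if corrupted digests surfaced as transient transport errors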
00:28:09.639 [2024-07-15 16:09:38.345039] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17bd810) with pdu=0x2000190fef90
00:28:09.639 [2024-07-15 16:09:38.345465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:09.639 [2024-07-15 16:09:38.345493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
[... the same three-line pattern repeats for the remainder of the 2-second run — Data digest error on tqpair=(0x17bd810) with pdu=0x2000190fef90, the WRITE sqid:1 cid:15 command notice, then the COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion — with only the timestamps (16:09:38.352698 through 16:09:39.029205), lba and sqhd values changing; those entries are omitted here ...]
00:28:10.163 [2024-07-15 16:09:39.036598] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17bd810) with pdu=0x2000190fef90
00:28:10.163 [2024-07-15 16:09:39.037013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2560 len:32 SGL
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.163 [2024-07-15 16:09:39.037032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:10.163 [2024-07-15 16:09:39.042954] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17bd810) with pdu=0x2000190fef90 00:28:10.163 [2024-07-15 16:09:39.043334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.163 [2024-07-15 16:09:39.043353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:10.163 [2024-07-15 16:09:39.049681] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17bd810) with pdu=0x2000190fef90 00:28:10.163 [2024-07-15 16:09:39.050016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.163 [2024-07-15 16:09:39.050036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:10.163 [2024-07-15 16:09:39.056636] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17bd810) with pdu=0x2000190fef90 00:28:10.163 [2024-07-15 16:09:39.056990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.163 [2024-07-15 16:09:39.057008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:10.163 [2024-07-15 16:09:39.062548] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17bd810) with pdu=0x2000190fef90 00:28:10.163 [2024-07-15 16:09:39.062889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.163 [2024-07-15 16:09:39.062908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:10.163 [2024-07-15 16:09:39.067813] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17bd810) with pdu=0x2000190fef90 00:28:10.163 [2024-07-15 16:09:39.068162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.163 [2024-07-15 16:09:39.068181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:10.163 [2024-07-15 16:09:39.073181] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17bd810) with pdu=0x2000190fef90 00:28:10.163 [2024-07-15 16:09:39.073535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.163 [2024-07-15 16:09:39.073556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:10.163 [2024-07-15 16:09:39.078910] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17bd810) with pdu=0x2000190fef90 00:28:10.163 [2024-07-15 16:09:39.079249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.163 [2024-07-15 16:09:39.079268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:10.163 [2024-07-15 16:09:39.084085] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17bd810) with pdu=0x2000190fef90 00:28:10.163 [2024-07-15 16:09:39.084438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.163 [2024-07-15 16:09:39.084457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:10.163 [2024-07-15 16:09:39.089719] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17bd810) with pdu=0x2000190fef90 00:28:10.163 [2024-07-15 16:09:39.090105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.163 [2024-07-15 16:09:39.090125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:10.423 [2024-07-15 16:09:39.095567] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17bd810) with pdu=0x2000190fef90 00:28:10.423 [2024-07-15 16:09:39.095924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.423 [2024-07-15 16:09:39.095944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:10.423 [2024-07-15 16:09:39.101111] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17bd810) with pdu=0x2000190fef90 00:28:10.423 [2024-07-15 16:09:39.101467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.423 [2024-07-15 16:09:39.101487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:10.423 [2024-07-15 16:09:39.106801] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17bd810) with pdu=0x2000190fef90 00:28:10.423 [2024-07-15 16:09:39.107155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.423 [2024-07-15 16:09:39.107175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:10.423 [2024-07-15 16:09:39.112980] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17bd810) with pdu=0x2000190fef90 00:28:10.423 [2024-07-15 16:09:39.113338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.423 [2024-07-15 16:09:39.113358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:10.423 [2024-07-15 16:09:39.121173] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17bd810) with pdu=0x2000190fef90 00:28:10.423 [2024-07-15 16:09:39.121628] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.423 [2024-07-15 16:09:39.121648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:10.423 [2024-07-15 16:09:39.128714] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17bd810) with pdu=0x2000190fef90 00:28:10.423 [2024-07-15 16:09:39.129082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.423 [2024-07-15 16:09:39.129102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:10.423 [2024-07-15 16:09:39.134955] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17bd810) with pdu=0x2000190fef90 00:28:10.423 [2024-07-15 16:09:39.135413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.423 [2024-07-15 16:09:39.135433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:10.423 [2024-07-15 16:09:39.140502] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17bd810) with pdu=0x2000190fef90 00:28:10.423 [2024-07-15 16:09:39.140857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.423 [2024-07-15 16:09:39.140877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:10.423 [2024-07-15 16:09:39.146129] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17bd810) with pdu=0x2000190fef90 00:28:10.423 [2024-07-15 16:09:39.146484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.423 [2024-07-15 16:09:39.146503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:10.423 [2024-07-15 16:09:39.151242] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17bd810) with pdu=0x2000190fef90 00:28:10.423 [2024-07-15 16:09:39.151586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.423 [2024-07-15 16:09:39.151605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:10.423 [2024-07-15 16:09:39.156926] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17bd810) with pdu=0x2000190fef90 00:28:10.423 [2024-07-15 16:09:39.157282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.423 [2024-07-15 16:09:39.157302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:10.424 [2024-07-15 16:09:39.163568] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17bd810) with pdu=0x2000190fef90 00:28:10.424 
[2024-07-15 16:09:39.163943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.424 [2024-07-15 16:09:39.163962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:10.424 [2024-07-15 16:09:39.170104] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17bd810) with pdu=0x2000190fef90 00:28:10.424 [2024-07-15 16:09:39.170464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.424 [2024-07-15 16:09:39.170483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:10.424 [2024-07-15 16:09:39.175562] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17bd810) with pdu=0x2000190fef90 00:28:10.424 [2024-07-15 16:09:39.175917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.424 [2024-07-15 16:09:39.175941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:10.424 [2024-07-15 16:09:39.180737] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17bd810) with pdu=0x2000190fef90 00:28:10.424 [2024-07-15 16:09:39.181089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.424 [2024-07-15 16:09:39.181109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:10.424 [2024-07-15 16:09:39.186754] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17bd810) with pdu=0x2000190fef90 00:28:10.424 [2024-07-15 16:09:39.187139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.424 [2024-07-15 16:09:39.187158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:10.424 [2024-07-15 16:09:39.192058] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17bd810) with pdu=0x2000190fef90 00:28:10.424 [2024-07-15 16:09:39.192426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.424 [2024-07-15 16:09:39.192446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:10.424 [2024-07-15 16:09:39.196979] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17bd810) with pdu=0x2000190fef90 00:28:10.424 [2024-07-15 16:09:39.197328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.424 [2024-07-15 16:09:39.197347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:10.424 [2024-07-15 16:09:39.201586] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x17bd810) with pdu=0x2000190fef90 00:28:10.424 [2024-07-15 16:09:39.201934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.424 [2024-07-15 16:09:39.201953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:10.424 [2024-07-15 16:09:39.206601] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17bd810) with pdu=0x2000190fef90 00:28:10.424 [2024-07-15 16:09:39.206936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.424 [2024-07-15 16:09:39.206956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:10.424 [2024-07-15 16:09:39.211191] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17bd810) with pdu=0x2000190fef90 00:28:10.424 [2024-07-15 16:09:39.211523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.424 [2024-07-15 16:09:39.211543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:10.424 [2024-07-15 16:09:39.216890] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17bd810) with pdu=0x2000190fef90 00:28:10.424 [2024-07-15 16:09:39.217317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.424 [2024-07-15 16:09:39.217337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:10.424 [2024-07-15 16:09:39.223774] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17bd810) with pdu=0x2000190fef90 00:28:10.424 [2024-07-15 16:09:39.224212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.424 [2024-07-15 16:09:39.224237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:10.424 [2024-07-15 16:09:39.229855] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17bd810) with pdu=0x2000190fef90 00:28:10.424 [2024-07-15 16:09:39.230206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.424 [2024-07-15 16:09:39.230231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:10.424 [2024-07-15 16:09:39.236056] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17bd810) with pdu=0x2000190fef90 00:28:10.424 [2024-07-15 16:09:39.236420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.424 [2024-07-15 16:09:39.236439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:10.424 [2024-07-15 16:09:39.243269] 
tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17bd810) with pdu=0x2000190fef90 00:28:10.424 [2024-07-15 16:09:39.243692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.424 [2024-07-15 16:09:39.243713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:10.424 [2024-07-15 16:09:39.251257] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17bd810) with pdu=0x2000190fef90 00:28:10.424 [2024-07-15 16:09:39.251664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.424 [2024-07-15 16:09:39.251685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:10.424 [2024-07-15 16:09:39.259793] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17bd810) with pdu=0x2000190fef90 00:28:10.424 [2024-07-15 16:09:39.260185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.424 [2024-07-15 16:09:39.260205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:10.424 [2024-07-15 16:09:39.268491] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17bd810) with pdu=0x2000190fef90 00:28:10.424 [2024-07-15 16:09:39.268893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.424 [2024-07-15 16:09:39.268913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:10.424 [2024-07-15 16:09:39.274773] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17bd810) with pdu=0x2000190fef90 00:28:10.424 [2024-07-15 16:09:39.275108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.424 [2024-07-15 16:09:39.275129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:10.424 [2024-07-15 16:09:39.281099] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17bd810) with pdu=0x2000190fef90 00:28:10.424 [2024-07-15 16:09:39.281462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.424 [2024-07-15 16:09:39.281486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:10.424 [2024-07-15 16:09:39.287540] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17bd810) with pdu=0x2000190fef90 00:28:10.424 [2024-07-15 16:09:39.287860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.424 [2024-07-15 16:09:39.287880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:28:10.424 [2024-07-15 16:09:39.293536] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17bd810) with pdu=0x2000190fef90 00:28:10.424 [2024-07-15 16:09:39.293865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.424 [2024-07-15 16:09:39.293885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:10.424 [2024-07-15 16:09:39.299176] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17bd810) with pdu=0x2000190fef90 00:28:10.424 [2024-07-15 16:09:39.299517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.424 [2024-07-15 16:09:39.299538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:10.424 [2024-07-15 16:09:39.304479] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17bd810) with pdu=0x2000190fef90 00:28:10.424 [2024-07-15 16:09:39.304817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.424 [2024-07-15 16:09:39.304837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:10.424 [2024-07-15 16:09:39.310049] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17bd810) with pdu=0x2000190fef90 00:28:10.424 [2024-07-15 16:09:39.310384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.424 [2024-07-15 16:09:39.310405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:10.424 [2024-07-15 16:09:39.315639] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17bd810) with pdu=0x2000190fef90 00:28:10.424 [2024-07-15 16:09:39.315971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.424 [2024-07-15 16:09:39.315991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:10.424 [2024-07-15 16:09:39.320975] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17bd810) with pdu=0x2000190fef90 00:28:10.424 [2024-07-15 16:09:39.321312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.424 [2024-07-15 16:09:39.321331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:10.424 [2024-07-15 16:09:39.326183] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17bd810) with pdu=0x2000190fef90 00:28:10.424 [2024-07-15 16:09:39.326515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.424 [2024-07-15 16:09:39.326534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:10.424 [2024-07-15 16:09:39.331088] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17bd810) with pdu=0x2000190fef90 00:28:10.424 [2024-07-15 16:09:39.331431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.425 [2024-07-15 16:09:39.331451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:10.425 [2024-07-15 16:09:39.336264] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17bd810) with pdu=0x2000190fef90 00:28:10.425 [2024-07-15 16:09:39.336594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.425 [2024-07-15 16:09:39.336614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:10.425 [2024-07-15 16:09:39.341699] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17bd810) with pdu=0x2000190fef90 00:28:10.425 [2024-07-15 16:09:39.342038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.425 [2024-07-15 16:09:39.342058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:10.425 [2024-07-15 16:09:39.347768] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17bd810) with pdu=0x2000190fef90 00:28:10.425 [2024-07-15 16:09:39.348130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.425 [2024-07-15 16:09:39.348148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:10.425 [2024-07-15 16:09:39.353396] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17bd810) with pdu=0x2000190fef90 00:28:10.425 [2024-07-15 16:09:39.353722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.425 [2024-07-15 16:09:39.353741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:10.685 [2024-07-15 16:09:39.358635] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17bd810) with pdu=0x2000190fef90 00:28:10.685 [2024-07-15 16:09:39.358980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.685 [2024-07-15 16:09:39.358999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:10.685 [2024-07-15 16:09:39.364391] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17bd810) with pdu=0x2000190fef90 00:28:10.685 [2024-07-15 16:09:39.364716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.685 [2024-07-15 16:09:39.364735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:10.685 [2024-07-15 16:09:39.369409] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17bd810) with pdu=0x2000190fef90 00:28:10.685 [2024-07-15 16:09:39.369728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.685 [2024-07-15 16:09:39.369748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:10.685 [2024-07-15 16:09:39.374669] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17bd810) with pdu=0x2000190fef90 00:28:10.685 [2024-07-15 16:09:39.375002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.685 [2024-07-15 16:09:39.375021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:10.685 [2024-07-15 16:09:39.380240] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17bd810) with pdu=0x2000190fef90 00:28:10.685 [2024-07-15 16:09:39.380574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.685 [2024-07-15 16:09:39.380593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:10.685 [2024-07-15 16:09:39.386036] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17bd810) with pdu=0x2000190fef90 00:28:10.685 [2024-07-15 16:09:39.386375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.685 [2024-07-15 16:09:39.386395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:10.685 [2024-07-15 16:09:39.392050] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17bd810) with pdu=0x2000190fef90 00:28:10.685 [2024-07-15 16:09:39.392387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.685 [2024-07-15 16:09:39.392408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:10.685 [2024-07-15 16:09:39.398580] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17bd810) with pdu=0x2000190fef90 00:28:10.685 [2024-07-15 16:09:39.398932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.685 [2024-07-15 16:09:39.398952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:10.685 [2024-07-15 16:09:39.404121] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17bd810) with pdu=0x2000190fef90 00:28:10.685 [2024-07-15 16:09:39.404453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.685 [2024-07-15 16:09:39.404473] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:10.685 [2024-07-15 16:09:39.409822] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17bd810) with pdu=0x2000190fef90 00:28:10.685 [2024-07-15 16:09:39.410148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.685 [2024-07-15 16:09:39.410168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:10.685 [2024-07-15 16:09:39.414793] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17bd810) with pdu=0x2000190fef90 00:28:10.685 [2024-07-15 16:09:39.415127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.685 [2024-07-15 16:09:39.415147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:10.685 [2024-07-15 16:09:39.420271] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17bd810) with pdu=0x2000190fef90 00:28:10.685 [2024-07-15 16:09:39.420595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.685 [2024-07-15 16:09:39.420615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:10.685 [2024-07-15 16:09:39.425482] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17bd810) with pdu=0x2000190fef90 00:28:10.685 [2024-07-15 16:09:39.425813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.685 [2024-07-15 16:09:39.425837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:10.685 [2024-07-15 16:09:39.431103] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17bd810) with pdu=0x2000190fef90 00:28:10.685 [2024-07-15 16:09:39.431439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.685 [2024-07-15 16:09:39.431459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:10.685 [2024-07-15 16:09:39.436592] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17bd810) with pdu=0x2000190fef90 00:28:10.685 [2024-07-15 16:09:39.436937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.685 [2024-07-15 16:09:39.436958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:10.685 [2024-07-15 16:09:39.441553] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17bd810) with pdu=0x2000190fef90 00:28:10.685 [2024-07-15 16:09:39.441881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.685 
[2024-07-15 16:09:39.441900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:10.685 [2024-07-15 16:09:39.446710] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17bd810) with pdu=0x2000190fef90 00:28:10.685 [2024-07-15 16:09:39.447034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.686 [2024-07-15 16:09:39.447053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:10.686 [2024-07-15 16:09:39.452252] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17bd810) with pdu=0x2000190fef90 00:28:10.686 [2024-07-15 16:09:39.452620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.686 [2024-07-15 16:09:39.452639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:10.686 [2024-07-15 16:09:39.459279] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17bd810) with pdu=0x2000190fef90 00:28:10.686 [2024-07-15 16:09:39.459718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.686 [2024-07-15 16:09:39.459738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:10.686 [2024-07-15 16:09:39.467584] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17bd810) with pdu=0x2000190fef90 00:28:10.686 [2024-07-15 16:09:39.467963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.686 [2024-07-15 16:09:39.467983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:10.686 [2024-07-15 16:09:39.474680] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17bd810) with pdu=0x2000190fef90 00:28:10.686 [2024-07-15 16:09:39.475119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.686 [2024-07-15 16:09:39.475139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:10.686 [2024-07-15 16:09:39.481471] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17bd810) with pdu=0x2000190fef90 00:28:10.686 [2024-07-15 16:09:39.481855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.686 [2024-07-15 16:09:39.481874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:10.686 [2024-07-15 16:09:39.488188] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17bd810) with pdu=0x2000190fef90 00:28:10.686 [2024-07-15 16:09:39.488523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2400 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:28:10.686 [2024-07-15 16:09:39.488543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:10.686 [2024-07-15 16:09:39.493668] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17bd810) with pdu=0x2000190fef90 00:28:10.686 [2024-07-15 16:09:39.493991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.686 [2024-07-15 16:09:39.494010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:10.686 [2024-07-15 16:09:39.498752] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17bd810) with pdu=0x2000190fef90 00:28:10.686 [2024-07-15 16:09:39.499095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.686 [2024-07-15 16:09:39.499115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:10.686 [2024-07-15 16:09:39.503909] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17bd810) with pdu=0x2000190fef90 00:28:10.686 [2024-07-15 16:09:39.504240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.686 [2024-07-15 16:09:39.504260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:10.686 [2024-07-15 16:09:39.508560] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17bd810) with pdu=0x2000190fef90 00:28:10.686 [2024-07-15 16:09:39.508881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.686 [2024-07-15 16:09:39.508900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:10.686 [2024-07-15 16:09:39.513541] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17bd810) with pdu=0x2000190fef90 00:28:10.686 [2024-07-15 16:09:39.513868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.686 [2024-07-15 16:09:39.513888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:10.686 [2024-07-15 16:09:39.518981] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17bd810) with pdu=0x2000190fef90 00:28:10.686 [2024-07-15 16:09:39.519307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.686 [2024-07-15 16:09:39.519326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:10.686 [2024-07-15 16:09:39.524990] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17bd810) with pdu=0x2000190fef90 00:28:10.686 [2024-07-15 16:09:39.525315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:15 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.686 [2024-07-15 16:09:39.525335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:10.686 [2024-07-15 16:09:39.530910] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17bd810) with pdu=0x2000190fef90 00:28:10.686 [2024-07-15 16:09:39.531244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.686 [2024-07-15 16:09:39.531263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:10.686 [2024-07-15 16:09:39.536312] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17bd810) with pdu=0x2000190fef90 00:28:10.686 [2024-07-15 16:09:39.536642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.686 [2024-07-15 16:09:39.536662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:10.686 [2024-07-15 16:09:39.542546] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17bd810) with pdu=0x2000190fef90 00:28:10.686 [2024-07-15 16:09:39.542940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.686 [2024-07-15 16:09:39.542959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:10.686 [2024-07-15 16:09:39.549928] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17bd810) with pdu=0x2000190fef90 00:28:10.686 [2024-07-15 16:09:39.550373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.686 [2024-07-15 16:09:39.550393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:10.686 [2024-07-15 16:09:39.557256] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17bd810) with pdu=0x2000190fef90 00:28:10.686 [2024-07-15 16:09:39.557633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.686 [2024-07-15 16:09:39.557652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:10.686 [2024-07-15 16:09:39.565475] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17bd810) with pdu=0x2000190fef90 00:28:10.686 [2024-07-15 16:09:39.565897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.686 [2024-07-15 16:09:39.565917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:10.686 [2024-07-15 16:09:39.573993] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17bd810) with pdu=0x2000190fef90 00:28:10.686 [2024-07-15 16:09:39.574389] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.686 [2024-07-15 16:09:39.574408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:10.686 [2024-07-15 16:09:39.582260] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17bd810) with pdu=0x2000190fef90 00:28:10.686 [2024-07-15 16:09:39.582743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.686 [2024-07-15 16:09:39.582762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:10.686 [2024-07-15 16:09:39.590051] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17bd810) with pdu=0x2000190fef90 00:28:10.686 [2024-07-15 16:09:39.590279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.686 [2024-07-15 16:09:39.590302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:10.686 [2024-07-15 16:09:39.598231] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17bd810) with pdu=0x2000190fef90 00:28:10.686 [2024-07-15 16:09:39.598642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.686 [2024-07-15 16:09:39.598662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:10.686 [2024-07-15 16:09:39.606084] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17bd810) with pdu=0x2000190fef90 00:28:10.686 [2024-07-15 16:09:39.606507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.686 [2024-07-15 16:09:39.606526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:10.686 [2024-07-15 16:09:39.614032] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17bd810) with pdu=0x2000190fef90 00:28:10.686 [2024-07-15 16:09:39.614520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.686 [2024-07-15 16:09:39.614541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:10.947 [2024-07-15 16:09:39.622097] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17bd810) with pdu=0x2000190fef90 00:28:10.947 [2024-07-15 16:09:39.622554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.947 [2024-07-15 16:09:39.622574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:10.947 [2024-07-15 16:09:39.630120] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17bd810) with pdu=0x2000190fef90 00:28:10.947 
00:28:10.947 [2024-07-15 16:09:39.630590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:10.947 [2024-07-15 16:09:39.630610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:10.947 [2024-07-15 16:09:39.637048] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17bd810) with pdu=0x2000190fef90
[... the same three-line sequence (a data_crc32_calc_done data digest error on tqpair=(0x17bd810), the offending WRITE, and its COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion) repeats continuously from 16:09:39.630590 through 16:09:40.340117; only the timestamps, LBAs, and sqhd values differ ...]
00:28:11.473 [2024-07-15 16:09:40.339835] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17bd810) with pdu=0x2000190fef90
00:28:11.473 [2024-07-15 16:09:40.340098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:11.473 [2024-07-15 16:09:40.340117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:11.473
00:28:11.473 Latency(us)
00:28:11.473 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:28:11.473 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072)
00:28:11.473 nvme0n1 : 2.00 5244.59 655.57 0.00 0.00 3046.30 1524.42 9175.04
00:28:11.473 ===================================================================================================================
00:28:11.473 Total : 5244.59 655.57 0.00 0.00 3046.30 1524.42 9175.04
00:28:11.473 0
=================================================================================================================== 00:28:11.473 Total : 5244.59 655.57 0.00 0.00 3046.30 1524.42 9175.04 00:28:11.473 0 00:28:11.473 16:09:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:28:11.473 16:09:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:28:11.473 16:09:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:28:11.473 | .driver_specific 00:28:11.473 | .nvme_error 00:28:11.473 | .status_code 00:28:11.473 | .command_transient_transport_error' 00:28:11.473 16:09:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:28:11.732 16:09:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 338 > 0 )) 00:28:11.732 16:09:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 3914424 00:28:11.732 16:09:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 3914424 ']' 00:28:11.732 16:09:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 3914424 00:28:11.732 16:09:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:28:11.732 16:09:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:28:11.732 16:09:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3914424 00:28:11.732 16:09:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:28:11.732 16:09:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:28:11.732 16:09:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3914424' 00:28:11.732 killing process with pid 3914424 00:28:11.732 16:09:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 3914424 00:28:11.732 Received shutdown signal, test time was about 2.000000 seconds 00:28:11.732 00:28:11.732 Latency(us) 00:28:11.732 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:11.732 =================================================================================================================== 00:28:11.732 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:11.732 16:09:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 3914424 00:28:11.991 16:09:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 3912305 00:28:11.991 16:09:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 3912305 ']' 00:28:11.991 16:09:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 3912305 00:28:11.991 16:09:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:28:11.991 16:09:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:28:11.991 16:09:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3912305 00:28:11.991 16:09:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_0 
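The (( 338 > 0 )) assertion above is the crux of the nvmf_digest_error case: bperf ran roughly 2 seconds of random 128 KiB writes with data-digest corruption injected, and each CRC32 data digest mismatch had to surface as a COMMAND TRANSIENT TRANSPORT ERROR rather than a hard I/O failure. get_transient_errcount reads that counter back out of the bdev iostats over bperf's RPC socket; a condensed sketch of the same query, using the rpc.py and /var/tmp/bperf.sock paths shown in the trace:

    errs=$(/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
               bdev_get_iostat -b nvme0n1 |
           jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error')
    (( errs > 0 ))    # 338 such errors were counted in this run

The two killprocess calls that follow are ordinary teardown: first bperf (pid 3914424), then the digest suite's long-running target (pid 3912305).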
00:28:11.991 16:09:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:28:11.991 16:09:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3912305' 00:28:11.991 killing process with pid 3912305 00:28:11.991 16:09:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 3912305 00:28:11.991 16:09:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 3912305 00:28:12.251 00:28:12.251 real 0m16.873s 00:28:12.251 user 0m32.464s 00:28:12.251 sys 0m4.314s 00:28:12.251 16:09:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1124 -- # xtrace_disable 00:28:12.251 16:09:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:12.251 ************************************ 00:28:12.251 END TEST nvmf_digest_error 00:28:12.251 ************************************ 00:28:12.251 16:09:41 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1142 -- # return 0 00:28:12.251 16:09:41 nvmf_tcp.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:28:12.251 16:09:41 nvmf_tcp.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:28:12.251 16:09:41 nvmf_tcp.nvmf_digest -- nvmf/common.sh@488 -- # nvmfcleanup 00:28:12.251 16:09:41 nvmf_tcp.nvmf_digest -- nvmf/common.sh@117 -- # sync 00:28:12.251 16:09:41 nvmf_tcp.nvmf_digest -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:28:12.251 16:09:41 nvmf_tcp.nvmf_digest -- nvmf/common.sh@120 -- # set +e 00:28:12.251 16:09:41 nvmf_tcp.nvmf_digest -- nvmf/common.sh@121 -- # for i in {1..20} 00:28:12.251 16:09:41 nvmf_tcp.nvmf_digest -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:28:12.251 rmmod nvme_tcp 00:28:12.251 rmmod nvme_fabrics 00:28:12.251 rmmod nvme_keyring 00:28:12.251 16:09:41 nvmf_tcp.nvmf_digest -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:28:12.251 16:09:41 nvmf_tcp.nvmf_digest -- nvmf/common.sh@124 -- # set -e 00:28:12.251 16:09:41 nvmf_tcp.nvmf_digest -- nvmf/common.sh@125 -- # return 0 00:28:12.251 16:09:41 nvmf_tcp.nvmf_digest -- nvmf/common.sh@489 -- # '[' -n 3912305 ']' 00:28:12.251 16:09:41 nvmf_tcp.nvmf_digest -- nvmf/common.sh@490 -- # killprocess 3912305 00:28:12.251 16:09:41 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@948 -- # '[' -z 3912305 ']' 00:28:12.251 16:09:41 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@952 -- # kill -0 3912305 00:28:12.251 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (3912305) - No such process 00:28:12.251 16:09:41 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@975 -- # echo 'Process with pid 3912305 is not found' 00:28:12.251 Process with pid 3912305 is not found 00:28:12.251 16:09:41 nvmf_tcp.nvmf_digest -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:28:12.251 16:09:41 nvmf_tcp.nvmf_digest -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:28:12.251 16:09:41 nvmf_tcp.nvmf_digest -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:28:12.251 16:09:41 nvmf_tcp.nvmf_digest -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:12.251 16:09:41 nvmf_tcp.nvmf_digest -- nvmf/common.sh@278 -- # remove_spdk_ns 00:28:12.251 16:09:41 nvmf_tcp.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:12.251 16:09:41 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:12.251 16:09:41 nvmf_tcp.nvmf_digest -- 
common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:14.785 16:09:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:28:14.785 00:28:14.786 real 0m41.401s 00:28:14.786 user 1m6.351s 00:28:14.786 sys 0m12.738s 00:28:14.786 16:09:43 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1124 -- # xtrace_disable 00:28:14.786 16:09:43 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:28:14.786 ************************************ 00:28:14.786 END TEST nvmf_digest 00:28:14.786 ************************************ 00:28:14.786 16:09:43 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:28:14.786 16:09:43 nvmf_tcp -- nvmf/nvmf.sh@111 -- # [[ 0 -eq 1 ]] 00:28:14.786 16:09:43 nvmf_tcp -- nvmf/nvmf.sh@116 -- # [[ 0 -eq 1 ]] 00:28:14.786 16:09:43 nvmf_tcp -- nvmf/nvmf.sh@121 -- # [[ phy == phy ]] 00:28:14.786 16:09:43 nvmf_tcp -- nvmf/nvmf.sh@122 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:28:14.786 16:09:43 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:28:14.786 16:09:43 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:14.786 16:09:43 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:14.786 ************************************ 00:28:14.786 START TEST nvmf_bdevperf 00:28:14.786 ************************************ 00:28:14.786 16:09:43 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:28:14.786 * Looking for test storage... 00:28:14.786 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:14.786 16:09:43 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:14.786 16:09:43 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s 00:28:14.786 16:09:43 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:14.786 16:09:43 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:14.786 16:09:43 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:14.786 16:09:43 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:14.786 16:09:43 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:14.786 16:09:43 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:14.786 16:09:43 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:14.786 16:09:43 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:14.786 16:09:43 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:14.786 16:09:43 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:14.786 16:09:43 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:28:14.786 16:09:43 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:28:14.786 16:09:43 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:14.786 16:09:43 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:14.786 16:09:43 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:14.786 16:09:43 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:14.786 16:09:43 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:14.786 16:09:43 nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:14.786 16:09:43 nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:14.786 16:09:43 nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:14.786 16:09:43 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:14.786 16:09:43 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:14.786 16:09:43 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:14.786 16:09:43 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@5 -- # export PATH 00:28:14.786 16:09:43 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:14.786 16:09:43 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@47 -- # : 0 00:28:14.786 16:09:43 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:14.786 16:09:43 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:14.786 16:09:43 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:14.786 16:09:43 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:14.786 16:09:43 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:28:14.786 16:09:43 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:14.786 16:09:43 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:14.786 16:09:43 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:14.786 16:09:43 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:28:14.786 16:09:43 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:28:14.786 16:09:43 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:28:14.786 16:09:43 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:28:14.786 16:09:43 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:14.786 16:09:43 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@448 -- # prepare_net_devs 00:28:14.786 16:09:43 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:28:14.786 16:09:43 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:28:14.786 16:09:43 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:14.786 16:09:43 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:14.786 16:09:43 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:14.786 16:09:43 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:28:14.786 16:09:43 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:28:14.786 16:09:43 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@285 -- # xtrace_disable 00:28:14.786 16:09:43 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:20.146 16:09:48 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:20.146 16:09:48 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@291 -- # pci_devs=() 00:28:20.146 16:09:48 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@291 -- # local -a pci_devs 00:28:20.146 16:09:48 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@292 -- # pci_net_devs=() 00:28:20.146 16:09:48 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:28:20.146 16:09:48 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@293 -- # pci_drivers=() 00:28:20.146 16:09:48 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@293 -- # local -A pci_drivers 00:28:20.146 16:09:48 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@295 -- # net_devs=() 00:28:20.146 16:09:48 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@295 -- # local -ga net_devs 00:28:20.146 16:09:48 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@296 -- # e810=() 00:28:20.146 16:09:48 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@296 -- # local -ga e810 00:28:20.146 16:09:48 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@297 -- # x722=() 00:28:20.146 16:09:48 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@297 -- # local -ga x722 00:28:20.146 16:09:48 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@298 -- # mlx=() 00:28:20.146 16:09:48 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@298 -- # local -ga mlx 00:28:20.146 16:09:48 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:20.146 16:09:48 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:20.146 16:09:48 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:20.146 16:09:48 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:20.146 16:09:48 nvmf_tcp.nvmf_bdevperf 
-- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:20.146 16:09:48 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:20.147 16:09:48 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:20.147 16:09:48 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:20.147 16:09:48 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:20.147 16:09:48 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:20.147 16:09:48 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:20.147 16:09:48 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:28:20.147 16:09:48 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:28:20.147 16:09:48 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:28:20.147 16:09:48 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:28:20.147 16:09:48 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:28:20.147 16:09:48 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:28:20.147 16:09:48 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:20.147 16:09:48 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:28:20.147 Found 0000:86:00.0 (0x8086 - 0x159b) 00:28:20.147 16:09:48 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:20.147 16:09:48 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:20.147 16:09:48 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:20.147 16:09:48 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:20.147 16:09:48 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:20.147 16:09:48 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:20.147 16:09:48 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:28:20.147 Found 0000:86:00.1 (0x8086 - 0x159b) 00:28:20.147 16:09:48 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:20.147 16:09:48 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:20.147 16:09:48 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:20.147 16:09:48 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:20.147 16:09:48 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:20.147 16:09:48 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:28:20.147 16:09:48 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:28:20.147 16:09:48 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:28:20.147 16:09:48 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:20.147 16:09:48 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:20.147 16:09:48 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:20.147 16:09:48 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:20.147 16:09:48 nvmf_tcp.nvmf_bdevperf -- 
nvmf/common.sh@390 -- # [[ up == up ]] 00:28:20.147 16:09:48 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:20.147 16:09:48 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:20.147 16:09:48 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:28:20.147 Found net devices under 0000:86:00.0: cvl_0_0 00:28:20.147 16:09:48 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:20.147 16:09:48 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:20.147 16:09:48 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:20.147 16:09:48 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:20.147 16:09:48 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:20.147 16:09:48 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:20.147 16:09:48 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:20.147 16:09:48 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:20.147 16:09:48 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:28:20.147 Found net devices under 0000:86:00.1: cvl_0_1 00:28:20.147 16:09:48 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:20.147 16:09:48 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:28:20.147 16:09:48 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # is_hw=yes 00:28:20.147 16:09:48 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:28:20.147 16:09:48 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:28:20.147 16:09:48 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:28:20.147 16:09:48 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:20.147 16:09:48 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:20.147 16:09:48 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:20.147 16:09:48 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:28:20.147 16:09:48 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:20.147 16:09:48 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:20.147 16:09:48 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:28:20.147 16:09:48 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:20.147 16:09:48 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:20.147 16:09:48 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:28:20.147 16:09:48 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:28:20.147 16:09:48 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:28:20.147 16:09:48 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:20.147 16:09:48 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:20.147 16:09:48 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:20.147 
16:09:48 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up
00:28:20.147 16:09:48 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:28:20.147 16:09:48 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:28:20.147 16:09:48 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:28:20.147 16:09:48 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:28:20.147 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:28:20.147 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.151 ms
00:28:20.147
00:28:20.147 --- 10.0.0.2 ping statistics ---
00:28:20.147 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:28:20.147 rtt min/avg/max/mdev = 0.151/0.151/0.151/0.000 ms
00:28:20.147 16:09:48 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:28:20.147 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:28:20.147 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.087 ms
00:28:20.147
00:28:20.147 --- 10.0.0.1 ping statistics ---
00:28:20.147 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:28:20.147 rtt min/avg/max/mdev = 0.087/0.087/0.087/0.000 ms
00:28:20.147 16:09:48 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:28:20.147 16:09:48 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@422 -- # return 0
00:28:20.147 16:09:48 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:28:20.147 16:09:48 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:28:20.147 16:09:48 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:28:20.147 16:09:48 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:28:20.147 16:09:48 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:28:20.147 16:09:48 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:28:20.147 16:09:48 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:28:20.147 16:09:48 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init
00:28:20.147 16:09:48 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE
00:28:20.147 16:09:48 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:28:20.147 16:09:48 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@722 -- # xtrace_disable
00:28:20.147 16:09:48 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:28:20.147 16:09:48 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=3918435
00:28:20.147 16:09:48 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 3918435
00:28:20.147 16:09:48 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE
00:28:20.147 16:09:48 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@829 -- # '[' -z 3918435 ']'
00:28:20.147 16:09:48 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:28:20.147 16:09:48 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@834 -- # local max_retries=100
00:28:20.147 16:09:48 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
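Everything from the PCI scan above down to the two pings is nvmf_tcp_init building a two-namespace rig out of the pair of E810 ports it discovered: the target-side port cvl_0_0 moves into the cvl_0_0_ns_spdk namespace as 10.0.0.2, the initiator-side port cvl_0_1 stays in the root namespace as 10.0.0.1, and the pings verify reachability in both directions before any NVMe/TCP traffic starts. A minimal sketch of that wiring, using the interface and namespace names from the trace:

    ip netns add cvl_0_0_ns_spdk                           # target gets its own netns
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                    # initiator side, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                     # root ns -> target ns

This is also why NVMF_APP gets prefixed with NVMF_TARGET_NS_CMD: the nvmf_tgt binary itself has to run inside the namespace (ip netns exec cvl_0_0_ns_spdk nvmf_tgt ...), exactly as the @480 trace line shows.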
00:28:20.147 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:20.147 16:09:48 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:20.147 16:09:48 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:20.147 [2024-07-15 16:09:48.767470] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 00:28:20.147 [2024-07-15 16:09:48.767511] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:20.147 EAL: No free 2048 kB hugepages reported on node 1 00:28:20.147 [2024-07-15 16:09:48.826735] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:28:20.147 [2024-07-15 16:09:48.904846] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:20.147 [2024-07-15 16:09:48.904887] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:20.147 [2024-07-15 16:09:48.904895] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:20.147 [2024-07-15 16:09:48.904902] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:20.147 [2024-07-15 16:09:48.904907] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:20.147 [2024-07-15 16:09:48.905005] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:28:20.147 [2024-07-15 16:09:48.905027] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:28:20.147 [2024-07-15 16:09:48.905031] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:28:20.714 16:09:49 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:20.714 16:09:49 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@862 -- # return 0 00:28:20.714 16:09:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:28:20.714 16:09:49 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@728 -- # xtrace_disable 00:28:20.714 16:09:49 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:20.714 16:09:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:20.714 16:09:49 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:20.714 16:09:49 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:20.714 16:09:49 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:20.714 [2024-07-15 16:09:49.620608] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:20.714 16:09:49 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:20.714 16:09:49 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:28:20.714 16:09:49 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:20.714 16:09:49 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:20.972 Malloc0 00:28:20.972 16:09:49 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:20.972 16:09:49 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s 
SPDK00000000000001
00:28:20.972 16:09:49 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable
00:28:20.972 16:09:49 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:28:20.972 16:09:49 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:28:20.972 16:09:49 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:28:20.972 16:09:49 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable
00:28:20.972 16:09:49 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:28:20.972 16:09:49 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:28:20.972 16:09:49 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:28:20.972 16:09:49 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable
00:28:20.972 16:09:49 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:28:20.972 [2024-07-15 16:09:49.675347] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:28:20.972 16:09:49 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:28:20.972 16:09:49 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1
00:28:20.972 16:09:49 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json
00:28:20.972 16:09:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=()
00:28:20.972 16:09:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config
00:28:20.972 16:09:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}"
00:28:20.972 16:09:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF
00:28:20.972 {
00:28:20.972 "params": {
00:28:20.972 "name": "Nvme$subsystem",
00:28:20.972 "trtype": "$TEST_TRANSPORT",
00:28:20.972 "traddr": "$NVMF_FIRST_TARGET_IP",
00:28:20.972 "adrfam": "ipv4",
00:28:20.972 "trsvcid": "$NVMF_PORT",
00:28:20.972 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:28:20.972 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:28:20.972 "hdgst": ${hdgst:-false},
00:28:20.972 "ddgst": ${ddgst:-false}
00:28:20.972 },
00:28:20.972 "method": "bdev_nvme_attach_controller"
00:28:20.972 }
00:28:20.972 EOF
00:28:20.972 )")
00:28:20.972 16:09:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # cat
00:28:20.972 16:09:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq .
00:28:20.972 16:09:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=,
00:28:20.972 16:09:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{
00:28:20.972 "params": {
00:28:20.972 "name": "Nvme1",
00:28:20.972 "trtype": "tcp",
00:28:20.972 "traddr": "10.0.0.2",
00:28:20.972 "adrfam": "ipv4",
00:28:20.972 "trsvcid": "4420",
00:28:20.972 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:28:20.972 "hostnqn": "nqn.2016-06.io.spdk:host1",
00:28:20.972 "hdgst": false,
00:28:20.972 "ddgst": false
00:28:20.972 },
00:28:20.972 "method": "bdev_nvme_attach_controller"
00:28:20.972 }'
00:28:20.972 [2024-07-15 16:09:49.725667] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization...
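That completes target-side provisioning for the bdevperf run; the five rpc_cmd invocations traced above are the entire bring-up. A condensed sketch of the same sequence against the target's default RPC socket (/var/tmp/spdk.sock, per waitforlisten), with the long workspace prefix shortened to scripts/rpc.py:

    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0        # 64 MiB RAM bdev, 512 B blocks
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

rpc_cmd is the test harness's wrapper around rpc.py; the arguments map across one-to-one.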
00:28:20.972 [2024-07-15 16:09:49.725708] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3918680 ]
00:28:20.972 EAL: No free 2048 kB hugepages reported on node 1
00:28:20.972 [2024-07-15 16:09:49.780610] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:28:20.972 [2024-07-15 16:09:49.854084] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:28:21.230 Running I/O for 1 seconds...
00:28:22.608
00:28:22.608 Latency(us)
00:28:22.608 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:28:22.608 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:28:22.608 Verification LBA range: start 0x0 length 0x4000
00:28:22.608 Nvme1n1 : 1.01 10948.17 42.77 0.00 0.00 11646.31 2350.75 15272.74
00:28:22.608 ===================================================================================================================
00:28:22.608 Total : 10948.17 42.77 0.00 0.00 11646.31 2350.75 15272.74
00:28:22.608
00:28:22.608 16:09:51 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=3918911
00:28:22.608 16:09:51 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3
00:28:22.608 16:09:51 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f
00:28:22.608 16:09:51 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json
00:28:22.608 16:09:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=()
00:28:22.608 16:09:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config
00:28:22.608 16:09:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}"
00:28:22.608 16:09:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF
00:28:22.608 {
00:28:22.608 "params": {
00:28:22.608 "name": "Nvme$subsystem",
00:28:22.608 "trtype": "$TEST_TRANSPORT",
00:28:22.608 "traddr": "$NVMF_FIRST_TARGET_IP",
00:28:22.608 "adrfam": "ipv4",
00:28:22.608 "trsvcid": "$NVMF_PORT",
00:28:22.608 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:28:22.608 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:28:22.608 "hdgst": ${hdgst:-false},
00:28:22.608 "ddgst": ${ddgst:-false}
00:28:22.608 },
00:28:22.608 "method": "bdev_nvme_attach_controller"
00:28:22.608 }
00:28:22.608 EOF
00:28:22.608 )")
00:28:22.608 16:09:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # cat
00:28:22.608 16:09:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq .
00:28:22.608 16:09:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=,
00:28:22.608 16:09:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{
00:28:22.608 "params": {
00:28:22.608 "name": "Nvme1",
00:28:22.608 "trtype": "tcp",
00:28:22.608 "traddr": "10.0.0.2",
00:28:22.608 "adrfam": "ipv4",
00:28:22.608 "trsvcid": "4420",
00:28:22.608 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:28:22.608 "hostnqn": "nqn.2016-06.io.spdk:host1",
00:28:22.608 "hdgst": false,
00:28:22.608 "ddgst": false
00:28:22.608 },
00:28:22.608 "method": "bdev_nvme_attach_controller"
00:28:22.608 }'
00:28:22.608 [2024-07-15 16:09:51.376374] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization...
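The jq-pretty-printed JSON just above is bdevperf's entire configuration: a single bdev_nvme_attach_controller entry pointing at the subsystem created a moment ago, with both header and data digests off. The first run (--json /dev/fd/62 ... -t 1) was the clean sanity pass, finishing at about 10.9k IOPS; the second (--json /dev/fd/63 ... -t 15 -f) is the failure-injection half, launched in the background so the harness can pull the target out from under it. A sketch of that choreography, with $nvmfpid standing in for the target pid (3918435 here) and long paths elided:

    bdevperf --json <(gen_nvmf_target_json) -q 128 -o 4096 -w verify -t 15 -f &
    bdevperfpid=$!
    sleep 3
    kill -9 "$nvmfpid"    # SIGKILL the target mid-run; the host sees its queue aborted

The rest of the capture is the host side reacting to that SIGKILL.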
00:28:22.608 [2024-07-15 16:09:51.376426] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3918911 ] 00:28:22.608 EAL: No free 2048 kB hugepages reported on node 1 00:28:22.608 [2024-07-15 16:09:51.431340] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:22.608 [2024-07-15 16:09:51.504090] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:28:22.867 Running I/O for 15 seconds... 00:28:26.169 16:09:54 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 3918435 00:28:26.169 16:09:54 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3 00:28:26.169 [2024-07-15 16:09:54.352271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:101688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.169 [2024-07-15 16:09:54.352306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:26.169 [2024-07-15 16:09:54.352324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:101696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.169 [2024-07-15 16:09:54.352332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:26.169 [2024-07-15 16:09:54.352342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:101704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.169 [2024-07-15 16:09:54.352350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:26.169 [2024-07-15 16:09:54.352359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:101712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.169 [2024-07-15 16:09:54.352366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:26.169 [2024-07-15 16:09:54.352376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:101720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.169 [2024-07-15 16:09:54.352383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:26.169 [2024-07-15 16:09:54.352392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:101728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.169 [2024-07-15 16:09:54.352400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:26.169 [2024-07-15 16:09:54.352408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:101736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.169 [2024-07-15 16:09:54.352415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:26.169 [2024-07-15 16:09:54.352424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:101744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.169 [2024-07-15 16:09:54.352431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:26.169 [2024-07-15 16:09:54.352440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:101752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.169 [2024-07-15 16:09:54.352452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:26.169 [2024-07-15 16:09:54.352461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:101760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.169 [2024-07-15 16:09:54.352467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:26.169 [2024-07-15 16:09:54.352475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:101768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.169 [2024-07-15 16:09:54.352482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:26.169 [2024-07-15 16:09:54.352492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:101776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.169 [2024-07-15 16:09:54.352500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:26.169 [2024-07-15 16:09:54.352510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:101784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.169 [2024-07-15 16:09:54.352518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:26.169 [2024-07-15 16:09:54.352526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:101792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.169 [2024-07-15 16:09:54.352534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:26.169 [2024-07-15 16:09:54.352542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:101800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.169 [2024-07-15 16:09:54.352553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:26.169 [2024-07-15 16:09:54.352561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:101808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.169 [2024-07-15 16:09:54.352571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:26.169 [2024-07-15 16:09:54.352580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:101816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.169 [2024-07-15 16:09:54.352588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:26.169 [2024-07-15 16:09:54.352598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:101824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.169 [2024-07-15 16:09:54.352606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
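From here to the end of the capture the pattern is uniform: every WRITE still outstanding when the target died completes back with ABORTED - SQ DELETION (generic status 00/08), one completion per queued command, with only the cid and lba fields varying. No digest machinery is involved anymore; this is simply the host unwinding its submission queue after the SIGKILL. When triaging a console log like this, counting the aborts is usually more informative than reading them, e.g.:

    grep -c 'ABORTED - SQ DELETION' console.log    # console.log stands in for this captured output

which gives the size of the abort flood at a glance.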
00:28:26.169 [2024-07-15 16:09:54.352615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:101832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.169 [2024-07-15 16:09:54.352622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:26.169 [2024-07-15 16:09:54.352630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:101840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.169 [2024-07-15 16:09:54.352640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:26.169 [2024-07-15 16:09:54.352649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:101848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.169 [2024-07-15 16:09:54.352656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:26.169 [2024-07-15 16:09:54.352668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:101856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.169 [2024-07-15 16:09:54.352674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:26.169 [2024-07-15 16:09:54.352686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:101864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.169 [2024-07-15 16:09:54.352693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:26.169 [2024-07-15 16:09:54.352707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:101872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.169 [2024-07-15 16:09:54.352715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:26.169 [2024-07-15 16:09:54.352727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:101880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.169 [2024-07-15 16:09:54.352735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:26.169 [2024-07-15 16:09:54.352744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:101888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.169 [2024-07-15 16:09:54.352754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:26.169 [2024-07-15 16:09:54.352764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:101896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.169 [2024-07-15 16:09:54.352772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:26.169 [2024-07-15 16:09:54.352781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:101904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.169 [2024-07-15 16:09:54.352788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:26.169 [2024-07-15 16:09:54.352796] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:101912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.170 [2024-07-15 16:09:54.352804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:26.170 [2024-07-15 16:09:54.352817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:101920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.170 [2024-07-15 16:09:54.352824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:26.170 [2024-07-15 16:09:54.352832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:101928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.170 [2024-07-15 16:09:54.352839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:26.170 [2024-07-15 16:09:54.352849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:101936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.170 [2024-07-15 16:09:54.352857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:26.170 [2024-07-15 16:09:54.352869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:101944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.170 [2024-07-15 16:09:54.352876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:26.170 [2024-07-15 16:09:54.352889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:101952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.170 [2024-07-15 16:09:54.352899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:26.170 [2024-07-15 16:09:54.352909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:101960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.170 [2024-07-15 16:09:54.352919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:26.170 [2024-07-15 16:09:54.352929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:101968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.170 [2024-07-15 16:09:54.352938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:26.170 [2024-07-15 16:09:54.352949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:101976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.170 [2024-07-15 16:09:54.352959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:26.170 [2024-07-15 16:09:54.352972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:101984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.170 [2024-07-15 16:09:54.352981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:26.170 [2024-07-15 16:09:54.352993] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:101992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:26.170 [2024-07-15 16:09:54.353003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:26.170 [2024-07-15 16:09:54.353015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:102000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:26.170 [2024-07-15 16:09:54.353027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:26.170 [2024-07-15 16:09:54.353039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:102008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:26.170 [2024-07-15 16:09:54.353047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:26.170 [2024-07-15 16:09:54.353057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:102016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:26.170 [2024-07-15 16:09:54.353067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:26.170 [2024-07-15 16:09:54.353079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:102024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:26.170 [2024-07-15 16:09:54.353090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:26.170 [2024-07-15 16:09:54.353101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:102032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:26.170 [2024-07-15 16:09:54.353111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:26.170 [2024-07-15 16:09:54.353119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:102040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:26.170 [2024-07-15 16:09:54.353126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:26.170 [2024-07-15 16:09:54.353137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:102048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:26.170 [2024-07-15 16:09:54.353144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:26.170 [2024-07-15 16:09:54.353154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:102056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:26.170 [2024-07-15 16:09:54.353161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:26.170 [2024-07-15 16:09:54.353169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:102064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:26.170 [2024-07-15 16:09:54.353176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:26.170 [2024-07-15 16:09:54.353183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:102072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:26.170 [2024-07-15 16:09:54.353189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:26.170 [2024-07-15 16:09:54.353198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:102080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:26.170 [2024-07-15 16:09:54.353205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:26.170 [2024-07-15 16:09:54.353214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:102088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:26.170 [2024-07-15 16:09:54.353221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:26.170 [2024-07-15 16:09:54.353233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:102096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:26.170 [2024-07-15 16:09:54.353240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:26.170 [2024-07-15 16:09:54.353248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:102104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:26.170 [2024-07-15 16:09:54.353254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:26.170 [2024-07-15 16:09:54.353263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:102112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:26.170 [2024-07-15 16:09:54.353269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:26.170 [2024-07-15 16:09:54.353278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:102120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:26.170 [2024-07-15 16:09:54.353284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:26.170 [2024-07-15 16:09:54.353292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:102128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:26.170 [2024-07-15 16:09:54.353300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:26.170 [2024-07-15 16:09:54.353308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:102136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:26.170 [2024-07-15 16:09:54.353314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:26.170 [2024-07-15 16:09:54.353322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:102144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:26.170 [2024-07-15 16:09:54.353329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:26.170 [2024-07-15 16:09:54.353338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:102152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:26.170 [2024-07-15 16:09:54.353344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:26.170 [2024-07-15 16:09:54.353355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:102160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:26.170 [2024-07-15 16:09:54.353362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:26.170 [2024-07-15 16:09:54.353370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:102168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:26.170 [2024-07-15 16:09:54.353377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:26.170 [2024-07-15 16:09:54.353385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:102176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:26.170 [2024-07-15 16:09:54.353391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:26.170 [2024-07-15 16:09:54.353399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:102184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:26.170 [2024-07-15 16:09:54.353407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:26.170 [2024-07-15 16:09:54.353415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:102192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:26.170 [2024-07-15 16:09:54.353421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:26.170 [2024-07-15 16:09:54.353430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:102200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:26.170 [2024-07-15 16:09:54.353436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:26.170 [2024-07-15 16:09:54.353445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:102208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:26.170 [2024-07-15 16:09:54.353451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:26.170 [2024-07-15 16:09:54.353460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:102216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:26.170 [2024-07-15 16:09:54.353467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:26.170 [2024-07-15 16:09:54.353475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:102224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:26.170 [2024-07-15 16:09:54.353481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:26.170 [2024-07-15 16:09:54.353490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:102232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:26.170 [2024-07-15 16:09:54.353496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:26.170 [2024-07-15 16:09:54.353504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:102240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:26.170 [2024-07-15 16:09:54.353511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:26.170 [2024-07-15 16:09:54.353519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:102248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:26.170 [2024-07-15 16:09:54.353525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:26.171 [2024-07-15 16:09:54.353533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:102256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:26.171 [2024-07-15 16:09:54.353542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:26.171 [2024-07-15 16:09:54.353549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:102264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:26.171 [2024-07-15 16:09:54.353556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:26.171 [2024-07-15 16:09:54.353565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:102272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:26.171 [2024-07-15 16:09:54.353572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:26.171 [2024-07-15 16:09:54.353581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:102280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:26.171 [2024-07-15 16:09:54.353587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:26.171 [2024-07-15 16:09:54.353595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:102288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:26.171 [2024-07-15 16:09:54.353602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:26.171 [2024-07-15 16:09:54.353610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:102296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:26.171 [2024-07-15 16:09:54.353617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:26.171 [2024-07-15 16:09:54.353626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:102304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:26.171 [2024-07-15 16:09:54.353632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:26.171 [2024-07-15 16:09:54.353640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:102312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:26.171 [2024-07-15 16:09:54.353646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:26.171 [2024-07-15 16:09:54.353655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:101304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:26.171 [2024-07-15 16:09:54.353662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:26.171 [2024-07-15 16:09:54.353671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:101312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:26.171 [2024-07-15 16:09:54.353678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:26.171 [2024-07-15 16:09:54.353686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:101320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:26.171 [2024-07-15 16:09:54.353693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:26.171 [2024-07-15 16:09:54.353701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:101328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:26.171 [2024-07-15 16:09:54.353707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:26.171 [2024-07-15 16:09:54.353716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:101336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:26.171 [2024-07-15 16:09:54.353723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:26.171 [2024-07-15 16:09:54.353733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:101344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:26.171 [2024-07-15 16:09:54.353740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:26.171 [2024-07-15 16:09:54.353748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:101352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:26.171 [2024-07-15 16:09:54.353754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:26.171 [2024-07-15 16:09:54.353763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:101360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:26.171 [2024-07-15 16:09:54.353769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:26.171 [2024-07-15 16:09:54.353778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:101368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:26.171 [2024-07-15 16:09:54.353785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:26.171 [2024-07-15 16:09:54.353793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:101376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:26.171 [2024-07-15 16:09:54.353799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:26.171 [2024-07-15 16:09:54.353808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:101384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:26.171 [2024-07-15 16:09:54.353814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:26.171 [2024-07-15 16:09:54.353822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:101392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:26.171 [2024-07-15 16:09:54.353829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:26.171 [2024-07-15 16:09:54.353837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:101400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:26.171 [2024-07-15 16:09:54.353844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:26.171 [2024-07-15 16:09:54.353852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:101408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:26.171 [2024-07-15 16:09:54.353858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:26.171 [2024-07-15 16:09:54.353866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:101416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:26.171 [2024-07-15 16:09:54.353874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:26.171 [2024-07-15 16:09:54.353882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:102320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:26.171 [2024-07-15 16:09:54.353889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:26.171 [2024-07-15 16:09:54.353897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:101424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:26.171 [2024-07-15 16:09:54.353903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:26.171 [2024-07-15 16:09:54.353911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:101432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:26.171 [2024-07-15 16:09:54.353919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:26.171 [2024-07-15 16:09:54.353927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:101440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:26.171 [2024-07-15 16:09:54.353935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:26.171 [2024-07-15 16:09:54.353943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:101448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:26.171 [2024-07-15 16:09:54.353949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:26.171 [2024-07-15 16:09:54.353958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:101456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:26.171 [2024-07-15 16:09:54.353964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:26.171 [2024-07-15 16:09:54.353972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:101464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:26.171 [2024-07-15 16:09:54.353980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:26.171 [2024-07-15 16:09:54.353988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:101472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:26.171 [2024-07-15 16:09:54.353995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:26.171 [2024-07-15 16:09:54.354003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:101480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:26.171 [2024-07-15 16:09:54.354009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:26.171 [2024-07-15 16:09:54.354017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:101488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:26.171 [2024-07-15 16:09:54.354023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:26.171 [2024-07-15 16:09:54.354032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:101496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:26.171 [2024-07-15 16:09:54.354039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:26.171 [2024-07-15 16:09:54.354047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:101504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:26.171 [2024-07-15 16:09:54.354053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:26.171 [2024-07-15 16:09:54.354061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:101512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:26.171 [2024-07-15 16:09:54.354067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:26.171 [2024-07-15 16:09:54.354076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:101520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:26.171 [2024-07-15 16:09:54.354083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:26.171 [2024-07-15 16:09:54.354091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:101528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:26.171 [2024-07-15 16:09:54.354098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:26.171 [2024-07-15 16:09:54.354110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:101536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:26.171 [2024-07-15 16:09:54.354117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:26.171 [2024-07-15 16:09:54.354125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:101544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:26.171 [2024-07-15 16:09:54.354131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:26.171 [2024-07-15 16:09:54.354140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:101552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:26.171 [2024-07-15 16:09:54.354147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:26.171 [2024-07-15 16:09:54.354155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:101560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:26.171 [2024-07-15 16:09:54.354161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:26.171 [2024-07-15 16:09:54.354172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:101568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:26.171 [2024-07-15 16:09:54.354178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:26.172 [2024-07-15 16:09:54.354187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:101576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:26.172 [2024-07-15 16:09:54.354194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:26.172 [2024-07-15 16:09:54.354202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:101584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:26.172 [2024-07-15 16:09:54.354209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:26.172 [2024-07-15 16:09:54.354217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:101592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:26.172 [2024-07-15 16:09:54.354227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:26.172 [2024-07-15 16:09:54.354235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:101600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:26.172 [2024-07-15 16:09:54.354242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:26.172 [2024-07-15 16:09:54.354250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:101608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:26.172 [2024-07-15 16:09:54.354257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:26.172 [2024-07-15 16:09:54.354265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:101616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:26.172 [2024-07-15 16:09:54.354271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:26.172 [2024-07-15 16:09:54.354279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:101624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:26.172 [2024-07-15 16:09:54.354286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:26.172 [2024-07-15 16:09:54.354294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:101632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:26.172 [2024-07-15 16:09:54.354303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:26.172 [2024-07-15 16:09:54.354311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:101640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:26.172 [2024-07-15 16:09:54.354317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:26.172 [2024-07-15 16:09:54.354325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:101648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:26.172 [2024-07-15 16:09:54.354331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:26.172 [2024-07-15 16:09:54.354339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:101656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:26.172 [2024-07-15 16:09:54.354345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:26.172 [2024-07-15 16:09:54.354353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:101664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:26.172 [2024-07-15 16:09:54.354361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:26.172 [2024-07-15 16:09:54.354369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:101672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:26.172 [2024-07-15 16:09:54.354375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:26.172 [2024-07-15 16:09:54.354382] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16d1c70 is same with the state(5) to be set
00:28:26.172 [2024-07-15 16:09:54.354390] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:28:26.172 [2024-07-15 16:09:54.354395] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:28:26.172 [2024-07-15 16:09:54.354401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:101680 len:8 PRP1 0x0 PRP2 0x0
00:28:26.172 [2024-07-15 16:09:54.354409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:26.172 [2024-07-15 16:09:54.354451] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x16d1c70 was disconnected and freed. reset controller.
00:28:26.172 [2024-07-15 16:09:54.357283] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:26.172 [2024-07-15 16:09:54.357333] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0980 (9): Bad file descriptor
00:28:26.172 [2024-07-15 16:09:54.357913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:26.172 [2024-07-15 16:09:54.357928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0980 with addr=10.0.0.2, port=4420
00:28:26.172 [2024-07-15 16:09:54.357936] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0980 is same with the state(5) to be set
00:28:26.172 [2024-07-15 16:09:54.358114] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0980 (9): Bad file descriptor
00:28:26.172 [2024-07-15 16:09:54.358301] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:26.172 [2024-07-15 16:09:54.358310] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:26.172 [2024-07-15 16:09:54.358318] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:26.172 [2024-07-15 16:09:54.361152] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:26.172 [2024-07-15 16:09:54.370520] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:26.172 [2024-07-15 16:09:54.370994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:26.172 [2024-07-15 16:09:54.371038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0980 with addr=10.0.0.2, port=4420
00:28:26.172 [2024-07-15 16:09:54.371062] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0980 is same with the state(5) to be set
00:28:26.172 [2024-07-15 16:09:54.371580] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0980 (9): Bad file descriptor
00:28:26.172 [2024-07-15 16:09:54.371756] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:26.172 [2024-07-15 16:09:54.371766] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:26.172 [2024-07-15 16:09:54.371772] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:26.172 [2024-07-15 16:09:54.374535] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:26.172 [2024-07-15 16:09:54.383434] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:26.172 [2024-07-15 16:09:54.383812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:26.172 [2024-07-15 16:09:54.383828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0980 with addr=10.0.0.2, port=4420
00:28:26.172 [2024-07-15 16:09:54.383835] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0980 is same with the state(5) to be set
00:28:26.172 [2024-07-15 16:09:54.383999] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0980 (9): Bad file descriptor
00:28:26.172 [2024-07-15 16:09:54.384162] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:26.172 [2024-07-15 16:09:54.384171] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:26.172 [2024-07-15 16:09:54.384177] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:26.172 [2024-07-15 16:09:54.386782] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:26.172 [2024-07-15 16:09:54.396291] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:26.172 [2024-07-15 16:09:54.396754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:26.172 [2024-07-15 16:09:54.396801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0980 with addr=10.0.0.2, port=4420
00:28:26.172 [2024-07-15 16:09:54.396824] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0980 is same with the state(5) to be set
00:28:26.172 [2024-07-15 16:09:54.397398] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0980 (9): Bad file descriptor
00:28:26.172 [2024-07-15 16:09:54.397654] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:26.172 [2024-07-15 16:09:54.397667] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:26.172 [2024-07-15 16:09:54.397676] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:26.172 [2024-07-15 16:09:54.401744] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:26.172 [2024-07-15 16:09:54.409694] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:26.172 [2024-07-15 16:09:54.410138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:26.172 [2024-07-15 16:09:54.410154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0980 with addr=10.0.0.2, port=4420
00:28:26.172 [2024-07-15 16:09:54.410162] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0980 is same with the state(5) to be set
00:28:26.172 [2024-07-15 16:09:54.410337] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0980 (9): Bad file descriptor
00:28:26.172 [2024-07-15 16:09:54.410505] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:26.172 [2024-07-15 16:09:54.410514] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:26.172 [2024-07-15 16:09:54.410520] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:26.172 [2024-07-15 16:09:54.413185] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:26.172 [2024-07-15 16:09:54.422668] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:26.172 [2024-07-15 16:09:54.423131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:26.172 [2024-07-15 16:09:54.423172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0980 with addr=10.0.0.2, port=4420
00:28:26.172 [2024-07-15 16:09:54.423194] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0980 is same with the state(5) to be set
00:28:26.172 [2024-07-15 16:09:54.423791] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0980 (9): Bad file descriptor
00:28:26.172 [2024-07-15 16:09:54.424138] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:26.172 [2024-07-15 16:09:54.424147] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:26.172 [2024-07-15 16:09:54.424153] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:26.172 [2024-07-15 16:09:54.426759] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:26.172 [2024-07-15 16:09:54.435592] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:26.172 [2024-07-15 16:09:54.436012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:26.172 [2024-07-15 16:09:54.436064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0980 with addr=10.0.0.2, port=4420
00:28:26.172 [2024-07-15 16:09:54.436086] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0980 is same with the state(5) to be set
00:28:26.172 [2024-07-15 16:09:54.436683] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0980 (9): Bad file descriptor
00:28:26.172 [2024-07-15 16:09:54.436927] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:26.172 [2024-07-15 16:09:54.436936] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:26.173 [2024-07-15 16:09:54.436942] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:26.173 [2024-07-15 16:09:54.439610] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:26.173 [2024-07-15 16:09:54.448520] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:26.173 [2024-07-15 16:09:54.448983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:26.173 [2024-07-15 16:09:54.449025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0980 with addr=10.0.0.2, port=4420
00:28:26.173 [2024-07-15 16:09:54.449046] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0980 is same with the state(5) to be set
00:28:26.173 [2024-07-15 16:09:54.449639] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0980 (9): Bad file descriptor
00:28:26.173 [2024-07-15 16:09:54.450243] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:26.173 [2024-07-15 16:09:54.450252] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:26.173 [2024-07-15 16:09:54.450261] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:26.173 [2024-07-15 16:09:54.452860] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:26.173 [2024-07-15 16:09:54.461446] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:26.173 [2024-07-15 16:09:54.461842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:26.173 [2024-07-15 16:09:54.461884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0980 with addr=10.0.0.2, port=4420
00:28:26.173 [2024-07-15 16:09:54.461906] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0980 is same with the state(5) to be set
00:28:26.173 [2024-07-15 16:09:54.462502] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0980 (9): Bad file descriptor
00:28:26.173 [2024-07-15 16:09:54.463071] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:26.173 [2024-07-15 16:09:54.463080] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:26.173 [2024-07-15 16:09:54.463086] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:26.173 [2024-07-15 16:09:54.465687] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:26.173 [2024-07-15 16:09:54.474361] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:26.173 [2024-07-15 16:09:54.474803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:26.173 [2024-07-15 16:09:54.474819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0980 with addr=10.0.0.2, port=4420
00:28:26.173 [2024-07-15 16:09:54.474826] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0980 is same with the state(5) to be set
00:28:26.173 [2024-07-15 16:09:54.474989] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0980 (9): Bad file descriptor
00:28:26.173 [2024-07-15 16:09:54.475152] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:26.173 [2024-07-15 16:09:54.475160] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:26.173 [2024-07-15 16:09:54.475167] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:26.173 [2024-07-15 16:09:54.477768] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:26.173 [2024-07-15 16:09:54.487217] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:26.173 [2024-07-15 16:09:54.487677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:26.173 [2024-07-15 16:09:54.487693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0980 with addr=10.0.0.2, port=4420
00:28:26.173 [2024-07-15 16:09:54.487701] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0980 is same with the state(5) to be set
00:28:26.173 [2024-07-15 16:09:54.487865] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0980 (9): Bad file descriptor
00:28:26.173 [2024-07-15 16:09:54.488029] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:26.173 [2024-07-15 16:09:54.488038] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:26.173 [2024-07-15 16:09:54.488045] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:26.173 [2024-07-15 16:09:54.490648] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:26.173 [2024-07-15 16:09:54.500149] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:26.173 [2024-07-15 16:09:54.500611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:26.173 [2024-07-15 16:09:54.500661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0980 with addr=10.0.0.2, port=4420
00:28:26.173 [2024-07-15 16:09:54.500683] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0980 is same with the state(5) to be set
00:28:26.173 [2024-07-15 16:09:54.501186] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0980 (9): Bad file descriptor
00:28:26.173 [2024-07-15 16:09:54.501357] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:26.173 [2024-07-15 16:09:54.501367] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:26.173 [2024-07-15 16:09:54.501373] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:26.173 [2024-07-15 16:09:54.503969] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:26.173 [2024-07-15 16:09:54.513015] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:26.173 [2024-07-15 16:09:54.513457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:26.173 [2024-07-15 16:09:54.513473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0980 with addr=10.0.0.2, port=4420
00:28:26.173 [2024-07-15 16:09:54.513480] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0980 is same with the state(5) to be set
00:28:26.173 [2024-07-15 16:09:54.513644] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0980 (9): Bad file descriptor
00:28:26.173 [2024-07-15 16:09:54.513809] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:26.173 [2024-07-15 16:09:54.513818] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:26.173 [2024-07-15 16:09:54.513824] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:26.173 [2024-07-15 16:09:54.516456] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:26.173 [2024-07-15 16:09:54.525954] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:26.173 [2024-07-15 16:09:54.526394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:26.173 [2024-07-15 16:09:54.526446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0980 with addr=10.0.0.2, port=4420
00:28:26.173 [2024-07-15 16:09:54.526467] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0980 is same with the state(5) to be set
00:28:26.173 [2024-07-15 16:09:54.527056] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0980 (9): Bad file descriptor
00:28:26.173 [2024-07-15 16:09:54.527221] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:26.173 [2024-07-15 16:09:54.527237] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:26.173 [2024-07-15 16:09:54.527243] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:26.173 [2024-07-15 16:09:54.529838] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:26.173 [2024-07-15 16:09:54.538876] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:26.173 [2024-07-15 16:09:54.539337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:26.173 [2024-07-15 16:09:54.539379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0980 with addr=10.0.0.2, port=4420
00:28:26.173 [2024-07-15 16:09:54.539401] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0980 is same with the state(5) to be set
00:28:26.173 [2024-07-15 16:09:54.539980] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0980 (9): Bad file descriptor
00:28:26.173 [2024-07-15 16:09:54.540508] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:26.173 [2024-07-15 16:09:54.540517] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:26.173 [2024-07-15 16:09:54.540523] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:26.173 [2024-07-15 16:09:54.543122] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:26.173 [2024-07-15 16:09:54.551709] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:26.173 [2024-07-15 16:09:54.552134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:26.173 [2024-07-15 16:09:54.552150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0980 with addr=10.0.0.2, port=4420
00:28:26.173 [2024-07-15 16:09:54.552157] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0980 is same with the state(5) to be set
00:28:26.173 [2024-07-15 16:09:54.552327] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0980 (9): Bad file descriptor
00:28:26.173 [2024-07-15 16:09:54.552490] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:26.173 [2024-07-15 16:09:54.552499] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:26.174 [2024-07-15 16:09:54.552505] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:26.174 [2024-07-15 16:09:54.555103] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:26.174 [2024-07-15 16:09:54.564615] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:26.174 [2024-07-15 16:09:54.565064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:26.174 [2024-07-15 16:09:54.565080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0980 with addr=10.0.0.2, port=4420
00:28:26.174 [2024-07-15 16:09:54.565088] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0980 is same with the state(5) to be set
00:28:26.174 [2024-07-15 16:09:54.565257] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0980 (9): Bad file descriptor
00:28:26.174 [2024-07-15 16:09:54.565420] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:26.174 [2024-07-15 16:09:54.565429] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:26.174 [2024-07-15 16:09:54.565435] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:26.174 [2024-07-15 16:09:54.568030] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:26.174 [2024-07-15 16:09:54.577471] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:26.174 [2024-07-15 16:09:54.577924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:26.174 [2024-07-15 16:09:54.577966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0980 with addr=10.0.0.2, port=4420
00:28:26.174 [2024-07-15 16:09:54.577988] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0980 is same with the state(5) to be set
00:28:26.174 [2024-07-15 16:09:54.578377] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0980 (9): Bad file descriptor
00:28:26.174 [2024-07-15 16:09:54.578542] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:26.174 [2024-07-15 16:09:54.578552] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:26.174 [2024-07-15 16:09:54.578558] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:26.174 [2024-07-15 16:09:54.581167] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:26.174 [2024-07-15 16:09:54.590305] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:26.174 [2024-07-15 16:09:54.590766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:26.174 [2024-07-15 16:09:54.590808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0980 with addr=10.0.0.2, port=4420
00:28:26.174 [2024-07-15 16:09:54.590829] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0980 is same with the state(5) to be set
00:28:26.174 [2024-07-15 16:09:54.591329] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0980 (9): Bad file descriptor
00:28:26.174 [2024-07-15 16:09:54.591494] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:26.174 [2024-07-15 16:09:54.591503] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:26.174 [2024-07-15 16:09:54.591509] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:26.174 [2024-07-15 16:09:54.594197] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:26.174 [2024-07-15 16:09:54.603238] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:26.174 [2024-07-15 16:09:54.603635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:26.174 [2024-07-15 16:09:54.603651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0980 with addr=10.0.0.2, port=4420
00:28:26.174 [2024-07-15 16:09:54.603658] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0980 is same with the state(5) to be set
00:28:26.174 [2024-07-15 16:09:54.603822] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0980 (9): Bad file descriptor
00:28:26.174 [2024-07-15 16:09:54.603986] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:26.174 [2024-07-15 16:09:54.603995] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:26.174 [2024-07-15 16:09:54.604001] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:26.174 [2024-07-15 16:09:54.606733] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:26.174 [2024-07-15 16:09:54.616286] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:26.174 [2024-07-15 16:09:54.616676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:26.174 [2024-07-15 16:09:54.616692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0980 with addr=10.0.0.2, port=4420
00:28:26.174 [2024-07-15 16:09:54.616700] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0980 is same with the state(5) to be set
00:28:26.174 [2024-07-15 16:09:54.616878] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0980 (9): Bad file descriptor
00:28:26.174 [2024-07-15 16:09:54.617058] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:26.174 [2024-07-15 16:09:54.617067] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:26.174 [2024-07-15 16:09:54.617075] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:26.174 [2024-07-15 16:09:54.619938] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:26.174 [2024-07-15 16:09:54.629317] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:26.174 [2024-07-15 16:09:54.629758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:26.174 [2024-07-15 16:09:54.629774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0980 with addr=10.0.0.2, port=4420
00:28:26.174 [2024-07-15 16:09:54.629785] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0980 is same with the state(5) to be set
00:28:26.174 [2024-07-15 16:09:54.629949] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0980 (9): Bad file descriptor
00:28:26.174 [2024-07-15 16:09:54.630112] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:26.174 [2024-07-15 16:09:54.630121] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:26.174 [2024-07-15 16:09:54.630127] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:26.174 [2024-07-15 16:09:54.632726] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:26.174 [2024-07-15 16:09:54.642219] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:26.174 [2024-07-15 16:09:54.642685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:26.174 [2024-07-15 16:09:54.642730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0980 with addr=10.0.0.2, port=4420
00:28:26.174 [2024-07-15 16:09:54.642751] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0980 is same with the state(5) to be set
00:28:26.174 [2024-07-15 16:09:54.643304] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0980 (9): Bad file descriptor
00:28:26.174 [2024-07-15 16:09:54.643470] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:26.174 [2024-07-15 16:09:54.643479] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:26.174 [2024-07-15 16:09:54.643485] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:26.174 [2024-07-15 16:09:54.646083] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:26.174 [2024-07-15 16:09:54.655125] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:26.174 [2024-07-15 16:09:54.655572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:26.174 [2024-07-15 16:09:54.655589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0980 with addr=10.0.0.2, port=4420
00:28:26.174 [2024-07-15 16:09:54.655596] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0980 is same with the state(5) to be set
00:28:26.174 [2024-07-15 16:09:54.655759] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0980 (9): Bad file descriptor
00:28:26.174 [2024-07-15 16:09:54.655922] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:26.174 [2024-07-15 16:09:54.655931] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:26.174 [2024-07-15 16:09:54.655937] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:26.174 [2024-07-15 16:09:54.658650] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:26.174 [2024-07-15 16:09:54.667996] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:26.174 [2024-07-15 16:09:54.668422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:26.174 [2024-07-15 16:09:54.668439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0980 with addr=10.0.0.2, port=4420
00:28:26.174 [2024-07-15 16:09:54.668445] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0980 is same with the state(5) to be set
00:28:26.174 [2024-07-15 16:09:54.668608] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0980 (9): Bad file descriptor
00:28:26.174 [2024-07-15 16:09:54.668771] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:26.174 [2024-07-15 16:09:54.668783] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:26.174 [2024-07-15 16:09:54.668789] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:26.174 [2024-07-15 16:09:54.671487] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:26.174 [2024-07-15 16:09:54.680845] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:26.174 [2024-07-15 16:09:54.681267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:26.174 [2024-07-15 16:09:54.681284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0980 with addr=10.0.0.2, port=4420
00:28:26.174 [2024-07-15 16:09:54.681291] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0980 is same with the state(5) to be set
00:28:26.174 [2024-07-15 16:09:54.681454] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0980 (9): Bad file descriptor
00:28:26.174 [2024-07-15 16:09:54.681617] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:26.174 [2024-07-15 16:09:54.681627] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:26.174 [2024-07-15 16:09:54.681632] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:26.174 [2024-07-15 16:09:54.684235] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:26.174 [2024-07-15 16:09:54.693677] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:26.174 [2024-07-15 16:09:54.694056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:26.174 [2024-07-15 16:09:54.694071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0980 with addr=10.0.0.2, port=4420
00:28:26.174 [2024-07-15 16:09:54.694079] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0980 is same with the state(5) to be set
00:28:26.174 [2024-07-15 16:09:54.694248] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0980 (9): Bad file descriptor
00:28:26.174 [2024-07-15 16:09:54.694413] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:26.174 [2024-07-15 16:09:54.694422] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:26.175 [2024-07-15 16:09:54.694428] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:26.175 [2024-07-15 16:09:54.697024] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:26.175 [2024-07-15 16:09:54.706527] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:26.175 [2024-07-15 16:09:54.706981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:26.175 [2024-07-15 16:09:54.707022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0980 with addr=10.0.0.2, port=4420
00:28:26.175 [2024-07-15 16:09:54.707043] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0980 is same with the state(5) to be set
00:28:26.175 [2024-07-15 16:09:54.707481] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0980 (9): Bad file descriptor
00:28:26.175 [2024-07-15 16:09:54.707645] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:26.175 [2024-07-15 16:09:54.707654] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:26.175 [2024-07-15 16:09:54.707661] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:26.175 [2024-07-15 16:09:54.710260] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:26.175 [2024-07-15 16:09:54.719475] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:26.175 [2024-07-15 16:09:54.719934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:26.175 [2024-07-15 16:09:54.719951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0980 with addr=10.0.0.2, port=4420
00:28:26.175 [2024-07-15 16:09:54.719958] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0980 is same with the state(5) to be set
00:28:26.175 [2024-07-15 16:09:54.720132] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0980 (9): Bad file descriptor
00:28:26.175 [2024-07-15 16:09:54.720314] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:26.175 [2024-07-15 16:09:54.720324] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:26.175 [2024-07-15 16:09:54.720330] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:26.175 [2024-07-15 16:09:54.723102] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:26.175 [2024-07-15 16:09:54.732354] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:26.175 [2024-07-15 16:09:54.732814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:26.175 [2024-07-15 16:09:54.732856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0980 with addr=10.0.0.2, port=4420
00:28:26.175 [2024-07-15 16:09:54.732877] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0980 is same with the state(5) to be set
00:28:26.175 [2024-07-15 16:09:54.733470] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0980 (9): Bad file descriptor
00:28:26.175 [2024-07-15 16:09:54.733942] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:26.175 [2024-07-15 16:09:54.733952] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:26.175 [2024-07-15 16:09:54.733957] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:26.175 [2024-07-15 16:09:54.736555] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:26.175 [2024-07-15 16:09:54.745235] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:26.175 [2024-07-15 16:09:54.745660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:26.175 [2024-07-15 16:09:54.745676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0980 with addr=10.0.0.2, port=4420
00:28:26.175 [2024-07-15 16:09:54.745683] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0980 is same with the state(5) to be set
00:28:26.175 [2024-07-15 16:09:54.745847] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0980 (9): Bad file descriptor
00:28:26.175 [2024-07-15 16:09:54.746010] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:26.175 [2024-07-15 16:09:54.746020] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:26.175 [2024-07-15 16:09:54.746026] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:26.175 [2024-07-15 16:09:54.748629] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:26.175 [2024-07-15 16:09:54.758289] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:26.175 [2024-07-15 16:09:54.758753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:26.175 [2024-07-15 16:09:54.758794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0980 with addr=10.0.0.2, port=4420
00:28:26.175 [2024-07-15 16:09:54.758815] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0980 is same with the state(5) to be set
00:28:26.175 [2024-07-15 16:09:54.759415] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0980 (9): Bad file descriptor
00:28:26.175 [2024-07-15 16:09:54.759908] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:26.175 [2024-07-15 16:09:54.759918] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:26.175 [2024-07-15 16:09:54.759923] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:26.175 [2024-07-15 16:09:54.762522] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:26.175 [2024-07-15 16:09:54.771104] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:26.175 [2024-07-15 16:09:54.771554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:26.175 [2024-07-15 16:09:54.771597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0980 with addr=10.0.0.2, port=4420
00:28:26.175 [2024-07-15 16:09:54.771619] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0980 is same with the state(5) to be set
00:28:26.175 [2024-07-15 16:09:54.772051] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0980 (9): Bad file descriptor
00:28:26.175 [2024-07-15 16:09:54.772216] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:26.175 [2024-07-15 16:09:54.772223] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:26.175 [2024-07-15 16:09:54.772236] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:26.175 [2024-07-15 16:09:54.774833] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:26.175 [2024-07-15 16:09:54.784029] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:26.175 [2024-07-15 16:09:54.784450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:26.175 [2024-07-15 16:09:54.784468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0980 with addr=10.0.0.2, port=4420
00:28:26.175 [2024-07-15 16:09:54.784475] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0980 is same with the state(5) to be set
00:28:26.175 [2024-07-15 16:09:54.784640] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0980 (9): Bad file descriptor
00:28:26.175 [2024-07-15 16:09:54.784803] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:26.175 [2024-07-15 16:09:54.784812] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:26.175 [2024-07-15 16:09:54.784818] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:26.175 [2024-07-15 16:09:54.787422] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:26.175 [2024-07-15 16:09:54.796884] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:26.175 [2024-07-15 16:09:54.797341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:26.175 [2024-07-15 16:09:54.797387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0980 with addr=10.0.0.2, port=4420
00:28:26.175 [2024-07-15 16:09:54.797410] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0980 is same with the state(5) to be set
00:28:26.175 [2024-07-15 16:09:54.797962] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0980 (9): Bad file descriptor
00:28:26.175 [2024-07-15 16:09:54.798128] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:26.175 [2024-07-15 16:09:54.798137] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:26.175 [2024-07-15 16:09:54.798147] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:26.175 [2024-07-15 16:09:54.800751] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:26.175 [2024-07-15 16:09:54.809801] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:26.175 [2024-07-15 16:09:54.810223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:26.175 [2024-07-15 16:09:54.810290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0980 with addr=10.0.0.2, port=4420
00:28:26.175 [2024-07-15 16:09:54.810313] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0980 is same with the state(5) to be set
00:28:26.175 [2024-07-15 16:09:54.810850] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0980 (9): Bad file descriptor
00:28:26.175 [2024-07-15 16:09:54.811015] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:26.175 [2024-07-15 16:09:54.811024] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:26.175 [2024-07-15 16:09:54.811030] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:26.175 [2024-07-15 16:09:54.813636] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:26.175 [2024-07-15 16:09:54.822741] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:26.175 [2024-07-15 16:09:54.823160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:26.175 [2024-07-15 16:09:54.823177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0980 with addr=10.0.0.2, port=4420
00:28:26.175 [2024-07-15 16:09:54.823185] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0980 is same with the state(5) to be set
00:28:26.175 [2024-07-15 16:09:54.823364] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0980 (9): Bad file descriptor
00:28:26.175 [2024-07-15 16:09:54.823538] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:26.175 [2024-07-15 16:09:54.823548] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:26.175 [2024-07-15 16:09:54.823555] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:26.175 [2024-07-15 16:09:54.826191] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:26.175 [2024-07-15 16:09:54.835785] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:26.175 [2024-07-15 16:09:54.836235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:26.175 [2024-07-15 16:09:54.836256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0980 with addr=10.0.0.2, port=4420
00:28:26.175 [2024-07-15 16:09:54.836263] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0980 is same with the state(5) to be set
00:28:26.175 [2024-07-15 16:09:54.836428] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0980 (9): Bad file descriptor
00:28:26.176 [2024-07-15 16:09:54.836593] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:26.176 [2024-07-15 16:09:54.836602] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:26.176 [2024-07-15 16:09:54.836611] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:26.176 [2024-07-15 16:09:54.839342] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:26.176 [2024-07-15 16:09:54.848995] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:26.176 [2024-07-15 16:09:54.849461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:26.176 [2024-07-15 16:09:54.849479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0980 with addr=10.0.0.2, port=4420
00:28:26.176 [2024-07-15 16:09:54.849487] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0980 is same with the state(5) to be set
00:28:26.176 [2024-07-15 16:09:54.849666] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0980 (9): Bad file descriptor
00:28:26.176 [2024-07-15 16:09:54.849845] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:26.176 [2024-07-15 16:09:54.849855] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:26.176 [2024-07-15 16:09:54.849861] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:26.176 [2024-07-15 16:09:54.852700] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:26.176 [2024-07-15 16:09:54.862104] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:26.176 [2024-07-15 16:09:54.862519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:26.176 [2024-07-15 16:09:54.862537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0980 with addr=10.0.0.2, port=4420
00:28:26.176 [2024-07-15 16:09:54.862545] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0980 is same with the state(5) to be set
00:28:26.176 [2024-07-15 16:09:54.862723] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0980 (9): Bad file descriptor
00:28:26.176 [2024-07-15 16:09:54.862901] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:26.176 [2024-07-15 16:09:54.862911] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:26.176 [2024-07-15 16:09:54.862917] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:26.176 [2024-07-15 16:09:54.865752] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:26.176 [2024-07-15 16:09:54.875301] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:26.176 [2024-07-15 16:09:54.875683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:26.176 [2024-07-15 16:09:54.875700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0980 with addr=10.0.0.2, port=4420
00:28:26.176 [2024-07-15 16:09:54.875708] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0980 is same with the state(5) to be set
00:28:26.176 [2024-07-15 16:09:54.875886] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0980 (9): Bad file descriptor
00:28:26.176 [2024-07-15 16:09:54.876064] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:26.176 [2024-07-15 16:09:54.876074] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:26.176 [2024-07-15 16:09:54.876080] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:26.176 [2024-07-15 16:09:54.878922] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:26.176 [2024-07-15 16:09:54.888475] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:26.176 [2024-07-15 16:09:54.888843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:26.176 [2024-07-15 16:09:54.888860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0980 with addr=10.0.0.2, port=4420
00:28:26.176 [2024-07-15 16:09:54.888868] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0980 is same with the state(5) to be set
00:28:26.176 [2024-07-15 16:09:54.889051] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0980 (9): Bad file descriptor
00:28:26.176 [2024-07-15 16:09:54.889237] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:26.176 [2024-07-15 16:09:54.889248] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:26.176 [2024-07-15 16:09:54.889256] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:26.176 [2024-07-15 16:09:54.892089] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:26.176 [2024-07-15 16:09:54.901643] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:26.176 [2024-07-15 16:09:54.902102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:26.176 [2024-07-15 16:09:54.902119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0980 with addr=10.0.0.2, port=4420
00:28:26.176 [2024-07-15 16:09:54.902126] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0980 is same with the state(5) to be set
00:28:26.176 [2024-07-15 16:09:54.902311] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0980 (9): Bad file descriptor
00:28:26.176 [2024-07-15 16:09:54.902490] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:26.176 [2024-07-15 16:09:54.902499] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:26.176 [2024-07-15 16:09:54.902506] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:26.176 [2024-07-15 16:09:54.905346] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:26.176 [2024-07-15 16:09:54.914723] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:26.176 [2024-07-15 16:09:54.915198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:26.176 [2024-07-15 16:09:54.915216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0980 with addr=10.0.0.2, port=4420
00:28:26.176 [2024-07-15 16:09:54.915230] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0980 is same with the state(5) to be set
00:28:26.176 [2024-07-15 16:09:54.915408] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0980 (9): Bad file descriptor
00:28:26.176 [2024-07-15 16:09:54.915586] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:26.176 [2024-07-15 16:09:54.915596] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:26.176 [2024-07-15 16:09:54.915603] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:26.176 [2024-07-15 16:09:54.918441] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:26.176 [2024-07-15 16:09:54.927823] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:26.176 [2024-07-15 16:09:54.928262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:26.176 [2024-07-15 16:09:54.928280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0980 with addr=10.0.0.2, port=4420
00:28:26.176 [2024-07-15 16:09:54.928287] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0980 is same with the state(5) to be set
00:28:26.176 [2024-07-15 16:09:54.928464] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0980 (9): Bad file descriptor
00:28:26.176 [2024-07-15 16:09:54.928643] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:26.176 [2024-07-15 16:09:54.928652] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:26.176 [2024-07-15 16:09:54.928662] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:26.176 [2024-07-15 16:09:54.931499] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:26.176 [2024-07-15 16:09:54.940872] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:26.176 [2024-07-15 16:09:54.941311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:26.176 [2024-07-15 16:09:54.941329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0980 with addr=10.0.0.2, port=4420
00:28:26.176 [2024-07-15 16:09:54.941336] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0980 is same with the state(5) to be set
00:28:26.176 [2024-07-15 16:09:54.941513] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0980 (9): Bad file descriptor
00:28:26.176 [2024-07-15 16:09:54.941693] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:26.176 [2024-07-15 16:09:54.941704] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:26.176 [2024-07-15 16:09:54.941711] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:26.176 [2024-07-15 16:09:54.944549] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:26.176 [2024-07-15 16:09:54.953922] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:26.176 [2024-07-15 16:09:54.954286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:26.176 [2024-07-15 16:09:54.954303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0980 with addr=10.0.0.2, port=4420
00:28:26.176 [2024-07-15 16:09:54.954311] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0980 is same with the state(5) to be set
00:28:26.176 [2024-07-15 16:09:54.954488] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0980 (9): Bad file descriptor
00:28:26.176 [2024-07-15 16:09:54.954666] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:26.176 [2024-07-15 16:09:54.954676] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:26.176 [2024-07-15 16:09:54.954682] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:26.176 [2024-07-15 16:09:54.957548] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:26.176 [2024-07-15 16:09:54.967206] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:26.176 [2024-07-15 16:09:54.967679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:26.176 [2024-07-15 16:09:54.967697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0980 with addr=10.0.0.2, port=4420
00:28:26.176 [2024-07-15 16:09:54.967705] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0980 is same with the state(5) to be set
00:28:26.176 [2024-07-15 16:09:54.967888] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0980 (9): Bad file descriptor
00:28:26.176 [2024-07-15 16:09:54.968072] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:26.176 [2024-07-15 16:09:54.968082] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:26.176 [2024-07-15 16:09:54.968088] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:26.176 [2024-07-15 16:09:54.970940] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:26.177 [2024-07-15 16:09:54.980346] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:26.177 [2024-07-15 16:09:54.980794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:26.177 [2024-07-15 16:09:54.980844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0980 with addr=10.0.0.2, port=4420
00:28:26.177 [2024-07-15 16:09:54.980867] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0980 is same with the state(5) to be set
00:28:26.177 [2024-07-15 16:09:54.981336] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0980 (9): Bad file descriptor
00:28:26.177 [2024-07-15 16:09:54.981515] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:26.177 [2024-07-15 16:09:54.981526] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:26.177 [2024-07-15 16:09:54.981532] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:26.177 [2024-07-15 16:09:54.984370] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:26.177 [2024-07-15 16:09:54.993410] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:26.177 [2024-07-15 16:09:54.993847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:26.177 [2024-07-15 16:09:54.993864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0980 with addr=10.0.0.2, port=4420
00:28:26.177 [2024-07-15 16:09:54.993871] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0980 is same with the state(5) to be set
00:28:26.177 [2024-07-15 16:09:54.994049] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0980 (9): Bad file descriptor
00:28:26.177 [2024-07-15 16:09:54.994232] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:26.177 [2024-07-15 16:09:54.994242] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:26.177 [2024-07-15 16:09:54.994249] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:26.177 [2024-07-15 16:09:54.997073] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:26.177 [2024-07-15 16:09:55.006405] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:26.177 [2024-07-15 16:09:55.006785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:26.177 [2024-07-15 16:09:55.006802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0980 with addr=10.0.0.2, port=4420
00:28:26.177 [2024-07-15 16:09:55.006809] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0980 is same with the state(5) to be set
00:28:26.177 [2024-07-15 16:09:55.006982] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0980 (9): Bad file descriptor
00:28:26.177 [2024-07-15 16:09:55.007154] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:26.177 [2024-07-15 16:09:55.007163] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:26.177 [2024-07-15 16:09:55.007169] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:26.177 [2024-07-15 16:09:55.009925] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:26.177 [2024-07-15 16:09:55.019246] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:26.177 [2024-07-15 16:09:55.019547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:26.177 [2024-07-15 16:09:55.019564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0980 with addr=10.0.0.2, port=4420
00:28:26.177 [2024-07-15 16:09:55.019571] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0980 is same with the state(5) to be set
00:28:26.177 [2024-07-15 16:09:55.019734] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0980 (9): Bad file descriptor
00:28:26.177 [2024-07-15 16:09:55.019900] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:26.177 [2024-07-15 16:09:55.019910] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:26.177 [2024-07-15 16:09:55.019916] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:26.177 [2024-07-15 16:09:55.022613] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:26.177 [2024-07-15 16:09:55.032128] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:26.177 [2024-07-15 16:09:55.032607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:26.177 [2024-07-15 16:09:55.032651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0980 with addr=10.0.0.2, port=4420
00:28:26.177 [2024-07-15 16:09:55.032674] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0980 is same with the state(5) to be set
00:28:26.177 [2024-07-15 16:09:55.033279] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0980 (9): Bad file descriptor
00:28:26.177 [2024-07-15 16:09:55.033445] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:26.177 [2024-07-15 16:09:55.033455] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:26.177 [2024-07-15 16:09:55.033461] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:26.177 [2024-07-15 16:09:55.036170] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:26.177 [2024-07-15 16:09:55.045009] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:26.177 [2024-07-15 16:09:55.045317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:26.177 [2024-07-15 16:09:55.045334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0980 with addr=10.0.0.2, port=4420
00:28:26.177 [2024-07-15 16:09:55.045341] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0980 is same with the state(5) to be set
00:28:26.177 [2024-07-15 16:09:55.045505] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0980 (9): Bad file descriptor
00:28:26.177 [2024-07-15 16:09:55.045670] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:26.177 [2024-07-15 16:09:55.045679] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:26.177 [2024-07-15 16:09:55.045685] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:26.177 [2024-07-15 16:09:55.048292] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:26.177 [2024-07-15 16:09:55.057940] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:26.177 [2024-07-15 16:09:55.058278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:26.177 [2024-07-15 16:09:55.058295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0980 with addr=10.0.0.2, port=4420
00:28:26.177 [2024-07-15 16:09:55.058302] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0980 is same with the state(5) to be set
00:28:26.177 [2024-07-15 16:09:55.058465] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0980 (9): Bad file descriptor
00:28:26.177 [2024-07-15 16:09:55.058629] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:26.177 [2024-07-15 16:09:55.058639] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:26.177 [2024-07-15 16:09:55.058645] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:26.177 [2024-07-15 16:09:55.061253] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:26.177 [2024-07-15 16:09:55.070769] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:26.177 [2024-07-15 16:09:55.071086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:26.177 [2024-07-15 16:09:55.071103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0980 with addr=10.0.0.2, port=4420
00:28:26.177 [2024-07-15 16:09:55.071111] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0980 is same with the state(5) to be set
00:28:26.177 [2024-07-15 16:09:55.071287] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0980 (9): Bad file descriptor
00:28:26.177 [2024-07-15 16:09:55.071461] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:26.177 [2024-07-15 16:09:55.071470] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:26.177 [2024-07-15 16:09:55.071476] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:26.177 [2024-07-15 16:09:55.074127] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:26.177 [2024-07-15 16:09:55.083661] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:26.177 [2024-07-15 16:09:55.084082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:26.177 [2024-07-15 16:09:55.084100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0980 with addr=10.0.0.2, port=4420
00:28:26.177 [2024-07-15 16:09:55.084108] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0980 is same with the state(5) to be set
00:28:26.177 [2024-07-15 16:09:55.084273] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0980 (9): Bad file descriptor
00:28:26.177 [2024-07-15 16:09:55.084438] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:26.177 [2024-07-15 16:09:55.084447] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:26.177 [2024-07-15 16:09:55.084452] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:26.177 [2024-07-15 16:09:55.087052] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:26.177 [2024-07-15 16:09:55.096825] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:26.177 [2024-07-15 16:09:55.097186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:26.178 [2024-07-15 16:09:55.097203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0980 with addr=10.0.0.2, port=4420
00:28:26.178 [2024-07-15 16:09:55.097210] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0980 is same with the state(5) to be set
00:28:26.178 [2024-07-15 16:09:55.097393] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0980 (9): Bad file descriptor
00:28:26.178 [2024-07-15 16:09:55.097572] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:26.178 [2024-07-15 16:09:55.097582] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:26.178 [2024-07-15 16:09:55.097588] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:26.437 [2024-07-15 16:09:55.100433] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:26.437 [2024-07-15 16:09:55.109653] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:26.437 [2024-07-15 16:09:55.110025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:26.437 [2024-07-15 16:09:55.110041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0980 with addr=10.0.0.2, port=4420
00:28:26.437 [2024-07-15 16:09:55.110051] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0980 is same with the state(5) to be set
00:28:26.437 [2024-07-15 16:09:55.110214] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0980 (9): Bad file descriptor
00:28:26.437 [2024-07-15 16:09:55.110381] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:26.437 [2024-07-15 16:09:55.110392] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:26.437 [2024-07-15 16:09:55.110398] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:26.437 [2024-07-15 16:09:55.112998] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:26.437 [2024-07-15 16:09:55.122750] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:26.437 [2024-07-15 16:09:55.123143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:26.437 [2024-07-15 16:09:55.123159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0980 with addr=10.0.0.2, port=4420
00:28:26.437 [2024-07-15 16:09:55.123167] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0980 is same with the state(5) to be set
00:28:26.437 [2024-07-15 16:09:55.123351] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0980 (9): Bad file descriptor
00:28:26.437 [2024-07-15 16:09:55.123530] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:26.437 [2024-07-15 16:09:55.123539] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:26.437 [2024-07-15 16:09:55.123545] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:26.437 [2024-07-15 16:09:55.126388] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:26.437 [2024-07-15 16:09:55.135931] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:26.437 [2024-07-15 16:09:55.136392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:26.437 [2024-07-15 16:09:55.136409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0980 with addr=10.0.0.2, port=4420
00:28:26.437 [2024-07-15 16:09:55.136417] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0980 is same with the state(5) to be set
00:28:26.437 [2024-07-15 16:09:55.136594] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0980 (9): Bad file descriptor
00:28:26.437 [2024-07-15 16:09:55.136774] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:26.437 [2024-07-15 16:09:55.136784] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:26.437 [2024-07-15 16:09:55.136790] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:26.438 [2024-07-15 16:09:55.139628] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:26.438 [2024-07-15 16:09:55.148995] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:26.438 [2024-07-15 16:09:55.149417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:26.438 [2024-07-15 16:09:55.149434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0980 with addr=10.0.0.2, port=4420
00:28:26.438 [2024-07-15 16:09:55.149441] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0980 is same with the state(5) to be set
00:28:26.438 [2024-07-15 16:09:55.149618] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0980 (9): Bad file descriptor
00:28:26.438 [2024-07-15 16:09:55.149796] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:26.438 [2024-07-15 16:09:55.149809] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:26.438 [2024-07-15 16:09:55.149815] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:26.438 [2024-07-15 16:09:55.152653] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:26.438 [2024-07-15 16:09:55.162193] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:26.438 [2024-07-15 16:09:55.162583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:26.438 [2024-07-15 16:09:55.162599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0980 with addr=10.0.0.2, port=4420
00:28:26.438 [2024-07-15 16:09:55.162607] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0980 is same with the state(5) to be set
00:28:26.438 [2024-07-15 16:09:55.162784] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0980 (9): Bad file descriptor
00:28:26.438 [2024-07-15 16:09:55.162962] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:26.438 [2024-07-15 16:09:55.162972] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:26.438 [2024-07-15 16:09:55.162978] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:26.438 [2024-07-15 16:09:55.165816] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:26.438 [2024-07-15 16:09:55.175392] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:26.438 [2024-07-15 16:09:55.175830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:26.438 [2024-07-15 16:09:55.175880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0980 with addr=10.0.0.2, port=4420
00:28:26.438 [2024-07-15 16:09:55.175902] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0980 is same with the state(5) to be set
00:28:26.438 [2024-07-15 16:09:55.176495] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0980 (9): Bad file descriptor
00:28:26.438 [2024-07-15 16:09:55.176793] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:26.438 [2024-07-15 16:09:55.176804] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:26.438 [2024-07-15 16:09:55.176810] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:26.438 [2024-07-15 16:09:55.179648] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:26.438 [2024-07-15 16:09:55.188537] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:26.438 [2024-07-15 16:09:55.188937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:26.438 [2024-07-15 16:09:55.188979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0980 with addr=10.0.0.2, port=4420
00:28:26.438 [2024-07-15 16:09:55.189002] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0980 is same with the state(5) to be set
00:28:26.438 [2024-07-15 16:09:55.189591] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0980 (9): Bad file descriptor
00:28:26.438 [2024-07-15 16:09:55.189989] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:26.438 [2024-07-15 16:09:55.190000] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:26.438 [2024-07-15 16:09:55.190007] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:26.438 [2024-07-15 16:09:55.192785] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:26.438 [2024-07-15 16:09:55.201559] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:26.438 [2024-07-15 16:09:55.201931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:26.438 [2024-07-15 16:09:55.201973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0980 with addr=10.0.0.2, port=4420
00:28:26.438 [2024-07-15 16:09:55.201997] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0980 is same with the state(5) to be set
00:28:26.438 [2024-07-15 16:09:55.202588] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0980 (9): Bad file descriptor
00:28:26.438 [2024-07-15 16:09:55.203113] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:26.438 [2024-07-15 16:09:55.203123] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:26.438 [2024-07-15 16:09:55.203129] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:26.438 [2024-07-15 16:09:55.205811] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:26.438 [2024-07-15 16:09:55.214498] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:26.438 [2024-07-15 16:09:55.214848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:26.438 [2024-07-15 16:09:55.214864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0980 with addr=10.0.0.2, port=4420
00:28:26.438 [2024-07-15 16:09:55.214871] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0980 is same with the state(5) to be set
00:28:26.438 [2024-07-15 16:09:55.215035] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0980 (9): Bad file descriptor
00:28:26.438 [2024-07-15 16:09:55.215199] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:26.438 [2024-07-15 16:09:55.215207] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:26.438 [2024-07-15 16:09:55.215213] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:26.438 [2024-07-15 16:09:55.217816] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:26.438 [2024-07-15 16:09:55.227365] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:26.438 [2024-07-15 16:09:55.227688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:26.438 [2024-07-15 16:09:55.227729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0980 with addr=10.0.0.2, port=4420
00:28:26.438 [2024-07-15 16:09:55.227751] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0980 is same with the state(5) to be set
00:28:26.438 [2024-07-15 16:09:55.228345] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0980 (9): Bad file descriptor
00:28:26.438 [2024-07-15 16:09:55.228928] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:26.438 [2024-07-15 16:09:55.228953] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:26.438 [2024-07-15 16:09:55.228978] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:26.438 [2024-07-15 16:09:55.231574] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:26.438 [2024-07-15 16:09:55.240202] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:26.438 [2024-07-15 16:09:55.240648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:26.438 [2024-07-15 16:09:55.240664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0980 with addr=10.0.0.2, port=4420
00:28:26.438 [2024-07-15 16:09:55.240671] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0980 is same with the state(5) to be set
00:28:26.438 [2024-07-15 16:09:55.240838] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0980 (9): Bad file descriptor
00:28:26.438 [2024-07-15 16:09:55.241002] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:26.438 [2024-07-15 16:09:55.241011] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:26.438 [2024-07-15 16:09:55.241017] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:26.438 [2024-07-15 16:09:55.243622] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:26.438 [2024-07-15 16:09:55.253123] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:26.438 [2024-07-15 16:09:55.253568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:26.438 [2024-07-15 16:09:55.253585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0980 with addr=10.0.0.2, port=4420
00:28:26.438 [2024-07-15 16:09:55.253592] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0980 is same with the state(5) to be set
00:28:26.438 [2024-07-15 16:09:55.253755] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0980 (9): Bad file descriptor
00:28:26.438 [2024-07-15 16:09:55.253919] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:26.438 [2024-07-15 16:09:55.253928] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:26.438 [2024-07-15 16:09:55.253933] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:26.438 [2024-07-15 16:09:55.256576] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:26.438 [2024-07-15 16:09:55.265990] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:26.438 [2024-07-15 16:09:55.266363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:26.438 [2024-07-15 16:09:55.266379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0980 with addr=10.0.0.2, port=4420
00:28:26.438 [2024-07-15 16:09:55.266386] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0980 is same with the state(5) to be set
00:28:26.438 [2024-07-15 16:09:55.266549] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0980 (9): Bad file descriptor
00:28:26.438 [2024-07-15 16:09:55.266712] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:26.438 [2024-07-15 16:09:55.266721] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:26.438 [2024-07-15 16:09:55.266727] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:26.438 [2024-07-15 16:09:55.269332] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:26.438 [2024-07-15 16:09:55.278832] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:26.438 [2024-07-15 16:09:55.279266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:26.438 [2024-07-15 16:09:55.279283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0980 with addr=10.0.0.2, port=4420
00:28:26.438 [2024-07-15 16:09:55.279289] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0980 is same with the state(5) to be set
00:28:26.438 [2024-07-15 16:09:55.279452] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0980 (9): Bad file descriptor
00:28:26.438 [2024-07-15 16:09:55.279615] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:26.438 [2024-07-15 16:09:55.279624] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:26.438 [2024-07-15 16:09:55.279633] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:26.438 [2024-07-15 16:09:55.282244] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:26.438 [2024-07-15 16:09:55.291748] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:26.438 [2024-07-15 16:09:55.292087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:26.438 [2024-07-15 16:09:55.292103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0980 with addr=10.0.0.2, port=4420
00:28:26.438 [2024-07-15 16:09:55.292110] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0980 is same with the state(5) to be set
00:28:26.438 [2024-07-15 16:09:55.292278] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0980 (9): Bad file descriptor
00:28:26.438 [2024-07-15 16:09:55.292442] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:26.438 [2024-07-15 16:09:55.292450] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:26.438 [2024-07-15 16:09:55.292456] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:26.438 [2024-07-15 16:09:55.295053] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:26.438 [2024-07-15 16:09:55.304641] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:26.438 [2024-07-15 16:09:55.305075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:26.438 [2024-07-15 16:09:55.305117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0980 with addr=10.0.0.2, port=4420
00:28:26.438 [2024-07-15 16:09:55.305139] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0980 is same with the state(5) to be set
00:28:26.438 [2024-07-15 16:09:55.305736] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0980 (9): Bad file descriptor
00:28:26.438 [2024-07-15 16:09:55.306328] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:26.438 [2024-07-15 16:09:55.306338] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:26.438 [2024-07-15 16:09:55.306344] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:26.438 [2024-07-15 16:09:55.309003] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:26.438 [2024-07-15 16:09:55.317617] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:26.438 [2024-07-15 16:09:55.318101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:26.438 [2024-07-15 16:09:55.318143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0980 with addr=10.0.0.2, port=4420
00:28:26.438 [2024-07-15 16:09:55.318165] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0980 is same with the state(5) to be set
00:28:26.438 [2024-07-15 16:09:55.318760] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0980 (9): Bad file descriptor
00:28:26.438 [2024-07-15 16:09:55.319207] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:26.438 [2024-07-15 16:09:55.319216] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:26.438 [2024-07-15 16:09:55.319222] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:26.438 [2024-07-15 16:09:55.321851] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:26.438 [2024-07-15 16:09:55.330478] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:26.438 [2024-07-15 16:09:55.330852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:26.438 [2024-07-15 16:09:55.330893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0980 with addr=10.0.0.2, port=4420
00:28:26.438 [2024-07-15 16:09:55.330915] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0980 is same with the state(5) to be set
00:28:26.438 [2024-07-15 16:09:55.331414] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0980 (9): Bad file descriptor
00:28:26.438 [2024-07-15 16:09:55.331579] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:26.438 [2024-07-15 16:09:55.331588] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:26.438 [2024-07-15 16:09:55.331595] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:26.438 [2024-07-15 16:09:55.334192] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:26.438 [2024-07-15 16:09:55.343334] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:26.438 [2024-07-15 16:09:55.343706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:26.438 [2024-07-15 16:09:55.343747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0980 with addr=10.0.0.2, port=4420
00:28:26.438 [2024-07-15 16:09:55.343770] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0980 is same with the state(5) to be set
00:28:26.438 [2024-07-15 16:09:55.344256] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0980 (9): Bad file descriptor
00:28:26.438 [2024-07-15 16:09:55.344421] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:26.438 [2024-07-15 16:09:55.344430] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:26.438 [2024-07-15 16:09:55.344436] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:26.438 [2024-07-15 16:09:55.347034] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:26.438 [2024-07-15 16:09:55.356233] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:26.438 [2024-07-15 16:09:55.356693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:26.438 [2024-07-15 16:09:55.356733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0980 with addr=10.0.0.2, port=4420
00:28:26.438 [2024-07-15 16:09:55.356754] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0980 is same with the state(5) to be set
00:28:26.438 [2024-07-15 16:09:55.357340] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0980 (9): Bad file descriptor
00:28:26.438 [2024-07-15 16:09:55.357596] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:26.438 [2024-07-15 16:09:55.357608] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:26.438 [2024-07-15 16:09:55.357618] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:26.438 [2024-07-15 16:09:55.361678] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:26.438 [2024-07-15 16:09:55.370106] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:26.438 [2024-07-15 16:09:55.370587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:26.438 [2024-07-15 16:09:55.370605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0980 with addr=10.0.0.2, port=4420
00:28:26.438 [2024-07-15 16:09:55.370614] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0980 is same with the state(5) to be set
00:28:26.438 [2024-07-15 16:09:55.370793] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0980 (9): Bad file descriptor
00:28:26.698 [2024-07-15 16:09:55.370976] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:26.698 [2024-07-15 16:09:55.370986] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:26.698 [2024-07-15 16:09:55.370992] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:26.698 [2024-07-15 16:09:55.373856] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:26.698 [2024-07-15 16:09:55.382952] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:26.698 [2024-07-15 16:09:55.383393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:26.698 [2024-07-15 16:09:55.383411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0980 with addr=10.0.0.2, port=4420
00:28:26.698 [2024-07-15 16:09:55.383418] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0980 is same with the state(5) to be set
00:28:26.698 [2024-07-15 16:09:55.383581] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0980 (9): Bad file descriptor
00:28:26.698 [2024-07-15 16:09:55.383744] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:26.698 [2024-07-15 16:09:55.383753] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:26.698 [2024-07-15 16:09:55.383759] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:26.698 [2024-07-15 16:09:55.386361] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:26.698 [2024-07-15 16:09:55.395871] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:26.698 [2024-07-15 16:09:55.396332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:26.698 [2024-07-15 16:09:55.396374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0980 with addr=10.0.0.2, port=4420
00:28:26.698 [2024-07-15 16:09:55.396397] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0980 is same with the state(5) to be set
00:28:26.698 [2024-07-15 16:09:55.396975] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0980 (9): Bad file descriptor
00:28:26.698 [2024-07-15 16:09:55.397208] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:26.698 [2024-07-15 16:09:55.397217] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:26.698 [2024-07-15 16:09:55.397223] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:26.698 [2024-07-15 16:09:55.399825] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:26.698 [2024-07-15 16:09:55.408712] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:26.698 [2024-07-15 16:09:55.409155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:26.698 [2024-07-15 16:09:55.409171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0980 with addr=10.0.0.2, port=4420
00:28:26.698 [2024-07-15 16:09:55.409177] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0980 is same with the state(5) to be set
00:28:26.698 [2024-07-15 16:09:55.409349] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0980 (9): Bad file descriptor
00:28:26.698 [2024-07-15 16:09:55.409513] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:26.698 [2024-07-15 16:09:55.409522] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:26.698 [2024-07-15 16:09:55.409528] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:26.698 [2024-07-15 16:09:55.412128] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:26.698 [2024-07-15 16:09:55.421534] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:26.698 [2024-07-15 16:09:55.421939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:26.698 [2024-07-15 16:09:55.421982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0980 with addr=10.0.0.2, port=4420
00:28:26.698 [2024-07-15 16:09:55.422005] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0980 is same with the state(5) to be set
00:28:26.698 [2024-07-15 16:09:55.422501] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0980 (9): Bad file descriptor
00:28:26.698 [2024-07-15 16:09:55.422676] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:26.698 [2024-07-15 16:09:55.422685] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:26.698 [2024-07-15 16:09:55.422691] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:26.698 [2024-07-15 16:09:55.425420] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:26.698 [2024-07-15 16:09:55.434475] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:26.698 [2024-07-15 16:09:55.434853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:26.698 [2024-07-15 16:09:55.434869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0980 with addr=10.0.0.2, port=4420
00:28:26.698 [2024-07-15 16:09:55.434875] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0980 is same with the state(5) to be set
00:28:26.698 [2024-07-15 16:09:55.435039] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0980 (9): Bad file descriptor
00:28:26.698 [2024-07-15 16:09:55.435202] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:26.698 [2024-07-15 16:09:55.435211] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:26.698 [2024-07-15 16:09:55.435217] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:26.698 [2024-07-15 16:09:55.437821] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:26.698 [2024-07-15 16:09:55.447280] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:26.698 [2024-07-15 16:09:55.447664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:26.698 [2024-07-15 16:09:55.447705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0980 with addr=10.0.0.2, port=4420
00:28:26.698 [2024-07-15 16:09:55.447727] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0980 is same with the state(5) to be set
00:28:26.698 [2024-07-15 16:09:55.448284] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0980 (9): Bad file descriptor
00:28:26.698 [2024-07-15 16:09:55.448540] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:26.698 [2024-07-15 16:09:55.448553] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:26.698 [2024-07-15 16:09:55.448562] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:26.698 [2024-07-15 16:09:55.452623] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:26.698 [2024-07-15 16:09:55.460646] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:26.698 [2024-07-15 16:09:55.461104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:26.698 [2024-07-15 16:09:55.461145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0980 with addr=10.0.0.2, port=4420
00:28:26.698 [2024-07-15 16:09:55.461175] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0980 is same with the state(5) to be set
00:28:26.698 [2024-07-15 16:09:55.461588] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0980 (9): Bad file descriptor
00:28:26.698 [2024-07-15 16:09:55.461758] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:26.698 [2024-07-15 16:09:55.461767] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:26.698 [2024-07-15 16:09:55.461773] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:26.698 [2024-07-15 16:09:55.464442] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:26.698 [2024-07-15 16:09:55.473541] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:26.698 [2024-07-15 16:09:55.473914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:26.698 [2024-07-15 16:09:55.473930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0980 with addr=10.0.0.2, port=4420
00:28:26.698 [2024-07-15 16:09:55.473937] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0980 is same with the state(5) to be set
00:28:26.698 [2024-07-15 16:09:55.474100] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0980 (9): Bad file descriptor
00:28:26.698 [2024-07-15 16:09:55.474269] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:26.698 [2024-07-15 16:09:55.474278] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:26.698 [2024-07-15 16:09:55.474284] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:26.698 [2024-07-15 16:09:55.476878] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:26.698 [2024-07-15 16:09:55.486392] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:26.698 [2024-07-15 16:09:55.486842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:26.699 [2024-07-15 16:09:55.486884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0980 with addr=10.0.0.2, port=4420
00:28:26.699 [2024-07-15 16:09:55.486906] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0980 is same with the state(5) to be set
00:28:26.699 [2024-07-15 16:09:55.487499] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0980 (9): Bad file descriptor
00:28:26.699 [2024-07-15 16:09:55.487857] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:26.699 [2024-07-15 16:09:55.487866] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:26.699 [2024-07-15 16:09:55.487871] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:26.699 [2024-07-15 16:09:55.490468] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:26.699 [2024-07-15 16:09:55.499198] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:26.699 [2024-07-15 16:09:55.499637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:26.699 [2024-07-15 16:09:55.499680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0980 with addr=10.0.0.2, port=4420
00:28:26.699 [2024-07-15 16:09:55.499702] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0980 is same with the state(5) to be set
00:28:26.699 [2024-07-15 16:09:55.500297] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0980 (9): Bad file descriptor
00:28:26.699 [2024-07-15 16:09:55.500859] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:26.699 [2024-07-15 16:09:55.500868] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:26.699 [2024-07-15 16:09:55.500874] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:26.699 [2024-07-15 16:09:55.503471] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:26.699 [2024-07-15 16:09:55.512053] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:26.699 [2024-07-15 16:09:55.512502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:26.699 [2024-07-15 16:09:55.512518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0980 with addr=10.0.0.2, port=4420
00:28:26.699 [2024-07-15 16:09:55.512525] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0980 is same with the state(5) to be set
00:28:26.699 [2024-07-15 16:09:55.512688] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0980 (9): Bad file descriptor
00:28:26.699 [2024-07-15 16:09:55.512852] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:26.699 [2024-07-15 16:09:55.512861] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:26.699 [2024-07-15 16:09:55.512867] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:26.699 [2024-07-15 16:09:55.515472] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:26.699 [2024-07-15 16:09:55.525147] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:26.699 [2024-07-15 16:09:55.525602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:26.699 [2024-07-15 16:09:55.525618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0980 with addr=10.0.0.2, port=4420
00:28:26.699 [2024-07-15 16:09:55.525625] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0980 is same with the state(5) to be set
00:28:26.699 [2024-07-15 16:09:55.525787] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0980 (9): Bad file descriptor
00:28:26.699 [2024-07-15 16:09:55.525950] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:26.699 [2024-07-15 16:09:55.525959] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:26.699 [2024-07-15 16:09:55.525965] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:26.699 [2024-07-15 16:09:55.528568] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:26.699 [2024-07-15 16:09:55.538069] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:26.699 [2024-07-15 16:09:55.538531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:26.699 [2024-07-15 16:09:55.538573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0980 with addr=10.0.0.2, port=4420
00:28:26.699 [2024-07-15 16:09:55.538596] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0980 is same with the state(5) to be set
00:28:26.699 [2024-07-15 16:09:55.539176] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0980 (9): Bad file descriptor
00:28:26.699 [2024-07-15 16:09:55.539714] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:26.699 [2024-07-15 16:09:55.539727] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:26.699 [2024-07-15 16:09:55.539736] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:26.699 [2024-07-15 16:09:55.543797] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:26.699 [2024-07-15 16:09:55.551549] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:26.699 [2024-07-15 16:09:55.551991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:26.699 [2024-07-15 16:09:55.552007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0980 with addr=10.0.0.2, port=4420
00:28:26.699 [2024-07-15 16:09:55.552015] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0980 is same with the state(5) to be set
00:28:26.699 [2024-07-15 16:09:55.552182] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0980 (9): Bad file descriptor
00:28:26.699 [2024-07-15 16:09:55.552354] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:26.699 [2024-07-15 16:09:55.552364] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:26.699 [2024-07-15 16:09:55.552370] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:26.699 [2024-07-15 16:09:55.555039] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:26.699 [2024-07-15 16:09:55.564459] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:26.699 [2024-07-15 16:09:55.564916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:26.699 [2024-07-15 16:09:55.564957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0980 with addr=10.0.0.2, port=4420
00:28:26.699 [2024-07-15 16:09:55.564979] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0980 is same with the state(5) to be set
00:28:26.699 [2024-07-15 16:09:55.565468] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0980 (9): Bad file descriptor
00:28:26.699 [2024-07-15 16:09:55.565633] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:26.699 [2024-07-15 16:09:55.565642] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:26.699 [2024-07-15 16:09:55.565648] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:26.699 [2024-07-15 16:09:55.568245] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:26.699 [2024-07-15 16:09:55.577295] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:26.699 [2024-07-15 16:09:55.577738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:26.699 [2024-07-15 16:09:55.577755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0980 with addr=10.0.0.2, port=4420
00:28:26.699 [2024-07-15 16:09:55.577762] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0980 is same with the state(5) to be set
00:28:26.699 [2024-07-15 16:09:55.577925] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0980 (9): Bad file descriptor
00:28:26.699 [2024-07-15 16:09:55.578088] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:26.699 [2024-07-15 16:09:55.578096] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:26.699 [2024-07-15 16:09:55.578102] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:26.699 [2024-07-15 16:09:55.580710] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:26.699 [2024-07-15 16:09:55.590210] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:26.699 [2024-07-15 16:09:55.590661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:26.699 [2024-07-15 16:09:55.590702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0980 with addr=10.0.0.2, port=4420
00:28:26.699 [2024-07-15 16:09:55.590732] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0980 is same with the state(5) to be set
00:28:26.699 [2024-07-15 16:09:55.591109] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0980 (9): Bad file descriptor
00:28:26.699 [2024-07-15 16:09:55.591279] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:26.699 [2024-07-15 16:09:55.591289] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:26.699 [2024-07-15 16:09:55.591294] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:26.699 [2024-07-15 16:09:55.593890] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:26.699 [2024-07-15 16:09:55.603089] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:26.699 [2024-07-15 16:09:55.603541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:26.699 [2024-07-15 16:09:55.603583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0980 with addr=10.0.0.2, port=4420
00:28:26.699 [2024-07-15 16:09:55.603605] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0980 is same with the state(5) to be set
00:28:26.699 [2024-07-15 16:09:55.604012] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0980 (9): Bad file descriptor
00:28:26.699 [2024-07-15 16:09:55.604176] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:26.699 [2024-07-15 16:09:55.604185] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:26.699 [2024-07-15 16:09:55.604191] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:26.699 [2024-07-15 16:09:55.606793] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:26.699 [2024-07-15 16:09:55.615984] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:26.699 [2024-07-15 16:09:55.616427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:26.699 [2024-07-15 16:09:55.616444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0980 with addr=10.0.0.2, port=4420
00:28:26.699 [2024-07-15 16:09:55.616451] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0980 is same with the state(5) to be set
00:28:26.699 [2024-07-15 16:09:55.616612] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0980 (9): Bad file descriptor
00:28:26.699 [2024-07-15 16:09:55.616775] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:26.699 [2024-07-15 16:09:55.616784] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:26.699 [2024-07-15 16:09:55.616790] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:26.699 [2024-07-15 16:09:55.619389] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:26.700 [2024-07-15 16:09:55.629005] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:26.700 [2024-07-15 16:09:55.629464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:26.700 [2024-07-15 16:09:55.629480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0980 with addr=10.0.0.2, port=4420
00:28:26.700 [2024-07-15 16:09:55.629488] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0980 is same with the state(5) to be set
00:28:26.700 [2024-07-15 16:09:55.629665] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0980 (9): Bad file descriptor
00:28:26.700 [2024-07-15 16:09:55.629844] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:26.700 [2024-07-15 16:09:55.629857] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:26.700 [2024-07-15 16:09:55.629863] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:26.959 [2024-07-15 16:09:55.632704] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:26.959 [2024-07-15 16:09:55.641947] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:26.959 [2024-07-15 16:09:55.642371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:26.959 [2024-07-15 16:09:55.642426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0980 with addr=10.0.0.2, port=4420
00:28:26.959 [2024-07-15 16:09:55.642449] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0980 is same with the state(5) to be set
00:28:26.959 [2024-07-15 16:09:55.642979] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0980 (9): Bad file descriptor
00:28:26.959 [2024-07-15 16:09:55.643144] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:26.959 [2024-07-15 16:09:55.643153] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:26.959 [2024-07-15 16:09:55.643158] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:26.959 [2024-07-15 16:09:55.645755] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:26.959 [2024-07-15 16:09:55.654740] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:26.959 [2024-07-15 16:09:55.655185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:26.959 [2024-07-15 16:09:55.655201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0980 with addr=10.0.0.2, port=4420
00:28:26.959 [2024-07-15 16:09:55.655209] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0980 is same with the state(5) to be set
00:28:26.959 [2024-07-15 16:09:55.655379] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0980 (9): Bad file descriptor
00:28:26.959 [2024-07-15 16:09:55.655543] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:26.959 [2024-07-15 16:09:55.655552] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:26.959 [2024-07-15 16:09:55.655558] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:26.959 [2024-07-15 16:09:55.658255] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:26.959 [2024-07-15 16:09:55.667649] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:26.959 [2024-07-15 16:09:55.668097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:26.959 [2024-07-15 16:09:55.668138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0980 with addr=10.0.0.2, port=4420
00:28:26.959 [2024-07-15 16:09:55.668160] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0980 is same with the state(5) to be set
00:28:26.959 [2024-07-15 16:09:55.668756] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0980 (9): Bad file descriptor
00:28:26.959 [2024-07-15 16:09:55.669197] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:26.959 [2024-07-15 16:09:55.669206] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:26.959 [2024-07-15 16:09:55.669212] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:26.959 [2024-07-15 16:09:55.671811] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:26.959 [2024-07-15 16:09:55.680554] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:26.959 [2024-07-15 16:09:55.681008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:26.959 [2024-07-15 16:09:55.681050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0980 with addr=10.0.0.2, port=4420
00:28:26.959 [2024-07-15 16:09:55.681072] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0980 is same with the state(5) to be set
00:28:26.959 [2024-07-15 16:09:55.681610] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0980 (9): Bad file descriptor
00:28:26.959 [2024-07-15 16:09:55.681865] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:26.959 [2024-07-15 16:09:55.681877] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:26.959 [2024-07-15 16:09:55.681887] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:26.959 [2024-07-15 16:09:55.685950] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:26.959 [2024-07-15 16:09:55.693758] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:26.959 [2024-07-15 16:09:55.694181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:26.959 [2024-07-15 16:09:55.694240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0980 with addr=10.0.0.2, port=4420
00:28:26.959 [2024-07-15 16:09:55.694265] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0980 is same with the state(5) to be set
00:28:26.959 [2024-07-15 16:09:55.694818] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0980 (9): Bad file descriptor
00:28:26.959 [2024-07-15 16:09:55.694988] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:26.959 [2024-07-15 16:09:55.694998] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:26.959 [2024-07-15 16:09:55.695004] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:26.959 [2024-07-15 16:09:55.697678] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:26.959 [2024-07-15 16:09:55.706687] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:26.959 [2024-07-15 16:09:55.707052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:26.959 [2024-07-15 16:09:55.707068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0980 with addr=10.0.0.2, port=4420
00:28:26.959 [2024-07-15 16:09:55.707075] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0980 is same with the state(5) to be set
00:28:26.959 [2024-07-15 16:09:55.707244] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0980 (9): Bad file descriptor
00:28:26.960 [2024-07-15 16:09:55.707407] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:26.960 [2024-07-15 16:09:55.707416] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:26.960 [2024-07-15 16:09:55.707422] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:26.960 [2024-07-15 16:09:55.710021] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:26.960 [2024-07-15 16:09:55.719663] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:26.960 [2024-07-15 16:09:55.720111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:26.960 [2024-07-15 16:09:55.720153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0980 with addr=10.0.0.2, port=4420
00:28:26.960 [2024-07-15 16:09:55.720176] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0980 is same with the state(5) to be set
00:28:26.960 [2024-07-15 16:09:55.720778] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0980 (9): Bad file descriptor
00:28:26.960 [2024-07-15 16:09:55.721241] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:26.960 [2024-07-15 16:09:55.721250] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:26.960 [2024-07-15 16:09:55.721257] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:26.960 [2024-07-15 16:09:55.724066] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:26.960 [2024-07-15 16:09:55.732481] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:26.960 [2024-07-15 16:09:55.732935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:26.960 [2024-07-15 16:09:55.732977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0980 with addr=10.0.0.2, port=4420
00:28:26.960 [2024-07-15 16:09:55.732999] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0980 is same with the state(5) to be set
00:28:26.960 [2024-07-15 16:09:55.733423] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0980 (9): Bad file descriptor
00:28:26.960 [2024-07-15 16:09:55.733588] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:26.960 [2024-07-15 16:09:55.733597] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:26.960 [2024-07-15 16:09:55.733604] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:26.960 [2024-07-15 16:09:55.736198] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:26.960 [2024-07-15 16:09:55.745396] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:26.960 [2024-07-15 16:09:55.745852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:26.960 [2024-07-15 16:09:55.745893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0980 with addr=10.0.0.2, port=4420
00:28:26.960 [2024-07-15 16:09:55.745916] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0980 is same with the state(5) to be set
00:28:26.960 [2024-07-15 16:09:55.746398] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0980 (9): Bad file descriptor
00:28:26.960 [2024-07-15 16:09:55.746563] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:26.960 [2024-07-15 16:09:55.746572] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:26.960 [2024-07-15 16:09:55.746578] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:26.960 [2024-07-15 16:09:55.749171] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:26.960 [2024-07-15 16:09:55.758252] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:26.960 [2024-07-15 16:09:55.758695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:26.960 [2024-07-15 16:09:55.758712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0980 with addr=10.0.0.2, port=4420
00:28:26.960 [2024-07-15 16:09:55.758719] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0980 is same with the state(5) to be set
00:28:26.960 [2024-07-15 16:09:55.758882] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0980 (9): Bad file descriptor
00:28:26.960 [2024-07-15 16:09:55.759045] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:26.960 [2024-07-15 16:09:55.759054] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:26.960 [2024-07-15 16:09:55.759064] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:26.960 [2024-07-15 16:09:55.761671] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:26.960 [2024-07-15 16:09:55.771176] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:26.960 [2024-07-15 16:09:55.771522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:26.960 [2024-07-15 16:09:55.771539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0980 with addr=10.0.0.2, port=4420
00:28:26.960 [2024-07-15 16:09:55.771546] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0980 is same with the state(5) to be set
00:28:26.960 [2024-07-15 16:09:55.771708] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0980 (9): Bad file descriptor
00:28:26.960 [2024-07-15 16:09:55.771872] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:26.960 [2024-07-15 16:09:55.771881] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:26.960 [2024-07-15 16:09:55.771887] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:26.960 [2024-07-15 16:09:55.774585] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:26.960 [2024-07-15 16:09:55.784092] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:26.960 [2024-07-15 16:09:55.784541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:26.960 [2024-07-15 16:09:55.784557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0980 with addr=10.0.0.2, port=4420
00:28:26.960 [2024-07-15 16:09:55.784564] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0980 is same with the state(5) to be set
00:28:26.960 [2024-07-15 16:09:55.784727] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0980 (9): Bad file descriptor
00:28:26.960 [2024-07-15 16:09:55.784891] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:26.960 [2024-07-15 16:09:55.784900] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:26.960 [2024-07-15 16:09:55.784906] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:26.960 [2024-07-15 16:09:55.787508] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:26.960 [2024-07-15 16:09:55.797007] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:26.960 [2024-07-15 16:09:55.797425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:26.960 [2024-07-15 16:09:55.797441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0980 with addr=10.0.0.2, port=4420
00:28:26.960 [2024-07-15 16:09:55.797448] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0980 is same with the state(5) to be set
00:28:26.960 [2024-07-15 16:09:55.797612] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0980 (9): Bad file descriptor
00:28:26.960 [2024-07-15 16:09:55.797776] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:26.960 [2024-07-15 16:09:55.797785] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:26.960 [2024-07-15 16:09:55.797791] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:26.960 [2024-07-15 16:09:55.800395] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:26.960 [2024-07-15 16:09:55.810168] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:26.960 [2024-07-15 16:09:55.810629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.960 [2024-07-15 16:09:55.810649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0980 with addr=10.0.0.2, port=4420 00:28:26.960 [2024-07-15 16:09:55.810657] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0980 is same with the state(5) to be set 00:28:26.960 [2024-07-15 16:09:55.810836] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0980 (9): Bad file descriptor 00:28:26.960 [2024-07-15 16:09:55.811000] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:26.960 [2024-07-15 16:09:55.811010] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:26.960 [2024-07-15 16:09:55.811016] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:26.960 [2024-07-15 16:09:55.813619] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:26.960 [2024-07-15 16:09:55.822962] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:26.960 [2024-07-15 16:09:55.823393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.960 [2024-07-15 16:09:55.823436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0980 with addr=10.0.0.2, port=4420 00:28:26.960 [2024-07-15 16:09:55.823458] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0980 is same with the state(5) to be set 00:28:26.960 [2024-07-15 16:09:55.824038] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0980 (9): Bad file descriptor 00:28:26.960 [2024-07-15 16:09:55.824482] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:26.960 [2024-07-15 16:09:55.824491] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:26.960 [2024-07-15 16:09:55.824498] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:26.960 [2024-07-15 16:09:55.827164] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:26.960 [2024-07-15 16:09:55.835851] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:26.960 [2024-07-15 16:09:55.836296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.960 [2024-07-15 16:09:55.836312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0980 with addr=10.0.0.2, port=4420 00:28:26.960 [2024-07-15 16:09:55.836319] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0980 is same with the state(5) to be set 00:28:26.960 [2024-07-15 16:09:55.836482] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0980 (9): Bad file descriptor 00:28:26.960 [2024-07-15 16:09:55.836645] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:26.960 [2024-07-15 16:09:55.836654] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:26.960 [2024-07-15 16:09:55.836660] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:26.960 [2024-07-15 16:09:55.839265] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:26.960 [2024-07-15 16:09:55.848762] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:26.960 [2024-07-15 16:09:55.849094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.960 [2024-07-15 16:09:55.849110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0980 with addr=10.0.0.2, port=4420 00:28:26.960 [2024-07-15 16:09:55.849117] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0980 is same with the state(5) to be set 00:28:26.960 [2024-07-15 16:09:55.849285] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0980 (9): Bad file descriptor 00:28:26.960 [2024-07-15 16:09:55.849453] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:26.960 [2024-07-15 16:09:55.849462] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:26.960 [2024-07-15 16:09:55.849468] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:26.960 [2024-07-15 16:09:55.852066] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:26.960 [2024-07-15 16:09:55.861609] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:26.960 [2024-07-15 16:09:55.862062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.960 [2024-07-15 16:09:55.862104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0980 with addr=10.0.0.2, port=4420 00:28:26.960 [2024-07-15 16:09:55.862127] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0980 is same with the state(5) to be set 00:28:26.960 [2024-07-15 16:09:55.862559] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0980 (9): Bad file descriptor 00:28:26.960 [2024-07-15 16:09:55.862734] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:26.960 [2024-07-15 16:09:55.862743] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:26.960 [2024-07-15 16:09:55.862749] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:26.960 [2024-07-15 16:09:55.865397] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:26.960 [2024-07-15 16:09:55.874615] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:26.960 [2024-07-15 16:09:55.875051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.960 [2024-07-15 16:09:55.875068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0980 with addr=10.0.0.2, port=4420 00:28:26.960 [2024-07-15 16:09:55.875075] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0980 is same with the state(5) to be set 00:28:26.960 [2024-07-15 16:09:55.875252] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0980 (9): Bad file descriptor 00:28:26.960 [2024-07-15 16:09:55.875426] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:26.960 [2024-07-15 16:09:55.875436] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:26.960 [2024-07-15 16:09:55.875442] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:26.960 [2024-07-15 16:09:55.878191] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:26.960 [2024-07-15 16:09:55.887505] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:26.960 [2024-07-15 16:09:55.887856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.960 [2024-07-15 16:09:55.887872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0980 with addr=10.0.0.2, port=4420 00:28:26.960 [2024-07-15 16:09:55.887880] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0980 is same with the state(5) to be set 00:28:26.960 [2024-07-15 16:09:55.888069] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0980 (9): Bad file descriptor 00:28:26.960 [2024-07-15 16:09:55.888254] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:26.960 [2024-07-15 16:09:55.888264] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:26.960 [2024-07-15 16:09:55.888273] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:26.960 [2024-07-15 16:09:55.891109] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:27.219 [2024-07-15 16:09:55.900531] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:27.219 [2024-07-15 16:09:55.900820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.219 [2024-07-15 16:09:55.900836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0980 with addr=10.0.0.2, port=4420 00:28:27.219 [2024-07-15 16:09:55.900844] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0980 is same with the state(5) to be set 00:28:27.219 [2024-07-15 16:09:55.901007] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0980 (9): Bad file descriptor 00:28:27.219 [2024-07-15 16:09:55.901171] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:27.219 [2024-07-15 16:09:55.901180] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:27.219 [2024-07-15 16:09:55.901186] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:27.219 [2024-07-15 16:09:55.903881] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:27.219 [2024-07-15 16:09:55.913398] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:27.219 [2024-07-15 16:09:55.913754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.219 [2024-07-15 16:09:55.913795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0980 with addr=10.0.0.2, port=4420 00:28:27.219 [2024-07-15 16:09:55.913818] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0980 is same with the state(5) to be set 00:28:27.219 [2024-07-15 16:09:55.914373] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0980 (9): Bad file descriptor 00:28:27.219 [2024-07-15 16:09:55.914537] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:27.219 [2024-07-15 16:09:55.914547] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:27.219 [2024-07-15 16:09:55.914552] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:27.219 [2024-07-15 16:09:55.917263] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:27.219 [2024-07-15 16:09:55.926319] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:27.219 [2024-07-15 16:09:55.926789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.219 [2024-07-15 16:09:55.926834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0980 with addr=10.0.0.2, port=4420 00:28:27.219 [2024-07-15 16:09:55.926861] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0980 is same with the state(5) to be set 00:28:27.219 [2024-07-15 16:09:55.927410] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0980 (9): Bad file descriptor 00:28:27.219 [2024-07-15 16:09:55.927577] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:27.219 [2024-07-15 16:09:55.927587] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:27.219 [2024-07-15 16:09:55.927593] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:27.219 [2024-07-15 16:09:55.930220] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:27.219 [2024-07-15 16:09:55.939287] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:27.219 [2024-07-15 16:09:55.939668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.219 [2024-07-15 16:09:55.939685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0980 with addr=10.0.0.2, port=4420 00:28:27.219 [2024-07-15 16:09:55.939696] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0980 is same with the state(5) to be set 00:28:27.219 [2024-07-15 16:09:55.939870] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0980 (9): Bad file descriptor 00:28:27.219 [2024-07-15 16:09:55.940044] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:27.219 [2024-07-15 16:09:55.940053] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:27.219 [2024-07-15 16:09:55.940059] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:27.219 [2024-07-15 16:09:55.942711] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:27.219 [2024-07-15 16:09:55.952220] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:27.219 [2024-07-15 16:09:55.952601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.219 [2024-07-15 16:09:55.952617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0980 with addr=10.0.0.2, port=4420 00:28:27.219 [2024-07-15 16:09:55.952624] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0980 is same with the state(5) to be set 00:28:27.219 [2024-07-15 16:09:55.952787] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0980 (9): Bad file descriptor 00:28:27.219 [2024-07-15 16:09:55.952950] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:27.219 [2024-07-15 16:09:55.952959] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:27.219 [2024-07-15 16:09:55.952966] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:27.219 [2024-07-15 16:09:55.955563] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:27.219 [2024-07-15 16:09:55.965082] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:27.219 [2024-07-15 16:09:55.965501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.219 [2024-07-15 16:09:55.965539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0980 with addr=10.0.0.2, port=4420 00:28:27.219 [2024-07-15 16:09:55.965563] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0980 is same with the state(5) to be set 00:28:27.219 [2024-07-15 16:09:55.966142] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0980 (9): Bad file descriptor 00:28:27.219 [2024-07-15 16:09:55.966729] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:27.219 [2024-07-15 16:09:55.966739] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:27.219 [2024-07-15 16:09:55.966745] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:27.219 [2024-07-15 16:09:55.969348] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:27.219 [2024-07-15 16:09:55.977877] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:27.219 [2024-07-15 16:09:55.978332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.219 [2024-07-15 16:09:55.978375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0980 with addr=10.0.0.2, port=4420 00:28:27.219 [2024-07-15 16:09:55.978396] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0980 is same with the state(5) to be set 00:28:27.219 [2024-07-15 16:09:55.978615] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0980 (9): Bad file descriptor 00:28:27.219 [2024-07-15 16:09:55.978778] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:27.219 [2024-07-15 16:09:55.978789] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:27.219 [2024-07-15 16:09:55.978796] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:27.219 [2024-07-15 16:09:55.981397] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:27.219 [2024-07-15 16:09:55.990747] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:27.219 [2024-07-15 16:09:55.991203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.219 [2024-07-15 16:09:55.991258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0980 with addr=10.0.0.2, port=4420 00:28:27.219 [2024-07-15 16:09:55.991281] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0980 is same with the state(5) to be set 00:28:27.219 [2024-07-15 16:09:55.991769] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0980 (9): Bad file descriptor 00:28:27.219 [2024-07-15 16:09:55.991933] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:27.219 [2024-07-15 16:09:55.991942] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:27.219 [2024-07-15 16:09:55.991948] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:27.219 [2024-07-15 16:09:55.994546] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:27.219 [2024-07-15 16:09:56.003585] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:27.219 [2024-07-15 16:09:56.004038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.219 [2024-07-15 16:09:56.004080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0980 with addr=10.0.0.2, port=4420 00:28:27.219 [2024-07-15 16:09:56.004101] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0980 is same with the state(5) to be set 00:28:27.219 [2024-07-15 16:09:56.004612] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0980 (9): Bad file descriptor 00:28:27.219 [2024-07-15 16:09:56.004777] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:27.219 [2024-07-15 16:09:56.004786] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:27.219 [2024-07-15 16:09:56.004792] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:27.219 [2024-07-15 16:09:56.008602] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:27.219 [2024-07-15 16:09:56.017164] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:27.219 [2024-07-15 16:09:56.017535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.220 [2024-07-15 16:09:56.017551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0980 with addr=10.0.0.2, port=4420 00:28:27.220 [2024-07-15 16:09:56.017559] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0980 is same with the state(5) to be set 00:28:27.220 [2024-07-15 16:09:56.017725] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0980 (9): Bad file descriptor 00:28:27.220 [2024-07-15 16:09:56.017892] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:27.220 [2024-07-15 16:09:56.017902] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:27.220 [2024-07-15 16:09:56.017907] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:27.220 [2024-07-15 16:09:56.020581] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:27.220 [2024-07-15 16:09:56.029949] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:27.220 [2024-07-15 16:09:56.030368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.220 [2024-07-15 16:09:56.030386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0980 with addr=10.0.0.2, port=4420 00:28:27.220 [2024-07-15 16:09:56.030393] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0980 is same with the state(5) to be set 00:28:27.220 [2024-07-15 16:09:56.030555] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0980 (9): Bad file descriptor 00:28:27.220 [2024-07-15 16:09:56.030718] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:27.220 [2024-07-15 16:09:56.030728] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:27.220 [2024-07-15 16:09:56.030734] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:27.220 [2024-07-15 16:09:56.033335] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:27.220 [2024-07-15 16:09:56.042872] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:27.220 [2024-07-15 16:09:56.043322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.220 [2024-07-15 16:09:56.043338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0980 with addr=10.0.0.2, port=4420 00:28:27.220 [2024-07-15 16:09:56.043345] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0980 is same with the state(5) to be set 00:28:27.220 [2024-07-15 16:09:56.043508] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0980 (9): Bad file descriptor 00:28:27.220 [2024-07-15 16:09:56.043672] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:27.220 [2024-07-15 16:09:56.043681] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:27.220 [2024-07-15 16:09:56.043687] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:27.220 [2024-07-15 16:09:56.046289] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:27.220 [2024-07-15 16:09:56.055789] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:27.220 [2024-07-15 16:09:56.056214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.220 [2024-07-15 16:09:56.056235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0980 with addr=10.0.0.2, port=4420 00:28:27.220 [2024-07-15 16:09:56.056243] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0980 is same with the state(5) to be set 00:28:27.220 [2024-07-15 16:09:56.056406] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0980 (9): Bad file descriptor 00:28:27.220 [2024-07-15 16:09:56.056569] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:27.220 [2024-07-15 16:09:56.056578] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:27.220 [2024-07-15 16:09:56.056584] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:27.220 [2024-07-15 16:09:56.059314] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:27.220 [2024-07-15 16:09:56.068624] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:27.220 [2024-07-15 16:09:56.068993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.220 [2024-07-15 16:09:56.069035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0980 with addr=10.0.0.2, port=4420 00:28:27.220 [2024-07-15 16:09:56.069064] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0980 is same with the state(5) to be set 00:28:27.220 [2024-07-15 16:09:56.069658] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0980 (9): Bad file descriptor 00:28:27.220 [2024-07-15 16:09:56.070202] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:27.220 [2024-07-15 16:09:56.070210] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:27.220 [2024-07-15 16:09:56.070216] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:27.220 [2024-07-15 16:09:56.072814] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:27.220 [2024-07-15 16:09:56.081459] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:27.220 [2024-07-15 16:09:56.081906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.220 [2024-07-15 16:09:56.081922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0980 with addr=10.0.0.2, port=4420 00:28:27.220 [2024-07-15 16:09:56.081929] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0980 is same with the state(5) to be set 00:28:27.220 [2024-07-15 16:09:56.082092] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0980 (9): Bad file descriptor 00:28:27.220 [2024-07-15 16:09:56.082262] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:27.220 [2024-07-15 16:09:56.082272] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:27.220 [2024-07-15 16:09:56.082278] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:27.220 [2024-07-15 16:09:56.084872] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:27.220 [2024-07-15 16:09:56.094378] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:27.220 [2024-07-15 16:09:56.094802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.220 [2024-07-15 16:09:56.094845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0980 with addr=10.0.0.2, port=4420 00:28:27.220 [2024-07-15 16:09:56.094869] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0980 is same with the state(5) to be set 00:28:27.220 [2024-07-15 16:09:56.095460] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0980 (9): Bad file descriptor 00:28:27.220 [2024-07-15 16:09:56.096043] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:27.220 [2024-07-15 16:09:56.096056] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:27.220 [2024-07-15 16:09:56.096065] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:27.220 [2024-07-15 16:09:56.100128] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:27.220 [2024-07-15 16:09:56.108020] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:27.220 [2024-07-15 16:09:56.108474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.220 [2024-07-15 16:09:56.108516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0980 with addr=10.0.0.2, port=4420 00:28:27.220 [2024-07-15 16:09:56.108538] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0980 is same with the state(5) to be set 00:28:27.220 [2024-07-15 16:09:56.108922] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0980 (9): Bad file descriptor 00:28:27.220 [2024-07-15 16:09:56.109091] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:27.220 [2024-07-15 16:09:56.109100] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:27.220 [2024-07-15 16:09:56.109109] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:27.220 [2024-07-15 16:09:56.111785] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:27.220 [2024-07-15 16:09:56.120817] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:27.220 [2024-07-15 16:09:56.121274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.220 [2024-07-15 16:09:56.121316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0980 with addr=10.0.0.2, port=4420 00:28:27.220 [2024-07-15 16:09:56.121337] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0980 is same with the state(5) to be set 00:28:27.220 [2024-07-15 16:09:56.121885] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0980 (9): Bad file descriptor 00:28:27.220 [2024-07-15 16:09:56.122049] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:27.220 [2024-07-15 16:09:56.122058] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:27.220 [2024-07-15 16:09:56.122064] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:27.220 [2024-07-15 16:09:56.124730] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:27.220 [2024-07-15 16:09:56.133702] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:27.220 [2024-07-15 16:09:56.134139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.220 [2024-07-15 16:09:56.134155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0980 with addr=10.0.0.2, port=4420 00:28:27.220 [2024-07-15 16:09:56.134162] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0980 is same with the state(5) to be set 00:28:27.220 [2024-07-15 16:09:56.134332] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0980 (9): Bad file descriptor 00:28:27.220 [2024-07-15 16:09:56.134497] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:27.220 [2024-07-15 16:09:56.134506] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:27.220 [2024-07-15 16:09:56.134511] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:27.220 [2024-07-15 16:09:56.137107] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:27.220 [2024-07-15 16:09:56.146711] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:27.220 [2024-07-15 16:09:56.147031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.220 [2024-07-15 16:09:56.147047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0980 with addr=10.0.0.2, port=4420 00:28:27.220 [2024-07-15 16:09:56.147054] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0980 is same with the state(5) to be set 00:28:27.220 [2024-07-15 16:09:56.147216] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0980 (9): Bad file descriptor 00:28:27.220 [2024-07-15 16:09:56.147387] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:27.220 [2024-07-15 16:09:56.147397] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:27.220 [2024-07-15 16:09:56.147404] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:27.220 [2024-07-15 16:09:56.150203] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:27.479 [2024-07-15 16:09:56.159764] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:27.479 [2024-07-15 16:09:56.160212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.479 [2024-07-15 16:09:56.160232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0980 with addr=10.0.0.2, port=4420 00:28:27.479 [2024-07-15 16:09:56.160240] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0980 is same with the state(5) to be set 00:28:27.479 [2024-07-15 16:09:56.160428] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0980 (9): Bad file descriptor 00:28:27.479 [2024-07-15 16:09:56.160603] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:27.480 [2024-07-15 16:09:56.160612] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:27.480 [2024-07-15 16:09:56.160618] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:27.480 [2024-07-15 16:09:56.163276] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:27.480 [2024-07-15 16:09:56.172625] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:27.480 [2024-07-15 16:09:56.173070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.480 [2024-07-15 16:09:56.173087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0980 with addr=10.0.0.2, port=4420 00:28:27.480 [2024-07-15 16:09:56.173094] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0980 is same with the state(5) to be set 00:28:27.480 [2024-07-15 16:09:56.173262] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0980 (9): Bad file descriptor 00:28:27.480 [2024-07-15 16:09:56.173426] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:27.480 [2024-07-15 16:09:56.173435] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:27.480 [2024-07-15 16:09:56.173441] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:27.480 [2024-07-15 16:09:56.176119] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:27.480 [2024-07-15 16:09:56.185474] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:27.480 [2024-07-15 16:09:56.185926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.480 [2024-07-15 16:09:56.185967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0980 with addr=10.0.0.2, port=4420 00:28:27.480 [2024-07-15 16:09:56.185988] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0980 is same with the state(5) to be set 00:28:27.480 [2024-07-15 16:09:56.186347] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0980 (9): Bad file descriptor 00:28:27.480 [2024-07-15 16:09:56.186513] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:27.480 [2024-07-15 16:09:56.186523] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:27.480 [2024-07-15 16:09:56.186529] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:27.480 [2024-07-15 16:09:56.189126] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:27.480 [2024-07-15 16:09:56.198330] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:27.480 [2024-07-15 16:09:56.198781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.480 [2024-07-15 16:09:56.198821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0980 with addr=10.0.0.2, port=4420 00:28:27.480 [2024-07-15 16:09:56.198842] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0980 is same with the state(5) to be set 00:28:27.480 [2024-07-15 16:09:56.199342] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0980 (9): Bad file descriptor 00:28:27.480 [2024-07-15 16:09:56.199508] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:27.480 [2024-07-15 16:09:56.199517] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:27.480 [2024-07-15 16:09:56.199523] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:27.480 [2024-07-15 16:09:56.202118] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:27.480 [2024-07-15 16:09:56.211183] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:27.480 [2024-07-15 16:09:56.211625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.480 [2024-07-15 16:09:56.211642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0980 with addr=10.0.0.2, port=4420 00:28:27.480 [2024-07-15 16:09:56.211649] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0980 is same with the state(5) to be set 00:28:27.480 [2024-07-15 16:09:56.211812] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0980 (9): Bad file descriptor 00:28:27.480 [2024-07-15 16:09:56.211975] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:27.480 [2024-07-15 16:09:56.211984] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:27.480 [2024-07-15 16:09:56.211990] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:27.480 [2024-07-15 16:09:56.214598] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:27.480 [2024-07-15 16:09:56.224125] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:27.480 [2024-07-15 16:09:56.224591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.480 [2024-07-15 16:09:56.224608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0980 with addr=10.0.0.2, port=4420 00:28:27.480 [2024-07-15 16:09:56.224616] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0980 is same with the state(5) to be set 00:28:27.480 [2024-07-15 16:09:56.224789] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0980 (9): Bad file descriptor 00:28:27.480 [2024-07-15 16:09:56.224964] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:27.480 [2024-07-15 16:09:56.224973] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:27.480 [2024-07-15 16:09:56.224979] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:27.480 [2024-07-15 16:09:56.227661] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:27.480 [2024-07-15 16:09:56.237070] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:27.480 [2024-07-15 16:09:56.237381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.480 [2024-07-15 16:09:56.237397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0980 with addr=10.0.0.2, port=4420 00:28:27.480 [2024-07-15 16:09:56.237404] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0980 is same with the state(5) to be set 00:28:27.480 [2024-07-15 16:09:56.237567] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0980 (9): Bad file descriptor 00:28:27.480 [2024-07-15 16:09:56.237730] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:27.480 [2024-07-15 16:09:56.237740] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:27.480 [2024-07-15 16:09:56.237748] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:27.480 [2024-07-15 16:09:56.240416] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:27.480 [2024-07-15 16:09:56.250015] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:27.480 [2024-07-15 16:09:56.250429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.480 [2024-07-15 16:09:56.250472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0980 with addr=10.0.0.2, port=4420 00:28:27.480 [2024-07-15 16:09:56.250494] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0980 is same with the state(5) to be set 00:28:27.480 [2024-07-15 16:09:56.251076] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0980 (9): Bad file descriptor 00:28:27.480 [2024-07-15 16:09:56.251581] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:27.480 [2024-07-15 16:09:56.251592] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:27.480 [2024-07-15 16:09:56.251598] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:27.480 [2024-07-15 16:09:56.254196] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:27.480 [2024-07-15 16:09:56.263081] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:27.480 [2024-07-15 16:09:56.263454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.480 [2024-07-15 16:09:56.263499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0980 with addr=10.0.0.2, port=4420 00:28:27.480 [2024-07-15 16:09:56.263522] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0980 is same with the state(5) to be set 00:28:27.480 [2024-07-15 16:09:56.264101] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0980 (9): Bad file descriptor 00:28:27.480 [2024-07-15 16:09:56.264673] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:27.480 [2024-07-15 16:09:56.264684] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:27.480 [2024-07-15 16:09:56.264690] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:27.480 [2024-07-15 16:09:56.267545] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:27.480 [2024-07-15 16:09:56.276254] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:27.480 [2024-07-15 16:09:56.276688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.480 [2024-07-15 16:09:56.276729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0980 with addr=10.0.0.2, port=4420 00:28:27.480 [2024-07-15 16:09:56.276752] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0980 is same with the state(5) to be set 00:28:27.480 [2024-07-15 16:09:56.277352] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0980 (9): Bad file descriptor 00:28:27.480 [2024-07-15 16:09:56.277799] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:27.480 [2024-07-15 16:09:56.277809] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:27.480 [2024-07-15 16:09:56.277815] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:27.480 [2024-07-15 16:09:56.281644] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:27.480 [2024-07-15 16:09:56.289936] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:27.480 [2024-07-15 16:09:56.290391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.480 [2024-07-15 16:09:56.290442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0980 with addr=10.0.0.2, port=4420 00:28:27.480 [2024-07-15 16:09:56.290465] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0980 is same with the state(5) to be set 00:28:27.480 [2024-07-15 16:09:56.291046] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0980 (9): Bad file descriptor 00:28:27.480 [2024-07-15 16:09:56.291645] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:27.480 [2024-07-15 16:09:56.291671] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:27.480 [2024-07-15 16:09:56.291690] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:27.480 [2024-07-15 16:09:56.294394] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:27.480 [2024-07-15 16:09:56.302809] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:27.480 [2024-07-15 16:09:56.303260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.480 [2024-07-15 16:09:56.303278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0980 with addr=10.0.0.2, port=4420 00:28:27.480 [2024-07-15 16:09:56.303285] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0980 is same with the state(5) to be set 00:28:27.480 [2024-07-15 16:09:56.303448] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0980 (9): Bad file descriptor 00:28:27.480 [2024-07-15 16:09:56.303612] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:27.481 [2024-07-15 16:09:56.303622] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:27.481 [2024-07-15 16:09:56.303628] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:27.481 [2024-07-15 16:09:56.306230] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:27.481 [2024-07-15 16:09:56.315737] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:27.481 [2024-07-15 16:09:56.316111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.481 [2024-07-15 16:09:56.316128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0980 with addr=10.0.0.2, port=4420 00:28:27.481 [2024-07-15 16:09:56.316135] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0980 is same with the state(5) to be set 00:28:27.481 [2024-07-15 16:09:56.316325] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0980 (9): Bad file descriptor 00:28:27.481 [2024-07-15 16:09:56.316499] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:27.481 [2024-07-15 16:09:56.316509] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:27.481 [2024-07-15 16:09:56.316516] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:27.481 [2024-07-15 16:09:56.319222] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:27.481 [2024-07-15 16:09:56.328576] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:27.481 [2024-07-15 16:09:56.328933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.481 [2024-07-15 16:09:56.328950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0980 with addr=10.0.0.2, port=4420 00:28:27.481 [2024-07-15 16:09:56.328957] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0980 is same with the state(5) to be set 00:28:27.481 [2024-07-15 16:09:56.329120] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0980 (9): Bad file descriptor 00:28:27.481 [2024-07-15 16:09:56.329292] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:27.481 [2024-07-15 16:09:56.329302] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:27.481 [2024-07-15 16:09:56.329308] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:27.481 [2024-07-15 16:09:56.331909] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:27.481 [2024-07-15 16:09:56.341425] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:27.481 [2024-07-15 16:09:56.341730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.481 [2024-07-15 16:09:56.341746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0980 with addr=10.0.0.2, port=4420 00:28:27.481 [2024-07-15 16:09:56.341753] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0980 is same with the state(5) to be set 00:28:27.481 [2024-07-15 16:09:56.341919] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0980 (9): Bad file descriptor 00:28:27.481 [2024-07-15 16:09:56.342082] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:27.481 [2024-07-15 16:09:56.342092] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:27.481 [2024-07-15 16:09:56.342098] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:27.481 [2024-07-15 16:09:56.344699] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:27.481 [2024-07-15 16:09:56.354320] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:27.481 [2024-07-15 16:09:56.354618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.481 [2024-07-15 16:09:56.354634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0980 with addr=10.0.0.2, port=4420 00:28:27.481 [2024-07-15 16:09:56.354641] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0980 is same with the state(5) to be set 00:28:27.481 [2024-07-15 16:09:56.354802] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0980 (9): Bad file descriptor 00:28:27.481 [2024-07-15 16:09:56.354967] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:27.481 [2024-07-15 16:09:56.354976] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:27.481 [2024-07-15 16:09:56.354982] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:27.481 [2024-07-15 16:09:56.357590] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:27.481 [2024-07-15 16:09:56.367120] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:27.481 [2024-07-15 16:09:56.367455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.481 [2024-07-15 16:09:56.367498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0980 with addr=10.0.0.2, port=4420 00:28:27.481 [2024-07-15 16:09:56.367521] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0980 is same with the state(5) to be set 00:28:27.481 [2024-07-15 16:09:56.367999] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0980 (9): Bad file descriptor 00:28:27.481 [2024-07-15 16:09:56.368164] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:27.481 [2024-07-15 16:09:56.368173] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:27.481 [2024-07-15 16:09:56.368180] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:27.481 [2024-07-15 16:09:56.370793] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:27.481 [2024-07-15 16:09:56.379957] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:27.481 [2024-07-15 16:09:56.380384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.481 [2024-07-15 16:09:56.380427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0980 with addr=10.0.0.2, port=4420 00:28:27.481 [2024-07-15 16:09:56.380450] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0980 is same with the state(5) to be set 00:28:27.481 [2024-07-15 16:09:56.380648] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0980 (9): Bad file descriptor 00:28:27.481 [2024-07-15 16:09:56.380813] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:27.481 [2024-07-15 16:09:56.380822] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:27.481 [2024-07-15 16:09:56.380828] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:27.481 [2024-07-15 16:09:56.383440] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:27.481 [2024-07-15 16:09:56.392906] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:27.481 [2024-07-15 16:09:56.393378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.481 [2024-07-15 16:09:56.393423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0980 with addr=10.0.0.2, port=4420 00:28:27.481 [2024-07-15 16:09:56.393446] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0980 is same with the state(5) to be set 00:28:27.481 [2024-07-15 16:09:56.393897] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0980 (9): Bad file descriptor 00:28:27.481 [2024-07-15 16:09:56.394063] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:27.481 [2024-07-15 16:09:56.394072] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:27.481 [2024-07-15 16:09:56.394078] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:27.481 [2024-07-15 16:09:56.396676] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:27.481 [2024-07-15 16:09:56.405729] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:27.481 [2024-07-15 16:09:56.406179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.481 [2024-07-15 16:09:56.406222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0980 with addr=10.0.0.2, port=4420 00:28:27.481 [2024-07-15 16:09:56.406259] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0980 is same with the state(5) to be set 00:28:27.481 [2024-07-15 16:09:56.406788] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0980 (9): Bad file descriptor 00:28:27.481 [2024-07-15 16:09:56.406952] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:27.481 [2024-07-15 16:09:56.406961] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:27.481 [2024-07-15 16:09:56.406967] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:27.481 [2024-07-15 16:09:56.409786] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:27.741 [2024-07-15 16:09:56.418895] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:27.741 [2024-07-15 16:09:56.419329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.741 [2024-07-15 16:09:56.419379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0980 with addr=10.0.0.2, port=4420 00:28:27.741 [2024-07-15 16:09:56.419410] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0980 is same with the state(5) to be set 00:28:27.741 [2024-07-15 16:09:56.419941] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0980 (9): Bad file descriptor 00:28:27.741 [2024-07-15 16:09:56.420121] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:27.741 [2024-07-15 16:09:56.420130] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:27.741 [2024-07-15 16:09:56.420136] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:27.741 [2024-07-15 16:09:56.422738] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:27.741 [2024-07-15 16:09:56.431706] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:27.741 [2024-07-15 16:09:56.432133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.741 [2024-07-15 16:09:56.432150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0980 with addr=10.0.0.2, port=4420 00:28:27.741 [2024-07-15 16:09:56.432157] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0980 is same with the state(5) to be set 00:28:27.741 [2024-07-15 16:09:56.432327] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0980 (9): Bad file descriptor 00:28:27.741 [2024-07-15 16:09:56.432493] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:27.741 [2024-07-15 16:09:56.432504] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:27.741 [2024-07-15 16:09:56.432510] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:27.741 [2024-07-15 16:09:56.435108] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:27.741 [2024-07-15 16:09:56.444569] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:27.741 [2024-07-15 16:09:56.444990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.741 [2024-07-15 16:09:56.445032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0980 with addr=10.0.0.2, port=4420 00:28:27.741 [2024-07-15 16:09:56.445055] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0980 is same with the state(5) to be set 00:28:27.741 [2024-07-15 16:09:56.445648] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0980 (9): Bad file descriptor 00:28:27.741 [2024-07-15 16:09:56.446124] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:27.741 [2024-07-15 16:09:56.446134] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:27.741 [2024-07-15 16:09:56.446140] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:27.741 [2024-07-15 16:09:56.448738] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:27.741 [2024-07-15 16:09:56.457589] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:27.741 [2024-07-15 16:09:56.458047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.741 [2024-07-15 16:09:56.458089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0980 with addr=10.0.0.2, port=4420 00:28:27.741 [2024-07-15 16:09:56.458111] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0980 is same with the state(5) to be set 00:28:27.741 [2024-07-15 16:09:56.458668] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0980 (9): Bad file descriptor 00:28:27.741 [2024-07-15 16:09:56.458844] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:27.741 [2024-07-15 16:09:56.458857] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:27.741 [2024-07-15 16:09:56.458863] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:27.741 [2024-07-15 16:09:56.461545] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:27.741 [2024-07-15 16:09:56.470463] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:27.741 [2024-07-15 16:09:56.470882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.741 [2024-07-15 16:09:56.470898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0980 with addr=10.0.0.2, port=4420 00:28:27.741 [2024-07-15 16:09:56.470905] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0980 is same with the state(5) to be set 00:28:27.741 [2024-07-15 16:09:56.471068] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0980 (9): Bad file descriptor 00:28:27.741 [2024-07-15 16:09:56.471239] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:27.741 [2024-07-15 16:09:56.471249] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:27.741 [2024-07-15 16:09:56.471255] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:27.741 [2024-07-15 16:09:56.473856] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:27.741 [2024-07-15 16:09:56.483405] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:27.741 [2024-07-15 16:09:56.483763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.741 [2024-07-15 16:09:56.483780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0980 with addr=10.0.0.2, port=4420 00:28:27.741 [2024-07-15 16:09:56.483788] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0980 is same with the state(5) to be set 00:28:27.741 [2024-07-15 16:09:56.483950] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0980 (9): Bad file descriptor 00:28:27.741 [2024-07-15 16:09:56.484113] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:27.741 [2024-07-15 16:09:56.484122] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:27.741 [2024-07-15 16:09:56.484128] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:27.742 [2024-07-15 16:09:56.486739] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:27.742 [2024-07-15 16:09:56.496266] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:27.742 [2024-07-15 16:09:56.496570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.742 [2024-07-15 16:09:56.496587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0980 with addr=10.0.0.2, port=4420 00:28:27.742 [2024-07-15 16:09:56.496594] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0980 is same with the state(5) to be set 00:28:27.742 [2024-07-15 16:09:56.496757] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0980 (9): Bad file descriptor 00:28:27.742 [2024-07-15 16:09:56.496920] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:27.742 [2024-07-15 16:09:56.496929] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:27.742 [2024-07-15 16:09:56.496935] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:27.742 [2024-07-15 16:09:56.499544] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:27.742 [2024-07-15 16:09:56.509074] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:27.742 [2024-07-15 16:09:56.509519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.742 [2024-07-15 16:09:56.509536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0980 with addr=10.0.0.2, port=4420 00:28:27.742 [2024-07-15 16:09:56.509544] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0980 is same with the state(5) to be set 00:28:27.742 [2024-07-15 16:09:56.509707] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0980 (9): Bad file descriptor 00:28:27.742 [2024-07-15 16:09:56.509871] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:27.742 [2024-07-15 16:09:56.509880] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:27.742 [2024-07-15 16:09:56.509886] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:27.742 [2024-07-15 16:09:56.512491] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:27.742 [2024-07-15 16:09:56.522005] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:27.742 [2024-07-15 16:09:56.522382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.742 [2024-07-15 16:09:56.522398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0980 with addr=10.0.0.2, port=4420 00:28:27.742 [2024-07-15 16:09:56.522405] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0980 is same with the state(5) to be set 00:28:27.742 [2024-07-15 16:09:56.522569] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0980 (9): Bad file descriptor 00:28:27.742 [2024-07-15 16:09:56.522732] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:27.742 [2024-07-15 16:09:56.522741] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:27.742 [2024-07-15 16:09:56.522747] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:27.742 [2024-07-15 16:09:56.525391] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:27.742 [2024-07-15 16:09:56.534847] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:27.742 [2024-07-15 16:09:56.535205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.742 [2024-07-15 16:09:56.535259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0980 with addr=10.0.0.2, port=4420 00:28:27.742 [2024-07-15 16:09:56.535283] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0980 is same with the state(5) to be set 00:28:27.742 [2024-07-15 16:09:56.535861] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0980 (9): Bad file descriptor 00:28:27.742 [2024-07-15 16:09:56.536107] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:27.742 [2024-07-15 16:09:56.536116] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:27.742 [2024-07-15 16:09:56.536122] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:27.742 [2024-07-15 16:09:56.538727] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:27.742 [2024-07-15 16:09:56.547740] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:27.742 [2024-07-15 16:09:56.548119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.742 [2024-07-15 16:09:56.548135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0980 with addr=10.0.0.2, port=4420 00:28:27.742 [2024-07-15 16:09:56.548142] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0980 is same with the state(5) to be set 00:28:27.742 [2024-07-15 16:09:56.548315] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0980 (9): Bad file descriptor 00:28:27.742 [2024-07-15 16:09:56.548479] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:27.742 [2024-07-15 16:09:56.548488] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:27.742 [2024-07-15 16:09:56.548494] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:27.742 [2024-07-15 16:09:56.551096] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:27.742 [2024-07-15 16:09:56.560825] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:27.742 [2024-07-15 16:09:56.561318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.742 [2024-07-15 16:09:56.561361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0980 with addr=10.0.0.2, port=4420 00:28:27.742 [2024-07-15 16:09:56.561384] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0980 is same with the state(5) to be set 00:28:27.742 [2024-07-15 16:09:56.561964] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0980 (9): Bad file descriptor 00:28:27.742 [2024-07-15 16:09:56.562420] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:27.742 [2024-07-15 16:09:56.562430] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:27.742 [2024-07-15 16:09:56.562436] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:27.742 [2024-07-15 16:09:56.565038] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:27.742 [2024-07-15 16:09:56.573651] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:27.742 [2024-07-15 16:09:56.574135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.742 [2024-07-15 16:09:56.574177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0980 with addr=10.0.0.2, port=4420 00:28:27.742 [2024-07-15 16:09:56.574199] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0980 is same with the state(5) to be set 00:28:27.742 [2024-07-15 16:09:56.574796] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0980 (9): Bad file descriptor 00:28:27.742 [2024-07-15 16:09:56.575316] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:27.742 [2024-07-15 16:09:56.575326] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:27.742 [2024-07-15 16:09:56.575333] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:27.742 [2024-07-15 16:09:56.578001] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:27.742 [2024-07-15 16:09:56.586474] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:27.742 [2024-07-15 16:09:56.586852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.742 [2024-07-15 16:09:56.586868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0980 with addr=10.0.0.2, port=4420 00:28:27.742 [2024-07-15 16:09:56.586875] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0980 is same with the state(5) to be set 00:28:27.742 [2024-07-15 16:09:56.587038] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0980 (9): Bad file descriptor 00:28:27.742 [2024-07-15 16:09:56.587202] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:27.742 [2024-07-15 16:09:56.587211] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:27.742 [2024-07-15 16:09:56.587221] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:27.742 [2024-07-15 16:09:56.589828] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:27.742 [2024-07-15 16:09:56.599361] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:27.742 [2024-07-15 16:09:56.599803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.742 [2024-07-15 16:09:56.599820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0980 with addr=10.0.0.2, port=4420 00:28:27.742 [2024-07-15 16:09:56.599828] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0980 is same with the state(5) to be set 00:28:27.742 [2024-07-15 16:09:56.599991] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0980 (9): Bad file descriptor 00:28:27.742 [2024-07-15 16:09:56.600154] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:27.742 [2024-07-15 16:09:56.600163] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:27.742 [2024-07-15 16:09:56.600169] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:27.742 [2024-07-15 16:09:56.602780] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:27.742 [2024-07-15 16:09:56.612260] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:27.742 [2024-07-15 16:09:56.612605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.742 [2024-07-15 16:09:56.612621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0980 with addr=10.0.0.2, port=4420 00:28:27.742 [2024-07-15 16:09:56.612628] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0980 is same with the state(5) to be set 00:28:27.742 [2024-07-15 16:09:56.612790] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0980 (9): Bad file descriptor 00:28:27.742 [2024-07-15 16:09:56.612953] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:27.742 [2024-07-15 16:09:56.612962] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:27.742 [2024-07-15 16:09:56.612968] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:27.742 [2024-07-15 16:09:56.615575] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:27.742 [2024-07-15 16:09:56.625103] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:27.742 [2024-07-15 16:09:56.625473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.742 [2024-07-15 16:09:56.625490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0980 with addr=10.0.0.2, port=4420 00:28:27.742 [2024-07-15 16:09:56.625497] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0980 is same with the state(5) to be set 00:28:27.742 [2024-07-15 16:09:56.625669] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0980 (9): Bad file descriptor 00:28:27.742 [2024-07-15 16:09:56.625844] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:27.742 [2024-07-15 16:09:56.625853] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:27.743 [2024-07-15 16:09:56.625861] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:27.743 [2024-07-15 16:09:56.628529] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:27.743 [2024-07-15 16:09:56.638142] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:27.743 [2024-07-15 16:09:56.638573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.743 [2024-07-15 16:09:56.638622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0980 with addr=10.0.0.2, port=4420 00:28:27.743 [2024-07-15 16:09:56.638646] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0980 is same with the state(5) to be set 00:28:27.743 [2024-07-15 16:09:56.639195] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0980 (9): Bad file descriptor 00:28:27.743 [2024-07-15 16:09:56.639366] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:27.743 [2024-07-15 16:09:56.639375] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:27.743 [2024-07-15 16:09:56.639381] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:27.743 [2024-07-15 16:09:56.641979] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:27.743 [2024-07-15 16:09:56.651029] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:27.743 [2024-07-15 16:09:56.651463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.743 [2024-07-15 16:09:56.651507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0980 with addr=10.0.0.2, port=4420 00:28:27.743 [2024-07-15 16:09:56.651529] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0980 is same with the state(5) to be set 00:28:27.743 [2024-07-15 16:09:56.652040] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0980 (9): Bad file descriptor 00:28:27.743 [2024-07-15 16:09:56.652300] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:27.743 [2024-07-15 16:09:56.652313] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:27.743 [2024-07-15 16:09:56.652322] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:27.743 [2024-07-15 16:09:56.656388] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:27.743 [2024-07-15 16:09:56.664460] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:27.743 [2024-07-15 16:09:56.664929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.743 [2024-07-15 16:09:56.664972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0980 with addr=10.0.0.2, port=4420 00:28:27.743 [2024-07-15 16:09:56.664994] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0980 is same with the state(5) to be set 00:28:27.743 [2024-07-15 16:09:56.665503] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0980 (9): Bad file descriptor 00:28:27.743 [2024-07-15 16:09:56.665689] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:27.743 [2024-07-15 16:09:56.665699] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:27.743 [2024-07-15 16:09:56.665705] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:27.743 [2024-07-15 16:09:56.668526] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:28.003 [2024-07-15 16:09:56.677449] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:28.003 [2024-07-15 16:09:56.677895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.003 [2024-07-15 16:09:56.677911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0980 with addr=10.0.0.2, port=4420 00:28:28.003 [2024-07-15 16:09:56.677918] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0980 is same with the state(5) to be set 00:28:28.003 [2024-07-15 16:09:56.678085] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0980 (9): Bad file descriptor 00:28:28.003 [2024-07-15 16:09:56.678256] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:28.003 [2024-07-15 16:09:56.678266] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:28.003 [2024-07-15 16:09:56.678272] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:28.003 [2024-07-15 16:09:56.681108] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:28.003 [2024-07-15 16:09:56.690400] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:28.003 [2024-07-15 16:09:56.690854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.003 [2024-07-15 16:09:56.690896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0980 with addr=10.0.0.2, port=4420 00:28:28.003 [2024-07-15 16:09:56.690917] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0980 is same with the state(5) to be set 00:28:28.003 [2024-07-15 16:09:56.691361] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0980 (9): Bad file descriptor 00:28:28.003 [2024-07-15 16:09:56.691527] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:28.003 [2024-07-15 16:09:56.691537] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:28.003 [2024-07-15 16:09:56.691543] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:28.003 [2024-07-15 16:09:56.694139] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:28.003 [2024-07-15 16:09:56.703334] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:28.003 [2024-07-15 16:09:56.703781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.003 [2024-07-15 16:09:56.703824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0980 with addr=10.0.0.2, port=4420 00:28:28.003 [2024-07-15 16:09:56.703845] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0980 is same with the state(5) to be set 00:28:28.003 [2024-07-15 16:09:56.704346] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0980 (9): Bad file descriptor 00:28:28.003 [2024-07-15 16:09:56.704511] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:28.003 [2024-07-15 16:09:56.704521] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:28.003 [2024-07-15 16:09:56.704527] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:28.003 [2024-07-15 16:09:56.707123] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:28.003 [2024-07-15 16:09:56.716173] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:28.003 [2024-07-15 16:09:56.716636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.003 [2024-07-15 16:09:56.716653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0980 with addr=10.0.0.2, port=4420 00:28:28.003 [2024-07-15 16:09:56.716660] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0980 is same with the state(5) to be set 00:28:28.003 [2024-07-15 16:09:56.716833] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0980 (9): Bad file descriptor 00:28:28.003 [2024-07-15 16:09:56.717007] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:28.003 [2024-07-15 16:09:56.717016] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:28.003 [2024-07-15 16:09:56.717026] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:28.003 [2024-07-15 16:09:56.719708] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:28.003 [2024-07-15 16:09:56.729093] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:28.003 [2024-07-15 16:09:56.729480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.003 [2024-07-15 16:09:56.729523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0980 with addr=10.0.0.2, port=4420 00:28:28.003 [2024-07-15 16:09:56.729546] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0980 is same with the state(5) to be set 00:28:28.003 [2024-07-15 16:09:56.730124] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0980 (9): Bad file descriptor 00:28:28.003 [2024-07-15 16:09:56.730694] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:28.003 [2024-07-15 16:09:56.730704] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:28.003 [2024-07-15 16:09:56.730710] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:28.003 [2024-07-15 16:09:56.733311] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:28.003 [2024-07-15 16:09:56.741889] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:28.003 [2024-07-15 16:09:56.742331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.003 [2024-07-15 16:09:56.742348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0980 with addr=10.0.0.2, port=4420 00:28:28.003 [2024-07-15 16:09:56.742355] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0980 is same with the state(5) to be set 00:28:28.003 [2024-07-15 16:09:56.742519] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0980 (9): Bad file descriptor 00:28:28.004 [2024-07-15 16:09:56.742682] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:28.004 [2024-07-15 16:09:56.742691] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:28.004 [2024-07-15 16:09:56.742697] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:28.004 [2024-07-15 16:09:56.745298] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:28.004 [2024-07-15 16:09:56.754804] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:28.004 [2024-07-15 16:09:56.755240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.004 [2024-07-15 16:09:56.755284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0980 with addr=10.0.0.2, port=4420 00:28:28.004 [2024-07-15 16:09:56.755307] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0980 is same with the state(5) to be set 00:28:28.004 [2024-07-15 16:09:56.755699] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0980 (9): Bad file descriptor 00:28:28.004 [2024-07-15 16:09:56.755864] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:28.004 [2024-07-15 16:09:56.755873] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:28.004 [2024-07-15 16:09:56.755879] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:28.004 [2024-07-15 16:09:56.758513] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:28.004 [2024-07-15 16:09:56.767672] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:28.004 [2024-07-15 16:09:56.768088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.004 [2024-07-15 16:09:56.768107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0980 with addr=10.0.0.2, port=4420 00:28:28.004 [2024-07-15 16:09:56.768115] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0980 is same with the state(5) to be set 00:28:28.004 [2024-07-15 16:09:56.768342] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0980 (9): Bad file descriptor 00:28:28.004 [2024-07-15 16:09:56.768523] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:28.004 [2024-07-15 16:09:56.768533] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:28.004 [2024-07-15 16:09:56.768540] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:28.004 [2024-07-15 16:09:56.771203] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:28.004 [2024-07-15 16:09:56.780494] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:28.004 [2024-07-15 16:09:56.780923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.004 [2024-07-15 16:09:56.780966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0980 with addr=10.0.0.2, port=4420 00:28:28.004 [2024-07-15 16:09:56.780988] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0980 is same with the state(5) to be set 00:28:28.004 [2024-07-15 16:09:56.781552] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0980 (9): Bad file descriptor 00:28:28.004 [2024-07-15 16:09:56.781717] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:28.004 [2024-07-15 16:09:56.781724] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:28.004 [2024-07-15 16:09:56.781730] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:28.004 [2024-07-15 16:09:56.784331] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:28.004 [2024-07-15 16:09:56.793378] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:28.004 [2024-07-15 16:09:56.793838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.004 [2024-07-15 16:09:56.793880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0980 with addr=10.0.0.2, port=4420 00:28:28.004 [2024-07-15 16:09:56.793900] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0980 is same with the state(5) to be set 00:28:28.004 [2024-07-15 16:09:56.794497] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0980 (9): Bad file descriptor 00:28:28.004 [2024-07-15 16:09:56.794965] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:28.004 [2024-07-15 16:09:56.794975] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:28.004 [2024-07-15 16:09:56.794981] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:28.004 [2024-07-15 16:09:56.797579] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:28.004 [2024-07-15 16:09:56.806317] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:28.004 [2024-07-15 16:09:56.806777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.004 [2024-07-15 16:09:56.806820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0980 with addr=10.0.0.2, port=4420 00:28:28.004 [2024-07-15 16:09:56.806842] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0980 is same with the state(5) to be set 00:28:28.004 [2024-07-15 16:09:56.807435] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0980 (9): Bad file descriptor 00:28:28.004 [2024-07-15 16:09:56.807918] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:28.004 [2024-07-15 16:09:56.807928] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:28.004 [2024-07-15 16:09:56.807934] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:28.004 [2024-07-15 16:09:56.810534] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:28.004 [2024-07-15 16:09:56.819117] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:28.004 [2024-07-15 16:09:56.819564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.004 [2024-07-15 16:09:56.819580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0980 with addr=10.0.0.2, port=4420 00:28:28.004 [2024-07-15 16:09:56.819587] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0980 is same with the state(5) to be set 00:28:28.004 [2024-07-15 16:09:56.819750] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0980 (9): Bad file descriptor 00:28:28.004 [2024-07-15 16:09:56.819914] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:28.004 [2024-07-15 16:09:56.819923] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:28.004 [2024-07-15 16:09:56.819929] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:28.004 [2024-07-15 16:09:56.822535] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:28.004 [2024-07-15 16:09:56.831970] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:28.004 [2024-07-15 16:09:56.832415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.004 [2024-07-15 16:09:56.832431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0980 with addr=10.0.0.2, port=4420 00:28:28.004 [2024-07-15 16:09:56.832438] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0980 is same with the state(5) to be set 00:28:28.004 [2024-07-15 16:09:56.832602] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0980 (9): Bad file descriptor 00:28:28.004 [2024-07-15 16:09:56.832766] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:28.004 [2024-07-15 16:09:56.832774] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:28.004 [2024-07-15 16:09:56.832781] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:28.004 [2024-07-15 16:09:56.835383] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:28.004 [2024-07-15 16:09:56.844781] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:28.004 [2024-07-15 16:09:56.845197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.004 [2024-07-15 16:09:56.845256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0980 with addr=10.0.0.2, port=4420 00:28:28.004 [2024-07-15 16:09:56.845280] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0980 is same with the state(5) to be set 00:28:28.004 [2024-07-15 16:09:56.845861] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0980 (9): Bad file descriptor 00:28:28.004 [2024-07-15 16:09:56.846050] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:28.004 [2024-07-15 16:09:56.846058] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:28.004 [2024-07-15 16:09:56.846064] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:28.004 [2024-07-15 16:09:56.848682] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:28.004 [2024-07-15 16:09:56.857668] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:28.004 [2024-07-15 16:09:56.858116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.004 [2024-07-15 16:09:56.858158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0980 with addr=10.0.0.2, port=4420 00:28:28.004 [2024-07-15 16:09:56.858179] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0980 is same with the state(5) to be set 00:28:28.004 [2024-07-15 16:09:56.858660] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0980 (9): Bad file descriptor 00:28:28.004 [2024-07-15 16:09:56.858834] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:28.004 [2024-07-15 16:09:56.858844] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:28.004 [2024-07-15 16:09:56.858850] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:28.004 [2024-07-15 16:09:56.861551] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:28.004 [2024-07-15 16:09:56.870593] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:28.004 [2024-07-15 16:09:56.871019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.004 [2024-07-15 16:09:56.871061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0980 with addr=10.0.0.2, port=4420 00:28:28.004 [2024-07-15 16:09:56.871083] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0980 is same with the state(5) to be set 00:28:28.004 [2024-07-15 16:09:56.871677] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0980 (9): Bad file descriptor 00:28:28.004 [2024-07-15 16:09:56.872110] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:28.004 [2024-07-15 16:09:56.872119] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:28.004 [2024-07-15 16:09:56.872125] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:28.004 [2024-07-15 16:09:56.874822] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:28.004 [2024-07-15 16:09:56.883524] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:28.004 [2024-07-15 16:09:56.883962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.004 [2024-07-15 16:09:56.883979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0980 with addr=10.0.0.2, port=4420 00:28:28.004 [2024-07-15 16:09:56.883985] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0980 is same with the state(5) to be set 00:28:28.004 [2024-07-15 16:09:56.884149] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0980 (9): Bad file descriptor 00:28:28.005 [2024-07-15 16:09:56.884318] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:28.005 [2024-07-15 16:09:56.884328] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:28.005 [2024-07-15 16:09:56.884335] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:28.005 [2024-07-15 16:09:56.886932] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:28.005 [2024-07-15 16:09:56.896463] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:28.005 [2024-07-15 16:09:56.896767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.005 [2024-07-15 16:09:56.896783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0980 with addr=10.0.0.2, port=4420 00:28:28.005 [2024-07-15 16:09:56.896793] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0980 is same with the state(5) to be set 00:28:28.005 [2024-07-15 16:09:56.896965] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0980 (9): Bad file descriptor 00:28:28.005 [2024-07-15 16:09:56.897138] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:28.005 [2024-07-15 16:09:56.897148] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:28.005 [2024-07-15 16:09:56.897154] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:28.005 [2024-07-15 16:09:56.899791] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:28.005 [2024-07-15 16:09:56.909293] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:28.005 [2024-07-15 16:09:56.909690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.005 [2024-07-15 16:09:56.909731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0980 with addr=10.0.0.2, port=4420 00:28:28.005 [2024-07-15 16:09:56.909754] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0980 is same with the state(5) to be set 00:28:28.005 [2024-07-15 16:09:56.910257] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0980 (9): Bad file descriptor 00:28:28.005 [2024-07-15 16:09:56.910423] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:28.005 [2024-07-15 16:09:56.910433] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:28.005 [2024-07-15 16:09:56.910439] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:28.005 [2024-07-15 16:09:56.913269] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
[... 32 further reconnect/reset cycles (16:09:56.922 through 16:09:57.326) omitted: each repeats the same pattern of connect() failed errno = 111, failed flush of tqpair=0x14a0980 (9): Bad file descriptor, controller reinitialization failed, Resetting controller failed, differing only in timestamps ...]
00:28:28.527 [2024-07-15 16:09:57.335231] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:28.527 [2024-07-15 16:09:57.335687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.527 [2024-07-15 16:09:57.335730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0980 with addr=10.0.0.2, port=4420 00:28:28.527 [2024-07-15 16:09:57.335752] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0980 is same with the state(5) to be set 00:28:28.527 [2024-07-15 16:09:57.336159] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0980 (9): Bad file descriptor 00:28:28.527 [2024-07-15 16:09:57.336330] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:28.527 [2024-07-15 16:09:57.336339] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:28.527 [2024-07-15 16:09:57.336346] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:28.527 [2024-07-15 16:09:57.338938] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:28.527 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 3918435 Killed "${NVMF_APP[@]}" "$@" 00:28:28.527 16:09:57 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init 00:28:28.528 16:09:57 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:28:28.528 16:09:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:28:28.528 16:09:57 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@722 -- # xtrace_disable 00:28:28.528 16:09:57 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:28.528 [2024-07-15 16:09:57.348358] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:28.528 [2024-07-15 16:09:57.348743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.528 [2024-07-15 16:09:57.348760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0980 with addr=10.0.0.2, port=4420 00:28:28.528 [2024-07-15 16:09:57.348768] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0980 is same with the state(5) to be set 00:28:28.528 [2024-07-15 16:09:57.348944] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0980 (9): Bad file descriptor 00:28:28.528 16:09:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=3919843 00:28:28.528 [2024-07-15 16:09:57.349122] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:28.528 [2024-07-15 16:09:57.349139] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:28.528 [2024-07-15 16:09:57.349145] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:28:28.528 16:09:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 3919843 00:28:28.528 16:09:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:28:28.528 16:09:57 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@829 -- # '[' -z 3919843 ']' 00:28:28.528 16:09:57 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:28.528 16:09:57 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:28.528 16:09:57 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:28.528 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:28.528 16:09:57 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:28.528 16:09:57 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:28.528 [2024-07-15 16:09:57.351977] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:28.528 [2024-07-15 16:09:57.361520] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:28.528 [2024-07-15 16:09:57.361888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.528 [2024-07-15 16:09:57.361904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0980 with addr=10.0.0.2, port=4420 00:28:28.528 [2024-07-15 16:09:57.361911] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0980 is same with the state(5) to be set 00:28:28.528 [2024-07-15 16:09:57.362091] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0980 (9): Bad file descriptor 00:28:28.528 [2024-07-15 16:09:57.362274] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:28.528 [2024-07-15 16:09:57.362283] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:28.528 [2024-07-15 16:09:57.362290] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:28.528 [2024-07-15 16:09:57.365122] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
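The xtrace lines above capture the recovery path: the old target (pid 3918435) was killed at bdevperf.sh line 35, and tgt_init relaunches nvmf_tgt (new pid 3919843) inside the cvl_0_0_ns_spdk namespace, then blocks in waitforlisten until the app's RPC socket answers. A condensed sketch of that sequence; the polling-loop body is an assumption (the real helpers live in SPDK's nvmf/common.sh and autotest_common.sh and do more bookkeeping):

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

tgt_init() {
    nvmfappstart -m 0xE
}

nvmfappstart() {
    # relaunch the target in the test namespace, as in the trace above
    ip netns exec cvl_0_0_ns_spdk "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF "$@" &
    nvmfpid=$!
    waitforlisten "$nvmfpid"
}

waitforlisten() {
    local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock}
    echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
    # assumed polling loop: done once the app's RPC server responds
    while ! "$SPDK/scripts/rpc.py" -s "$rpc_addr" -t 1 rpc_get_methods &> /dev/null; do
        kill -0 "$pid" 2> /dev/null || return 1   # target died before listening
        sleep 0.5
    done
}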
00:28:28.528 [2024-07-15 16:09:57.374660] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:28.528 [2024-07-15 16:09:57.375038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.528 [2024-07-15 16:09:57.375055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0980 with addr=10.0.0.2, port=4420 00:28:28.528 [2024-07-15 16:09:57.375062] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0980 is same with the state(5) to be set 00:28:28.528 [2024-07-15 16:09:57.375245] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0980 (9): Bad file descriptor 00:28:28.528 [2024-07-15 16:09:57.375426] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:28.528 [2024-07-15 16:09:57.375436] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:28.528 [2024-07-15 16:09:57.375442] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:28.528 [2024-07-15 16:09:57.378275] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:28.528 [2024-07-15 16:09:57.387752] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:28.528 [2024-07-15 16:09:57.388186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.528 [2024-07-15 16:09:57.388203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0980 with addr=10.0.0.2, port=4420 00:28:28.528 [2024-07-15 16:09:57.388211] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0980 is same with the state(5) to be set 00:28:28.528 [2024-07-15 16:09:57.388387] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0980 (9): Bad file descriptor 00:28:28.528 [2024-07-15 16:09:57.388560] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:28.528 [2024-07-15 16:09:57.388570] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:28.528 [2024-07-15 16:09:57.388576] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:28.528 [2024-07-15 16:09:57.391329] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:28.528 [2024-07-15 16:09:57.399258] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 
00:28:28.528 [2024-07-15 16:09:57.399298] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:28.528 [2024-07-15 16:09:57.400813] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:28.528 [2024-07-15 16:09:57.401248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.528 [2024-07-15 16:09:57.401265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0980 with addr=10.0.0.2, port=4420 00:28:28.528 [2024-07-15 16:09:57.401272] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0980 is same with the state(5) to be set 00:28:28.528 [2024-07-15 16:09:57.401446] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0980 (9): Bad file descriptor 00:28:28.528 [2024-07-15 16:09:57.401621] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:28.528 [2024-07-15 16:09:57.401630] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:28.528 [2024-07-15 16:09:57.401637] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:28.528 [2024-07-15 16:09:57.404387] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:28.528 [2024-07-15 16:09:57.414006] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:28.528 [2024-07-15 16:09:57.414392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.528 [2024-07-15 16:09:57.414410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0980 with addr=10.0.0.2, port=4420 00:28:28.528 [2024-07-15 16:09:57.414417] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0980 is same with the state(5) to be set 00:28:28.528 [2024-07-15 16:09:57.414590] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0980 (9): Bad file descriptor 00:28:28.528 [2024-07-15 16:09:57.414762] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:28.528 [2024-07-15 16:09:57.414771] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:28.528 [2024-07-15 16:09:57.414778] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:28.528 [2024-07-15 16:09:57.417594] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
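The bracketed dump above shows the DPDK EAL argv that nvmf_tgt built from its own flags. A sketch of the visible mapping, reconstructed from this log alone (the actual translation happens inside SPDK's env_dpdk layer, so take the correspondence as an informed reading rather than a spec):

# nvmf_tgt -m 0xE maps to the EAL core mask, and -i 0 (shared-memory id)
# shows up as the file prefix spdk0; the rest are SPDK defaults.
app_core_mask=0xE   # from: nvmf_tgt ... -m 0xE
app_shm_id=0        # from: nvmf_tgt ... -i 0
eal_args=(
    -c "$app_core_mask"
    --no-telemetry
    --base-virtaddr=0x200000000000
    --match-allocations
    --file-prefix="spdk${app_shm_id}"
    --proc-type=auto
)
printf 'DPDK EAL parameters: %s\n' "${eal_args[*]}"   # --log-level args omitted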
00:28:28.528 EAL: No free 2048 kB hugepages reported on node 1 00:28:28.528 [2024-07-15 16:09:57.427106] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:28.528 [2024-07-15 16:09:57.427536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.528 [2024-07-15 16:09:57.427554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0980 with addr=10.0.0.2, port=4420 00:28:28.528 [2024-07-15 16:09:57.427561] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0980 is same with the state(5) to be set 00:28:28.528 [2024-07-15 16:09:57.427738] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0980 (9): Bad file descriptor 00:28:28.528 [2024-07-15 16:09:57.427917] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:28.528 [2024-07-15 16:09:57.427927] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:28.528 [2024-07-15 16:09:57.427933] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:28.528 [2024-07-15 16:09:57.430779] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:28.528 [2024-07-15 16:09:57.440154] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:28.528 [2024-07-15 16:09:57.440578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.528 [2024-07-15 16:09:57.440595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0980 with addr=10.0.0.2, port=4420 00:28:28.528 [2024-07-15 16:09:57.440603] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0980 is same with the state(5) to be set 00:28:28.528 [2024-07-15 16:09:57.440785] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0980 (9): Bad file descriptor 00:28:28.528 [2024-07-15 16:09:57.440965] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:28.528 [2024-07-15 16:09:57.440975] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:28.528 [2024-07-15 16:09:57.440981] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:28.528 [2024-07-15 16:09:57.443816] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
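The hugepage notice is informational here: EAL simply found no 2 MB pages reserved on NUMA node 1, and the target runs from node 0's pool. If a run actually failed for lack of hugepages, the per-node reservation could be inspected and grown along these lines (values are illustrative; needs root):

# show the current 2 MB reservation per NUMA node
grep -H . /sys/devices/system/node/node*/hugepages/hugepages-2048kB/nr_hugepages

# reserve 1024 x 2 MB pages on node 1
echo 1024 | sudo tee /sys/devices/system/node/node1/hugepages/hugepages-2048kB/nr_hugepages

# or let SPDK's setup script size the pool itself (HUGEMEM is in MB)
sudo HUGEMEM=4096 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh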
00:28:28.528 [2024-07-15 16:09:57.453216] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:28.528 [2024-07-15 16:09:57.453675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.528 [2024-07-15 16:09:57.453692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0980 with addr=10.0.0.2, port=4420 00:28:28.528 [2024-07-15 16:09:57.453699] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0980 is same with the state(5) to be set 00:28:28.528 [2024-07-15 16:09:57.453873] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0980 (9): Bad file descriptor 00:28:28.529 [2024-07-15 16:09:57.454048] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:28.529 [2024-07-15 16:09:57.454057] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:28.529 [2024-07-15 16:09:57.454063] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:28.529 [2024-07-15 16:09:57.456882] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:28.529 [2024-07-15 16:09:57.457205] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:28:28.788 [2024-07-15 16:09:57.466403] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:28.788 [2024-07-15 16:09:57.466796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.788 [2024-07-15 16:09:57.466816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0980 with addr=10.0.0.2, port=4420 00:28:28.788 [2024-07-15 16:09:57.466824] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0980 is same with the state(5) to be set 00:28:28.788 [2024-07-15 16:09:57.467004] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0980 (9): Bad file descriptor 00:28:28.788 [2024-07-15 16:09:57.467184] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:28.788 [2024-07-15 16:09:57.467194] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:28.788 [2024-07-15 16:09:57.467201] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:28.788 [2024-07-15 16:09:57.470007] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
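"Total cores available: 3" is just the popcount of the -m 0xE mask passed above: 0xE is binary 1110, selecting cores 1, 2 and 3. A tiny loop to check the arithmetic:

mask=0xE count=0
for ((i = 0; i < 64; i++)); do
    (( (mask >> i) & 1 )) && count=$((count + 1))
done
echo "cores selected by mask 0xE: $count"   # -> 3 (cores 1, 2 and 3)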
00:28:28.788 [2024-07-15 16:09:57.479511] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:28.788 [2024-07-15 16:09:57.479857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.788 [2024-07-15 16:09:57.479874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0980 with addr=10.0.0.2, port=4420
00:28:28.788 [2024-07-15 16:09:57.479882] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0980 is same with the state(5) to be set
00:28:28.788 [2024-07-15 16:09:57.480056] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0980 (9): Bad file descriptor
00:28:28.788 [2024-07-15 16:09:57.480235] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:28.788 [2024-07-15 16:09:57.480252] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:28.788 [2024-07-15 16:09:57.480259] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:28.788 [2024-07-15 16:09:57.483006] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:28.788 [2024-07-15 16:09:57.492596] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:28.788 [2024-07-15 16:09:57.493052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.788 [2024-07-15 16:09:57.493070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0980 with addr=10.0.0.2, port=4420
00:28:28.788 [2024-07-15 16:09:57.493078] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0980 is same with the state(5) to be set
00:28:28.788 [2024-07-15 16:09:57.493255] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0980 (9): Bad file descriptor
00:28:28.788 [2024-07-15 16:09:57.493430] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:28.788 [2024-07-15 16:09:57.493440] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:28.788 [2024-07-15 16:09:57.493447] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:28.788 [2024-07-15 16:09:57.496195] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:28.788 [2024-07-15 16:09:57.505680] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:28.788 [2024-07-15 16:09:57.506164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.788 [2024-07-15 16:09:57.506184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0980 with addr=10.0.0.2, port=4420
00:28:28.789 [2024-07-15 16:09:57.506192] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0980 is same with the state(5) to be set
00:28:28.789 [2024-07-15 16:09:57.506371] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0980 (9): Bad file descriptor
00:28:28.789 [2024-07-15 16:09:57.506546] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:28.789 [2024-07-15 16:09:57.506555] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:28.789 [2024-07-15 16:09:57.506562] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:28.789 [2024-07-15 16:09:57.509316] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:28.789 [2024-07-15 16:09:57.518808] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:28.789 [2024-07-15 16:09:57.519264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.789 [2024-07-15 16:09:57.519281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0980 with addr=10.0.0.2, port=4420
00:28:28.789 [2024-07-15 16:09:57.519289] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0980 is same with the state(5) to be set
00:28:28.789 [2024-07-15 16:09:57.519461] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0980 (9): Bad file descriptor
00:28:28.789 [2024-07-15 16:09:57.519634] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:28.789 [2024-07-15 16:09:57.519643] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:28.789 [2024-07-15 16:09:57.519650] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:28.789 [2024-07-15 16:09:57.522486] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:28.789 [2024-07-15 16:09:57.531857] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:28.789 [2024-07-15 16:09:57.532234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.789 [2024-07-15 16:09:57.532251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0980 with addr=10.0.0.2, port=4420
00:28:28.789 [2024-07-15 16:09:57.532258] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0980 is same with the state(5) to be set
00:28:28.789 [2024-07-15 16:09:57.532435] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0980 (9): Bad file descriptor
00:28:28.789 [2024-07-15 16:09:57.532613] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:28.789 [2024-07-15 16:09:57.532623] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:28.789 [2024-07-15 16:09:57.532629] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:28.789 [2024-07-15 16:09:57.535462] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:28.789 [2024-07-15 16:09:57.538439] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:28:28.789 [2024-07-15 16:09:57.538465] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:28:28.789 [2024-07-15 16:09:57.538473] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:28:28.789 [2024-07-15 16:09:57.538479] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running.
00:28:28.789 [2024-07-15 16:09:57.538484] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:28:28.789 [2024-07-15 16:09:57.538521] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2
00:28:28.789 [2024-07-15 16:09:57.538609] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3
00:28:28.789 [2024-07-15 16:09:57.538610] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:28:28.789 [2024-07-15 16:09:57.545012] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:28.789 [2024-07-15 16:09:57.545491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.789 [2024-07-15 16:09:57.545510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0980 with addr=10.0.0.2, port=4420
00:28:28.789 [2024-07-15 16:09:57.545518] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0980 is same with the state(5) to be set
00:28:28.789 [2024-07-15 16:09:57.545697] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0980 (9): Bad file descriptor
00:28:28.789 [2024-07-15 16:09:57.545877] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:28.789 [2024-07-15 16:09:57.545886] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:28.789 [2024-07-15 16:09:57.545893] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:28.789 [2024-07-15 16:09:57.548729] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:28.789 [2024-07-15 16:09:57.558107] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:28.789 [2024-07-15 16:09:57.558596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.789 [2024-07-15 16:09:57.558617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0980 with addr=10.0.0.2, port=4420
00:28:28.789 [2024-07-15 16:09:57.558625] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0980 is same with the state(5) to be set
00:28:28.789 [2024-07-15 16:09:57.558803] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0980 (9): Bad file descriptor
00:28:28.789 [2024-07-15 16:09:57.558984] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:28.789 [2024-07-15 16:09:57.558999] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:28.789 [2024-07-15 16:09:57.559006] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:28.789 [2024-07-15 16:09:57.561846] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:28.789 [2024-07-15 16:09:57.571219] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:28.789 [2024-07-15 16:09:57.571629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.789 [2024-07-15 16:09:57.571649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0980 with addr=10.0.0.2, port=4420
00:28:28.789 [2024-07-15 16:09:57.571657] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0980 is same with the state(5) to be set
00:28:28.789 [2024-07-15 16:09:57.571837] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0980 (9): Bad file descriptor
00:28:28.789 [2024-07-15 16:09:57.572016] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:28.789 [2024-07-15 16:09:57.572026] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:28.789 [2024-07-15 16:09:57.572033] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:28.789 [2024-07-15 16:09:57.574868] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:28.789 [2024-07-15 16:09:57.584418] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:28.789 [2024-07-15 16:09:57.584829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.789 [2024-07-15 16:09:57.584848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0980 with addr=10.0.0.2, port=4420
00:28:28.789 [2024-07-15 16:09:57.584856] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0980 is same with the state(5) to be set
00:28:28.789 [2024-07-15 16:09:57.585035] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0980 (9): Bad file descriptor
00:28:28.789 [2024-07-15 16:09:57.585215] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:28.789 [2024-07-15 16:09:57.585230] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:28.789 [2024-07-15 16:09:57.585238] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:28.789 [2024-07-15 16:09:57.588069] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:28.789 [2024-07-15 16:09:57.597608] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:28.789 [2024-07-15 16:09:57.597995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.789 [2024-07-15 16:09:57.598014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0980 with addr=10.0.0.2, port=4420
00:28:28.789 [2024-07-15 16:09:57.598022] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0980 is same with the state(5) to be set
00:28:28.789 [2024-07-15 16:09:57.598200] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0980 (9): Bad file descriptor
00:28:28.789 [2024-07-15 16:09:57.598386] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:28.789 [2024-07-15 16:09:57.598397] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:28.789 [2024-07-15 16:09:57.598404] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:28.789 [2024-07-15 16:09:57.601236] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:28.789 [2024-07-15 16:09:57.610783] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:28.789 [2024-07-15 16:09:57.611254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.789 [2024-07-15 16:09:57.611271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0980 with addr=10.0.0.2, port=4420
00:28:28.789 [2024-07-15 16:09:57.611279] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0980 is same with the state(5) to be set
00:28:28.789 [2024-07-15 16:09:57.611457] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0980 (9): Bad file descriptor
00:28:28.789 [2024-07-15 16:09:57.611635] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:28.789 [2024-07-15 16:09:57.611645] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:28.789 [2024-07-15 16:09:57.611651] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:28.789 [2024-07-15 16:09:57.614488] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:28.789 [2024-07-15 16:09:57.623855] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:28.789 [2024-07-15 16:09:57.624296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.789 [2024-07-15 16:09:57.624314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0980 with addr=10.0.0.2, port=4420
00:28:28.789 [2024-07-15 16:09:57.624321] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0980 is same with the state(5) to be set
00:28:28.789 [2024-07-15 16:09:57.624499] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0980 (9): Bad file descriptor
00:28:28.789 [2024-07-15 16:09:57.624678] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:28.789 [2024-07-15 16:09:57.624688] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:28.789 [2024-07-15 16:09:57.624694] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:28.789 [2024-07-15 16:09:57.627528] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:28.789 [2024-07-15 16:09:57.637108] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:28.789 [2024-07-15 16:09:57.637575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.789 [2024-07-15 16:09:57.637593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0980 with addr=10.0.0.2, port=4420
00:28:28.789 [2024-07-15 16:09:57.637601] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0980 is same with the state(5) to be set
00:28:28.789 [2024-07-15 16:09:57.637779] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0980 (9): Bad file descriptor
00:28:28.789 [2024-07-15 16:09:57.637959] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:28.789 [2024-07-15 16:09:57.637969] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:28.789 [2024-07-15 16:09:57.637975] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:28.789 [2024-07-15 16:09:57.640811] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:28.789 [2024-07-15 16:09:57.650181] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:28.789 [2024-07-15 16:09:57.650506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.789 [2024-07-15 16:09:57.650524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0980 with addr=10.0.0.2, port=4420
00:28:28.789 [2024-07-15 16:09:57.650531] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0980 is same with the state(5) to be set
00:28:28.789 [2024-07-15 16:09:57.650713] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0980 (9): Bad file descriptor
00:28:28.789 [2024-07-15 16:09:57.650893] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:28.789 [2024-07-15 16:09:57.650902] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:28.789 [2024-07-15 16:09:57.650909] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:28.789 [2024-07-15 16:09:57.653750] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:28.789 [2024-07-15 16:09:57.663288] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:28.789 [2024-07-15 16:09:57.663752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.789 [2024-07-15 16:09:57.663769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0980 with addr=10.0.0.2, port=4420
00:28:28.789 [2024-07-15 16:09:57.663777] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0980 is same with the state(5) to be set
00:28:28.789 [2024-07-15 16:09:57.663955] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0980 (9): Bad file descriptor
00:28:28.789 [2024-07-15 16:09:57.664132] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:28.789 [2024-07-15 16:09:57.664143] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:28.789 [2024-07-15 16:09:57.664151] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:28.789 [2024-07-15 16:09:57.666986] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:28.789 [2024-07-15 16:09:57.676366] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:28.789 [2024-07-15 16:09:57.676758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.789 [2024-07-15 16:09:57.676775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0980 with addr=10.0.0.2, port=4420
00:28:28.789 [2024-07-15 16:09:57.676783] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0980 is same with the state(5) to be set
00:28:28.789 [2024-07-15 16:09:57.676960] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0980 (9): Bad file descriptor
00:28:28.789 [2024-07-15 16:09:57.677138] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:28.789 [2024-07-15 16:09:57.677148] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:28.789 [2024-07-15 16:09:57.677155] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:28.789 [2024-07-15 16:09:57.679990] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:28.789 [2024-07-15 16:09:57.689551] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:28.789 [2024-07-15 16:09:57.689880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.789 [2024-07-15 16:09:57.689897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0980 with addr=10.0.0.2, port=4420
00:28:28.789 [2024-07-15 16:09:57.689905] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0980 is same with the state(5) to be set
00:28:28.790 [2024-07-15 16:09:57.690082] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0980 (9): Bad file descriptor
00:28:28.790 [2024-07-15 16:09:57.690265] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:28.790 [2024-07-15 16:09:57.690275] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:28.790 [2024-07-15 16:09:57.690285] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:28.790 [2024-07-15 16:09:57.693123] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:28.790 [2024-07-15 16:09:57.702654] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:28.790 [2024-07-15 16:09:57.703066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.790 [2024-07-15 16:09:57.703083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0980 with addr=10.0.0.2, port=4420
00:28:28.790 [2024-07-15 16:09:57.703090] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0980 is same with the state(5) to be set
00:28:28.790 [2024-07-15 16:09:57.703272] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0980 (9): Bad file descriptor
00:28:28.790 [2024-07-15 16:09:57.703451] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:28.790 [2024-07-15 16:09:57.703461] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:28.790 [2024-07-15 16:09:57.703467] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:28.790 [2024-07-15 16:09:57.706302] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:28.790 [2024-07-15 16:09:57.715839] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:28.790 [2024-07-15 16:09:57.716202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.790 [2024-07-15 16:09:57.716219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0980 with addr=10.0.0.2, port=4420
00:28:28.790 [2024-07-15 16:09:57.716232] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0980 is same with the state(5) to be set
00:28:28.790 [2024-07-15 16:09:57.716409] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0980 (9): Bad file descriptor
00:28:28.790 [2024-07-15 16:09:57.716588] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:28.790 [2024-07-15 16:09:57.716597] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:28.790 [2024-07-15 16:09:57.716604] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:28.790 [2024-07-15 16:09:57.719439] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:29.049 [2024-07-15 16:09:57.728980] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:29.049 [2024-07-15 16:09:57.729302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:29.049 [2024-07-15 16:09:57.729319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0980 with addr=10.0.0.2, port=4420
00:28:29.049 [2024-07-15 16:09:57.729326] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0980 is same with the state(5) to be set
00:28:29.049 [2024-07-15 16:09:57.729504] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0980 (9): Bad file descriptor
00:28:29.049 [2024-07-15 16:09:57.729682] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:29.049 [2024-07-15 16:09:57.729692] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:29.049 [2024-07-15 16:09:57.729698] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:29.049 [2024-07-15 16:09:57.732533] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:29.049 [2024-07-15 16:09:57.742079] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:29.049 [2024-07-15 16:09:57.742408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:29.049 [2024-07-15 16:09:57.742428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0980 with addr=10.0.0.2, port=4420
00:28:29.049 [2024-07-15 16:09:57.742436] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0980 is same with the state(5) to be set
00:28:29.049 [2024-07-15 16:09:57.742613] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0980 (9): Bad file descriptor
00:28:29.049 [2024-07-15 16:09:57.742790] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:29.049 [2024-07-15 16:09:57.742800] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:29.049 [2024-07-15 16:09:57.742806] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:29.049 [2024-07-15 16:09:57.745639] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:29.049 [2024-07-15 16:09:57.755172] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:29.049 [2024-07-15 16:09:57.755620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:29.049 [2024-07-15 16:09:57.755638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0980 with addr=10.0.0.2, port=4420
00:28:29.049 [2024-07-15 16:09:57.755646] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0980 is same with the state(5) to be set
00:28:29.049 [2024-07-15 16:09:57.755824] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0980 (9): Bad file descriptor
00:28:29.049 [2024-07-15 16:09:57.756003] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:29.049 [2024-07-15 16:09:57.756013] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:29.049 [2024-07-15 16:09:57.756021] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:29.049 [2024-07-15 16:09:57.758860] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:29.049 [2024-07-15 16:09:57.768234] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:29.049 [2024-07-15 16:09:57.768600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:29.050 [2024-07-15 16:09:57.768618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0980 with addr=10.0.0.2, port=4420
00:28:29.050 [2024-07-15 16:09:57.768625] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0980 is same with the state(5) to be set
00:28:29.050 [2024-07-15 16:09:57.768803] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0980 (9): Bad file descriptor
00:28:29.050 [2024-07-15 16:09:57.768981] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:29.050 [2024-07-15 16:09:57.768990] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:29.050 [2024-07-15 16:09:57.768996] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:29.050 [2024-07-15 16:09:57.771836] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:29.050 [2024-07-15 16:09:57.781376] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:29.050 [2024-07-15 16:09:57.781733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:29.050 [2024-07-15 16:09:57.781750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0980 with addr=10.0.0.2, port=4420
00:28:29.050 [2024-07-15 16:09:57.781757] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0980 is same with the state(5) to be set
00:28:29.050 [2024-07-15 16:09:57.781935] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0980 (9): Bad file descriptor
00:28:29.050 [2024-07-15 16:09:57.782117] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:29.050 [2024-07-15 16:09:57.782126] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:29.050 [2024-07-15 16:09:57.782133] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:29.050 [2024-07-15 16:09:57.784979] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:29.050 [2024-07-15 16:09:57.794702] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:29.050 [2024-07-15 16:09:57.795027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:29.050 [2024-07-15 16:09:57.795044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0980 with addr=10.0.0.2, port=4420
00:28:29.050 [2024-07-15 16:09:57.795051] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0980 is same with the state(5) to be set
00:28:29.050 [2024-07-15 16:09:57.795234] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0980 (9): Bad file descriptor
00:28:29.050 [2024-07-15 16:09:57.795414] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:29.050 [2024-07-15 16:09:57.795423] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:29.050 [2024-07-15 16:09:57.795430] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:29.050 [2024-07-15 16:09:57.798267] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:29.050 [2024-07-15 16:09:57.807806] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:29.050 [2024-07-15 16:09:57.808174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:29.050 [2024-07-15 16:09:57.808191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0980 with addr=10.0.0.2, port=4420
00:28:29.050 [2024-07-15 16:09:57.808199] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0980 is same with the state(5) to be set
00:28:29.050 [2024-07-15 16:09:57.808383] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0980 (9): Bad file descriptor
00:28:29.050 [2024-07-15 16:09:57.808561] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:29.050 [2024-07-15 16:09:57.808571] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:29.050 [2024-07-15 16:09:57.808577] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:29.050 [2024-07-15 16:09:57.811413] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:29.050 [2024-07-15 16:09:57.820950] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:29.050 [2024-07-15 16:09:57.821309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:29.050 [2024-07-15 16:09:57.821327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0980 with addr=10.0.0.2, port=4420
00:28:29.050 [2024-07-15 16:09:57.821335] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0980 is same with the state(5) to be set
00:28:29.050 [2024-07-15 16:09:57.821520] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0980 (9): Bad file descriptor
00:28:29.050 [2024-07-15 16:09:57.821694] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:29.050 [2024-07-15 16:09:57.821704] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:29.050 [2024-07-15 16:09:57.821710] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:29.050 [2024-07-15 16:09:57.824552] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:29.050 [2024-07-15 16:09:57.834108] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:29.050 [2024-07-15 16:09:57.834503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:29.050 [2024-07-15 16:09:57.834520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0980 with addr=10.0.0.2, port=4420
00:28:29.050 [2024-07-15 16:09:57.834528] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0980 is same with the state(5) to be set
00:28:29.050 [2024-07-15 16:09:57.834705] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0980 (9): Bad file descriptor
00:28:29.050 [2024-07-15 16:09:57.834883] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:29.050 [2024-07-15 16:09:57.834893] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:29.050 [2024-07-15 16:09:57.834899] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:29.050 [2024-07-15 16:09:57.837739] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:29.050 [2024-07-15 16:09:57.847280] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:29.050 [2024-07-15 16:09:57.847664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:29.050 [2024-07-15 16:09:57.847681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0980 with addr=10.0.0.2, port=4420
00:28:29.050 [2024-07-15 16:09:57.847689] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0980 is same with the state(5) to be set
00:28:29.050 [2024-07-15 16:09:57.847867] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0980 (9): Bad file descriptor
00:28:29.050 [2024-07-15 16:09:57.848044] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:29.050 [2024-07-15 16:09:57.848054] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:29.050 [2024-07-15 16:09:57.848061] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:29.050 [2024-07-15 16:09:57.850899] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:29.050 [2024-07-15 16:09:57.860439] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:29.050 [2024-07-15 16:09:57.860740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:29.050 [2024-07-15 16:09:57.860757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0980 with addr=10.0.0.2, port=4420
00:28:29.050 [2024-07-15 16:09:57.860765] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0980 is same with the state(5) to be set
00:28:29.050 [2024-07-15 16:09:57.860942] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0980 (9): Bad file descriptor
00:28:29.050 [2024-07-15 16:09:57.861121] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:29.050 [2024-07-15 16:09:57.861130] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:29.050 [2024-07-15 16:09:57.861137] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:29.050 [2024-07-15 16:09:57.863977] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:29.050 [2024-07-15 16:09:57.873515] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:29.050 [2024-07-15 16:09:57.873958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:29.050 [2024-07-15 16:09:57.873975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0980 with addr=10.0.0.2, port=4420
00:28:29.050 [2024-07-15 16:09:57.873986] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0980 is same with the state(5) to be set
00:28:29.050 [2024-07-15 16:09:57.874164] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0980 (9): Bad file descriptor
00:28:29.050 [2024-07-15 16:09:57.874347] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:29.050 [2024-07-15 16:09:57.874358] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:29.050 [2024-07-15 16:09:57.874364] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:29.050 [2024-07-15 16:09:57.877195] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:29.050 [2024-07-15 16:09:57.886583] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:29.050 [2024-07-15 16:09:57.886902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:29.050 [2024-07-15 16:09:57.886919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0980 with addr=10.0.0.2, port=4420
00:28:29.050 [2024-07-15 16:09:57.886927] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0980 is same with the state(5) to be set
00:28:29.050 [2024-07-15 16:09:57.887104] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0980 (9): Bad file descriptor
00:28:29.050 [2024-07-15 16:09:57.887287] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:29.050 [2024-07-15 16:09:57.887298] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:29.050 [2024-07-15 16:09:57.887305] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:29.050 [2024-07-15 16:09:57.890137] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:29.050 [2024-07-15 16:09:57.899677] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:29.050 [2024-07-15 16:09:57.899991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:29.050 [2024-07-15 16:09:57.900008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0980 with addr=10.0.0.2, port=4420
00:28:29.050 [2024-07-15 16:09:57.900016] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0980 is same with the state(5) to be set
00:28:29.050 [2024-07-15 16:09:57.900193] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0980 (9): Bad file descriptor
00:28:29.050 [2024-07-15 16:09:57.900377] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:29.050 [2024-07-15 16:09:57.900388] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:29.050 [2024-07-15 16:09:57.900394] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:29.050 [2024-07-15 16:09:57.903233] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:29.050 [2024-07-15 16:09:57.912772] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:29.051 [2024-07-15 16:09:57.913090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:29.051 [2024-07-15 16:09:57.913107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0980 with addr=10.0.0.2, port=4420
00:28:29.051 [2024-07-15 16:09:57.913114] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0980 is same with the state(5) to be set
00:28:29.051 [2024-07-15 16:09:57.913295] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0980 (9): Bad file descriptor
00:28:29.051 [2024-07-15 16:09:57.913474] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:29.051 [2024-07-15 16:09:57.913486] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:29.051 [2024-07-15 16:09:57.913492] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:29.051 [2024-07-15 16:09:57.916330] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:29.051 [2024-07-15 16:09:57.925873] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:29.051 [2024-07-15 16:09:57.926191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:29.051 [2024-07-15 16:09:57.926209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0980 with addr=10.0.0.2, port=4420
00:28:29.051 [2024-07-15 16:09:57.926216] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0980 is same with the state(5) to be set
00:28:29.051 [2024-07-15 16:09:57.926399] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0980 (9): Bad file descriptor
00:28:29.051 [2024-07-15 16:09:57.926577] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:29.051 [2024-07-15 16:09:57.926586] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:29.051 [2024-07-15 16:09:57.926593] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:29.051 [2024-07-15 16:09:57.929427] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:29.051 [2024-07-15 16:09:57.938971] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:29.051 [2024-07-15 16:09:57.939395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:29.051 [2024-07-15 16:09:57.939413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0980 with addr=10.0.0.2, port=4420
00:28:29.051 [2024-07-15 16:09:57.939421] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0980 is same with the state(5) to be set
00:28:29.051 [2024-07-15 16:09:57.939599] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0980 (9): Bad file descriptor
00:28:29.051 [2024-07-15 16:09:57.939777] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:29.051 [2024-07-15 16:09:57.939786] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:29.051 [2024-07-15 16:09:57.939792] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:29.051 [2024-07-15 16:09:57.942629] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:29.051 [2024-07-15 16:09:57.952171] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:29.051 [2024-07-15 16:09:57.952606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:29.051 [2024-07-15 16:09:57.952624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0980 with addr=10.0.0.2, port=4420
00:28:29.051 [2024-07-15 16:09:57.952631] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0980 is same with the state(5) to be set
00:28:29.051 [2024-07-15 16:09:57.952809] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0980 (9): Bad file descriptor
00:28:29.051 [2024-07-15 16:09:57.952988] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:29.051 [2024-07-15 16:09:57.952997] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:29.051 [2024-07-15 16:09:57.953004] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:29.051 [2024-07-15 16:09:57.955841] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:29.051 [2024-07-15 16:09:57.965217] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:29.051 [2024-07-15 16:09:57.965584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:29.051 [2024-07-15 16:09:57.965601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0980 with addr=10.0.0.2, port=4420
00:28:29.051 [2024-07-15 16:09:57.965609] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0980 is same with the state(5) to be set
00:28:29.051 [2024-07-15 16:09:57.965786] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0980 (9): Bad file descriptor
00:28:29.051 [2024-07-15 16:09:57.965965] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:29.051 [2024-07-15 16:09:57.965975] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:29.051 [2024-07-15 16:09:57.965981] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:29.051 [2024-07-15 16:09:57.968821] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:29.051 [2024-07-15 16:09:57.978362] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:29.051 [2024-07-15 16:09:57.978730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:29.051 [2024-07-15 16:09:57.978747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0980 with addr=10.0.0.2, port=4420
00:28:29.051 [2024-07-15 16:09:57.978754] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0980 is same with the state(5) to be set
00:28:29.051 [2024-07-15 16:09:57.978932] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0980 (9): Bad file descriptor
00:28:29.051 [2024-07-15 16:09:57.979112] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:29.051 [2024-07-15 16:09:57.979121] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:29.051 [2024-07-15 16:09:57.979128] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:29.051 [2024-07-15 16:09:57.981966] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:29.310 [2024-07-15 16:09:57.991519] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:29.310 [2024-07-15 16:09:57.991904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:29.310 [2024-07-15 16:09:57.991921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0980 with addr=10.0.0.2, port=4420
00:28:29.310 [2024-07-15 16:09:57.991928] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0980 is same with the state(5) to be set
00:28:29.310 [2024-07-15 16:09:57.992106] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0980 (9): Bad file descriptor
00:28:29.310 [2024-07-15 16:09:57.992288] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:29.310 [2024-07-15 16:09:57.992299] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:29.310 [2024-07-15 16:09:57.992305] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:29.310 [2024-07-15 16:09:57.995134] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:29.310 [2024-07-15 16:09:58.004676] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:29.310 [2024-07-15 16:09:58.005044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:29.310 [2024-07-15 16:09:58.005061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0980 with addr=10.0.0.2, port=4420
00:28:29.310 [2024-07-15 16:09:58.005072] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0980 is same with the state(5) to be set
00:28:29.310 [2024-07-15 16:09:58.005253] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0980 (9): Bad file descriptor
00:28:29.310 [2024-07-15 16:09:58.005431] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:29.310 [2024-07-15 16:09:58.005441] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:29.310 [2024-07-15 16:09:58.005447] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:29.310 [2024-07-15 16:09:58.008281] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:29.310 [2024-07-15 16:09:58.017828] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:29.310 [2024-07-15 16:09:58.018143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:29.310 [2024-07-15 16:09:58.018161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0980 with addr=10.0.0.2, port=4420
00:28:29.310 [2024-07-15 16:09:58.018169] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0980 is same with the state(5) to be set
00:28:29.310 [2024-07-15 16:09:58.018352] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0980 (9): Bad file descriptor
00:28:29.310 [2024-07-15 16:09:58.018530] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:29.310 [2024-07-15 16:09:58.018541] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:29.310 [2024-07-15 16:09:58.018547] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:29.310 [2024-07-15 16:09:58.021385] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:29.310 [2024-07-15 16:09:58.030926] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:29.310 [2024-07-15 16:09:58.031340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.310 [2024-07-15 16:09:58.031358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0980 with addr=10.0.0.2, port=4420 00:28:29.310 [2024-07-15 16:09:58.031366] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0980 is same with the state(5) to be set 00:28:29.310 [2024-07-15 16:09:58.031544] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0980 (9): Bad file descriptor 00:28:29.310 [2024-07-15 16:09:58.031723] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:29.310 [2024-07-15 16:09:58.031733] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:29.310 [2024-07-15 16:09:58.031739] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:29.310 [2024-07-15 16:09:58.034575] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:29.310 [2024-07-15 16:09:58.044109] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:29.310 [2024-07-15 16:09:58.044483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.310 [2024-07-15 16:09:58.044500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0980 with addr=10.0.0.2, port=4420 00:28:29.310 [2024-07-15 16:09:58.044509] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0980 is same with the state(5) to be set 00:28:29.310 [2024-07-15 16:09:58.044686] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0980 (9): Bad file descriptor 00:28:29.310 [2024-07-15 16:09:58.044864] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:29.310 [2024-07-15 16:09:58.044873] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:29.310 [2024-07-15 16:09:58.044884] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:29.310 [2024-07-15 16:09:58.047714] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:29.310 [2024-07-15 16:09:58.057253] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:29.310 [2024-07-15 16:09:58.057619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.310 [2024-07-15 16:09:58.057636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0980 with addr=10.0.0.2, port=4420 00:28:29.310 [2024-07-15 16:09:58.057644] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0980 is same with the state(5) to be set 00:28:29.310 [2024-07-15 16:09:58.057822] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0980 (9): Bad file descriptor 00:28:29.310 [2024-07-15 16:09:58.058000] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:29.310 [2024-07-15 16:09:58.058009] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:29.310 [2024-07-15 16:09:58.058017] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:29.310 [2024-07-15 16:09:58.060850] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:29.310 [2024-07-15 16:09:58.070467] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:29.310 [2024-07-15 16:09:58.070906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.310 [2024-07-15 16:09:58.070924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0980 with addr=10.0.0.2, port=4420 00:28:29.311 [2024-07-15 16:09:58.070931] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0980 is same with the state(5) to be set 00:28:29.311 [2024-07-15 16:09:58.071111] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0980 (9): Bad file descriptor 00:28:29.311 [2024-07-15 16:09:58.071296] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:29.311 [2024-07-15 16:09:58.071307] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:29.311 [2024-07-15 16:09:58.071314] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:29.311 [2024-07-15 16:09:58.074164] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:29.311 [2024-07-15 16:09:58.083587] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:29.311 [2024-07-15 16:09:58.084048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.311 [2024-07-15 16:09:58.084066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0980 with addr=10.0.0.2, port=4420 00:28:29.311 [2024-07-15 16:09:58.084073] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0980 is same with the state(5) to be set 00:28:29.311 [2024-07-15 16:09:58.084257] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0980 (9): Bad file descriptor 00:28:29.311 [2024-07-15 16:09:58.084436] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:29.311 [2024-07-15 16:09:58.084446] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:29.311 [2024-07-15 16:09:58.084452] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:29.311 [2024-07-15 16:09:58.087298] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:29.311 [2024-07-15 16:09:58.096676] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:29.311 [2024-07-15 16:09:58.097047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.311 [2024-07-15 16:09:58.097064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0980 with addr=10.0.0.2, port=4420 00:28:29.311 [2024-07-15 16:09:58.097071] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0980 is same with the state(5) to be set 00:28:29.311 [2024-07-15 16:09:58.097252] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0980 (9): Bad file descriptor 00:28:29.311 [2024-07-15 16:09:58.097431] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:29.311 [2024-07-15 16:09:58.097440] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:29.311 [2024-07-15 16:09:58.097446] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:29.311 [2024-07-15 16:09:58.100282] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:29.311 [2024-07-15 16:09:58.109810] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:29.311 [2024-07-15 16:09:58.110269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.311 [2024-07-15 16:09:58.110286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0980 with addr=10.0.0.2, port=4420 00:28:29.311 [2024-07-15 16:09:58.110293] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0980 is same with the state(5) to be set 00:28:29.311 [2024-07-15 16:09:58.110471] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0980 (9): Bad file descriptor 00:28:29.311 [2024-07-15 16:09:58.110648] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:29.311 [2024-07-15 16:09:58.110658] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:29.311 [2024-07-15 16:09:58.110664] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:29.311 [2024-07-15 16:09:58.113496] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:29.311 [2024-07-15 16:09:58.122859] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:29.311 [2024-07-15 16:09:58.123312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.311 [2024-07-15 16:09:58.123330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0980 with addr=10.0.0.2, port=4420 00:28:29.311 [2024-07-15 16:09:58.123339] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0980 is same with the state(5) to be set 00:28:29.311 [2024-07-15 16:09:58.123516] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0980 (9): Bad file descriptor 00:28:29.311 [2024-07-15 16:09:58.123693] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:29.311 [2024-07-15 16:09:58.123703] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:29.311 [2024-07-15 16:09:58.123709] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:29.311 [2024-07-15 16:09:58.126547] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:29.311 [2024-07-15 16:09:58.135905] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:29.311 [2024-07-15 16:09:58.136343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.311 [2024-07-15 16:09:58.136361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0980 with addr=10.0.0.2, port=4420 00:28:29.311 [2024-07-15 16:09:58.136368] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0980 is same with the state(5) to be set 00:28:29.311 [2024-07-15 16:09:58.136553] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0980 (9): Bad file descriptor 00:28:29.311 [2024-07-15 16:09:58.136733] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:29.311 [2024-07-15 16:09:58.136742] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:29.311 [2024-07-15 16:09:58.136749] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:29.311 [2024-07-15 16:09:58.139582] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:29.311 [2024-07-15 16:09:58.148945] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:29.311 [2024-07-15 16:09:58.149339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.311 [2024-07-15 16:09:58.149356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0980 with addr=10.0.0.2, port=4420 00:28:29.311 [2024-07-15 16:09:58.149363] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0980 is same with the state(5) to be set 00:28:29.311 [2024-07-15 16:09:58.149541] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0980 (9): Bad file descriptor 00:28:29.311 [2024-07-15 16:09:58.149720] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:29.311 [2024-07-15 16:09:58.149730] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:29.311 [2024-07-15 16:09:58.149736] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:29.311 [2024-07-15 16:09:58.152569] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:29.311 [2024-07-15 16:09:58.162101] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:29.311 [2024-07-15 16:09:58.162490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.311 [2024-07-15 16:09:58.162507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0980 with addr=10.0.0.2, port=4420 00:28:29.311 [2024-07-15 16:09:58.162514] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0980 is same with the state(5) to be set 00:28:29.311 [2024-07-15 16:09:58.162692] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0980 (9): Bad file descriptor 00:28:29.311 [2024-07-15 16:09:58.162870] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:29.311 [2024-07-15 16:09:58.162879] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:29.311 [2024-07-15 16:09:58.162885] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:29.311 [2024-07-15 16:09:58.165722] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:29.311 [2024-07-15 16:09:58.175254] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:29.311 [2024-07-15 16:09:58.175717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.311 [2024-07-15 16:09:58.175733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0980 with addr=10.0.0.2, port=4420 00:28:29.311 [2024-07-15 16:09:58.175740] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0980 is same with the state(5) to be set 00:28:29.311 [2024-07-15 16:09:58.175918] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0980 (9): Bad file descriptor 00:28:29.311 [2024-07-15 16:09:58.176096] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:29.311 [2024-07-15 16:09:58.176106] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:29.311 [2024-07-15 16:09:58.176116] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:29.311 [2024-07-15 16:09:58.178949] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:29.311 [2024-07-15 16:09:58.188313] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:29.311 [2024-07-15 16:09:58.188647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.311 [2024-07-15 16:09:58.188665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0980 with addr=10.0.0.2, port=4420 00:28:29.311 [2024-07-15 16:09:58.188672] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0980 is same with the state(5) to be set 00:28:29.311 [2024-07-15 16:09:58.188849] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0980 (9): Bad file descriptor 00:28:29.311 [2024-07-15 16:09:58.189027] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:29.311 [2024-07-15 16:09:58.189036] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:29.311 [2024-07-15 16:09:58.189043] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:29.311 [2024-07-15 16:09:58.191877] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:29.311 16:09:58 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:29.311 16:09:58 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@862 -- # return 0 00:28:29.311 [2024-07-15 16:09:58.201418] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:29.311 16:09:58 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:28:29.311 16:09:58 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@728 -- # xtrace_disable 00:28:29.311 [2024-07-15 16:09:58.201926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.311 [2024-07-15 16:09:58.201944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0980 with addr=10.0.0.2, port=4420 00:28:29.311 [2024-07-15 16:09:58.201951] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0980 is same with the state(5) to be set 00:28:29.311 16:09:58 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:29.311 [2024-07-15 16:09:58.202128] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0980 (9): Bad file descriptor 00:28:29.311 [2024-07-15 16:09:58.202314] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:29.311 [2024-07-15 16:09:58.202325] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:29.311 [2024-07-15 16:09:58.202331] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:29.311 [2024-07-15 16:09:58.205157] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:29.311 [2024-07-15 16:09:58.214535] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:29.311 [2024-07-15 16:09:58.214970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.311 [2024-07-15 16:09:58.214988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0980 with addr=10.0.0.2, port=4420 00:28:29.311 [2024-07-15 16:09:58.214995] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0980 is same with the state(5) to be set 00:28:29.311 [2024-07-15 16:09:58.215172] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0980 (9): Bad file descriptor 00:28:29.311 [2024-07-15 16:09:58.215354] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:29.311 [2024-07-15 16:09:58.215365] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:29.311 [2024-07-15 16:09:58.215375] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:29.311 [2024-07-15 16:09:58.218208] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:29.311 [2024-07-15 16:09:58.227582] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:29.311 [2024-07-15 16:09:58.228003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.311 [2024-07-15 16:09:58.228020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0980 with addr=10.0.0.2, port=4420 00:28:29.311 [2024-07-15 16:09:58.228027] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0980 is same with the state(5) to be set 00:28:29.311 [2024-07-15 16:09:58.228205] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0980 (9): Bad file descriptor 00:28:29.311 [2024-07-15 16:09:58.228390] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:29.311 [2024-07-15 16:09:58.228401] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:29.311 [2024-07-15 16:09:58.228407] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:29.311 [2024-07-15 16:09:58.231243] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
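The repeated "resetting controller ... connect() failed, errno = 111" blocks above are bdev_nvme's automatic reconnect loop: the listener at 10.0.0.2:4420 is down while bdevperf keeps issuing resets, so every attempt ends in ECONNREFUSED (errno 111) and "Resetting controller failed." until the target comes back. A minimal sketch of bounding that loop at attach time, assuming the rpc.py reconnect options below exist in this SPDK build (verify with scripts/rpc.py bdev_nvme_attach_controller -h):

    # Hedged sketch: cap the reconnect loop when attaching the controller.
    # --ctrlr-loss-timeout-sec / --reconnect-delay-sec are assumed flag names;
    # check the local rpc.py help before relying on them.
    ./scripts/rpc.py bdev_nvme_attach_controller -b Nvme1 -t tcp -f ipv4 \
        -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1 \
        --ctrlr-loss-timeout-sec 30 --reconnect-delay-sec 2

With limits like these the bdev layer gives up after roughly 30 seconds of failed reconnects instead of retrying indefinitely, which is what produces the long run of identical error blocks in this log.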
00:28:29.311 16:09:58 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:29.311 16:09:58 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:29.311 16:09:58 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:29.311 16:09:58 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:29.311 [2024-07-15 16:09:58.240773] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:29.311 [2024-07-15 16:09:58.241172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.311 [2024-07-15 16:09:58.241188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0980 with addr=10.0.0.2, port=4420 00:28:29.311 [2024-07-15 16:09:58.241196] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0980 is same with the state(5) to be set 00:28:29.311 [2024-07-15 16:09:58.241377] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0980 (9): Bad file descriptor 00:28:29.311 [2024-07-15 16:09:58.241555] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:29.311 [2024-07-15 16:09:58.241564] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:29.311 [2024-07-15 16:09:58.241571] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:29.311 [2024-07-15 16:09:58.242091] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:29.570 [2024-07-15 16:09:58.244418] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:29.570 16:09:58 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:29.570 16:09:58 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:28:29.570 16:09:58 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:29.570 16:09:58 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:29.570 [2024-07-15 16:09:58.253953] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:29.570 [2024-07-15 16:09:58.254351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.570 [2024-07-15 16:09:58.254369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0980 with addr=10.0.0.2, port=4420 00:28:29.570 [2024-07-15 16:09:58.254377] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0980 is same with the state(5) to be set 00:28:29.570 [2024-07-15 16:09:58.254556] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0980 (9): Bad file descriptor 00:28:29.570 [2024-07-15 16:09:58.254739] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:29.570 [2024-07-15 16:09:58.254750] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:29.570 [2024-07-15 16:09:58.254758] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:28:29.570 [2024-07-15 16:09:58.257593] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:29.570 [2024-07-15 16:09:58.267123] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:29.570 [2024-07-15 16:09:58.267584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.570 [2024-07-15 16:09:58.267601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0980 with addr=10.0.0.2, port=4420 00:28:29.570 [2024-07-15 16:09:58.267609] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0980 is same with the state(5) to be set 00:28:29.570 [2024-07-15 16:09:58.267786] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0980 (9): Bad file descriptor 00:28:29.570 [2024-07-15 16:09:58.267965] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:29.570 [2024-07-15 16:09:58.267975] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:29.570 [2024-07-15 16:09:58.267981] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:29.570 [2024-07-15 16:09:58.270813] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:29.570 [2024-07-15 16:09:58.280183] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:29.570 [2024-07-15 16:09:58.280632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.570 [2024-07-15 16:09:58.280650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0980 with addr=10.0.0.2, port=4420 00:28:29.570 [2024-07-15 16:09:58.280658] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0980 is same with the state(5) to be set 00:28:29.570 [2024-07-15 16:09:58.280836] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0980 (9): Bad file descriptor 00:28:29.570 [2024-07-15 16:09:58.281016] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:29.570 [2024-07-15 16:09:58.281025] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:29.570 [2024-07-15 16:09:58.281032] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:29.570 [2024-07-15 16:09:58.283867] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:29.570 Malloc0 00:28:29.570 16:09:58 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:29.570 16:09:58 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:29.570 16:09:58 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:29.570 16:09:58 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:29.570 [2024-07-15 16:09:58.293248] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:29.570 [2024-07-15 16:09:58.293708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.570 [2024-07-15 16:09:58.293725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0980 with addr=10.0.0.2, port=4420 00:28:29.570 [2024-07-15 16:09:58.293732] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0980 is same with the state(5) to be set 00:28:29.570 [2024-07-15 16:09:58.293911] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0980 (9): Bad file descriptor 00:28:29.570 [2024-07-15 16:09:58.294093] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:29.570 [2024-07-15 16:09:58.294103] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:29.570 [2024-07-15 16:09:58.294109] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:29.570 [2024-07-15 16:09:58.296939] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:29.570 16:09:58 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:29.570 16:09:58 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:28:29.570 16:09:58 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:29.570 16:09:58 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:29.570 [2024-07-15 16:09:58.306311] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:29.570 [2024-07-15 16:09:58.306782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:29.570 [2024-07-15 16:09:58.306800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0980 with addr=10.0.0.2, port=4420 00:28:29.570 [2024-07-15 16:09:58.306807] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0980 is same with the state(5) to be set 00:28:29.570 [2024-07-15 16:09:58.306985] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0980 (9): Bad file descriptor 00:28:29.570 [2024-07-15 16:09:58.307164] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:29.570 [2024-07-15 16:09:58.307174] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:29.570 [2024-07-15 16:09:58.307180] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:28:29.570 16:09:58 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:29.570 16:09:58 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:29.570 16:09:58 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:29.570 16:09:58 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:29.570 [2024-07-15 16:09:58.310013] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:29.570 [2024-07-15 16:09:58.312617] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:29.570 16:09:58 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:29.570 16:09:58 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 3918911 00:28:29.570 [2024-07-15 16:09:58.319382] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:29.570 [2024-07-15 16:09:58.436040] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:28:39.541 00:28:39.541 Latency(us) 00:28:39.541 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:39.541 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:28:39.541 Verification LBA range: start 0x0 length 0x4000 00:28:39.541 Nvme1n1 : 15.01 8091.11 31.61 12988.86 0.00 6052.18 658.92 20401.64 00:28:39.541 =================================================================================================================== 00:28:39.541 Total : 8091.11 31.61 12988.86 0.00 6052.18 658.92 20401.64 00:28:39.541 16:10:06 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync 00:28:39.541 16:10:06 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:39.541 16:10:06 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:39.541 16:10:06 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:39.541 16:10:06 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:39.541 16:10:06 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT 00:28:39.541 16:10:06 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini 00:28:39.541 16:10:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@488 -- # nvmfcleanup 00:28:39.541 16:10:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@117 -- # sync 00:28:39.541 16:10:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:28:39.541 16:10:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@120 -- # set +e 00:28:39.541 16:10:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@121 -- # for i in {1..20} 00:28:39.541 16:10:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:28:39.541 rmmod nvme_tcp 00:28:39.541 rmmod nvme_fabrics 00:28:39.541 rmmod nvme_keyring 00:28:39.541 16:10:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:28:39.541 16:10:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@124 -- # set -e 00:28:39.541 16:10:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@125 -- # return 0 00:28:39.541 16:10:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@489 -- # '[' -n 3919843 ']' 00:28:39.541 16:10:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@490 -- # killprocess 3919843 00:28:39.541 16:10:07 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@948 -- # '[' -z 3919843 ']' 
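Interleaved with the reconnect noise, the harness finishes bringing up the target over RPC: a TCP transport (the nvmf_create_transport -t tcp -o -u 8192 call above), a 64 MiB malloc bdev with 512-byte blocks, subsystem nqn.2016-06.io.spdk:cnode1 with Malloc0 as its namespace, and a TCP listener on 10.0.0.2:4420; bdevperf then runs for 15 s and reports ~8091 IOPS with ~12989 failed I/Os per second while the controller flaps. The same sequence, collected in order as it would be issued directly against a running nvmf_tgt (the scripts/rpc.py path is assumed relative to the SPDK tree):

    # Target bring-up: the same rpc_cmd calls scattered through the log above.
    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420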
00:28:39.541 16:10:07 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@952 -- # kill -0 3919843 00:28:39.541 16:10:07 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@953 -- # uname 00:28:39.541 16:10:07 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:28:39.541 16:10:07 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3919843 00:28:39.541 16:10:07 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:28:39.541 16:10:07 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:28:39.541 16:10:07 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3919843' 00:28:39.541 killing process with pid 3919843 00:28:39.541 16:10:07 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@967 -- # kill 3919843 00:28:39.541 16:10:07 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@972 -- # wait 3919843 00:28:39.541 16:10:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:28:39.541 16:10:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:28:39.541 16:10:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:28:39.541 16:10:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:39.541 16:10:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@278 -- # remove_spdk_ns 00:28:39.541 16:10:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:39.541 16:10:07 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:39.541 16:10:07 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:40.477 16:10:09 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:28:40.477 00:28:40.477 real 0m26.065s 00:28:40.477 user 1m3.041s 00:28:40.477 sys 0m6.043s 00:28:40.477 16:10:09 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:28:40.477 16:10:09 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:40.477 ************************************ 00:28:40.477 END TEST nvmf_bdevperf 00:28:40.477 ************************************ 00:28:40.477 16:10:09 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:28:40.477 16:10:09 nvmf_tcp -- nvmf/nvmf.sh@123 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:28:40.477 16:10:09 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:28:40.477 16:10:09 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:40.477 16:10:09 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:40.477 ************************************ 00:28:40.477 START TEST nvmf_target_disconnect 00:28:40.477 ************************************ 00:28:40.477 16:10:09 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:28:40.735 * Looking for test storage... 
00:28:40.735 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:40.735 16:10:09 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:40.735 16:10:09 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@7 -- # uname -s 00:28:40.735 16:10:09 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:40.735 16:10:09 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:40.735 16:10:09 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:40.735 16:10:09 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:40.735 16:10:09 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:40.735 16:10:09 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:40.735 16:10:09 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:40.735 16:10:09 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:40.735 16:10:09 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:40.735 16:10:09 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:40.735 16:10:09 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:28:40.735 16:10:09 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:28:40.735 16:10:09 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:40.735 16:10:09 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:40.735 16:10:09 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:40.735 16:10:09 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:40.735 16:10:09 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:40.735 16:10:09 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:40.735 16:10:09 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:40.735 16:10:09 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:40.735 16:10:09 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:40.735 16:10:09 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:40.735 16:10:09 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:40.735 16:10:09 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:28:40.735 16:10:09 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:40.735 16:10:09 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@47 -- # : 0 00:28:40.735 16:10:09 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:40.736 16:10:09 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:40.736 16:10:09 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:40.736 16:10:09 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:40.736 16:10:09 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:40.736 16:10:09 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:40.736 16:10:09 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:40.736 16:10:09 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:40.736 16:10:09 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:28:40.736 16:10:09 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:28:40.736 16:10:09 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:28:40.736 16:10:09 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:28:40.736 16:10:09 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:28:40.736 16:10:09 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:40.736 16:10:09 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@448 -- # 
prepare_net_devs 00:28:40.736 16:10:09 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:28:40.736 16:10:09 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:28:40.736 16:10:09 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:40.736 16:10:09 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:40.736 16:10:09 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:40.736 16:10:09 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:28:40.736 16:10:09 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:28:40.736 16:10:09 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:28:40.736 16:10:09 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:28:46.107 16:10:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:46.107 16:10:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@291 -- # pci_devs=() 00:28:46.107 16:10:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:28:46.107 16:10:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:28:46.107 16:10:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:28:46.107 16:10:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:28:46.107 16:10:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:28:46.107 16:10:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:28:46.107 16:10:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 00:28:46.107 16:10:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@296 -- # e810=() 00:28:46.107 16:10:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:28:46.107 16:10:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@297 -- # x722=() 00:28:46.107 16:10:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@297 -- # local -ga x722 00:28:46.107 16:10:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:28:46.107 16:10:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:28:46.107 16:10:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:46.107 16:10:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:46.107 16:10:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:46.107 16:10:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:46.107 16:10:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:46.107 16:10:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:46.107 16:10:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:46.107 16:10:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:46.107 16:10:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 
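Here nvmftestinit's prepare_net_devs seeds the PCI ID tables for the NIC families the autotest supports (Intel E810/X722 and several Mellanox parts) and, in the lines that follow, resolves each matching PCI function to its kernel net device through sysfs. That mapping is just a directory listing; a sketch for the first port found, using the address reported in this log:

    # How common.sh maps a PCI function to its interface name:
    # the kernel publishes it under the device's sysfs node.
    ls /sys/bus/pci/devices/0000:86:00.0/net/
    # this run reports: cvl_0_0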
00:28:46.107 16:10:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:46.107 16:10:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:46.107 16:10:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:28:46.107 16:10:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:28:46.107 16:10:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:28:46.107 16:10:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:28:46.107 16:10:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:28:46.107 16:10:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:28:46.107 16:10:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:46.107 16:10:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:28:46.107 Found 0000:86:00.0 (0x8086 - 0x159b) 00:28:46.107 16:10:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:46.107 16:10:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:46.107 16:10:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:46.107 16:10:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:46.107 16:10:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:46.107 16:10:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:46.107 16:10:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:28:46.107 Found 0000:86:00.1 (0x8086 - 0x159b) 00:28:46.107 16:10:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:46.107 16:10:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:46.107 16:10:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:46.107 16:10:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:46.107 16:10:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:46.107 16:10:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:28:46.107 16:10:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:28:46.107 16:10:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:28:46.107 16:10:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:46.107 16:10:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:46.107 16:10:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:46.107 16:10:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:46.107 16:10:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:46.107 16:10:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:46.107 16:10:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:46.107 16:10:14 
nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:28:46.107 Found net devices under 0000:86:00.0: cvl_0_0 00:28:46.107 16:10:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:46.107 16:10:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:46.107 16:10:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:46.107 16:10:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:46.107 16:10:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:46.107 16:10:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:46.107 16:10:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:46.107 16:10:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:46.107 16:10:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:28:46.107 Found net devices under 0000:86:00.1: cvl_0_1 00:28:46.107 16:10:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:46.107 16:10:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:28:46.107 16:10:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # is_hw=yes 00:28:46.107 16:10:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:28:46.107 16:10:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:28:46.107 16:10:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:28:46.107 16:10:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:46.107 16:10:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:46.107 16:10:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:46.107 16:10:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:28:46.107 16:10:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:46.107 16:10:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:46.107 16:10:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:28:46.107 16:10:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:46.107 16:10:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:46.107 16:10:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:28:46.107 16:10:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:28:46.107 16:10:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:28:46.107 16:10:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:46.107 16:10:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:46.107 16:10:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev 
cvl_0_0 00:28:46.107 16:10:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:28:46.107 16:10:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:46.107 16:10:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:46.107 16:10:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:46.107 16:10:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:28:46.107 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:46.107 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.155 ms 00:28:46.107 00:28:46.107 --- 10.0.0.2 ping statistics --- 00:28:46.107 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:46.107 rtt min/avg/max/mdev = 0.155/0.155/0.155/0.000 ms 00:28:46.107 16:10:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:46.107 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:46.107 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.201 ms 00:28:46.107 00:28:46.107 --- 10.0.0.1 ping statistics --- 00:28:46.107 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:46.107 rtt min/avg/max/mdev = 0.201/0.201/0.201/0.000 ms 00:28:46.107 16:10:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:46.107 16:10:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@422 -- # return 0 00:28:46.107 16:10:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:28:46.107 16:10:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:46.107 16:10:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:28:46.107 16:10:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:28:46.107 16:10:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:46.107 16:10:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:28:46.107 16:10:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:28:46.107 16:10:14 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:28:46.107 16:10:14 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:28:46.107 16:10:14 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:46.107 16:10:14 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:28:46.107 ************************************ 00:28:46.107 START TEST nvmf_target_disconnect_tc1 00:28:46.107 ************************************ 00:28:46.107 16:10:14 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1123 -- # nvmf_target_disconnect_tc1 00:28:46.107 16:10:14 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:28:46.107 16:10:14 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@648 -- # local es=0 00:28:46.107 
16:10:14 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:28:46.107 16:10:14 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:28:46.107 16:10:14 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:28:46.107 16:10:14 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:28:46.107 16:10:14 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:28:46.107 16:10:14 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:28:46.107 16:10:14 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:28:46.107 16:10:14 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:28:46.107 16:10:14 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect ]] 00:28:46.107 16:10:14 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:28:46.107 EAL: No free 2048 kB hugepages reported on node 1 00:28:46.366 [2024-07-15 16:10:15.065308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.366 [2024-07-15 16:10:15.065352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x200be60 with addr=10.0.0.2, port=4420 00:28:46.366 [2024-07-15 16:10:15.065374] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:28:46.366 [2024-07-15 16:10:15.065386] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:28:46.366 [2024-07-15 16:10:15.065392] nvme.c: 913:spdk_nvme_probe: *ERROR*: Create probe context failed 00:28:46.366 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:28:46.366 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:28:46.366 Initializing NVMe Controllers 00:28:46.366 16:10:15 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@651 -- # es=1 00:28:46.366 16:10:15 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:28:46.366 16:10:15 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:28:46.366 16:10:15 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:28:46.366 00:28:46.366 real 0m0.100s 00:28:46.366 user 0m0.038s 00:28:46.366 sys 
0m0.062s 00:28:46.366 16:10:15 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:28:46.366 16:10:15 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:46.366 ************************************ 00:28:46.366 END TEST nvmf_target_disconnect_tc1 00:28:46.366 ************************************ 00:28:46.366 16:10:15 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1142 -- # return 0 00:28:46.366 16:10:15 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:28:46.366 16:10:15 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:28:46.366 16:10:15 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:46.366 16:10:15 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:28:46.366 ************************************ 00:28:46.366 START TEST nvmf_target_disconnect_tc2 00:28:46.366 ************************************ 00:28:46.366 16:10:15 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1123 -- # nvmf_target_disconnect_tc2 00:28:46.366 16:10:15 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 10.0.0.2 00:28:46.366 16:10:15 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:28:46.366 16:10:15 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:28:46.366 16:10:15 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable 00:28:46.366 16:10:15 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:46.366 16:10:15 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=3924997 00:28:46.366 16:10:15 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 3924997 00:28:46.366 16:10:15 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:28:46.366 16:10:15 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@829 -- # '[' -z 3924997 ']' 00:28:46.366 16:10:15 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:46.366 16:10:15 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:46.366 16:10:15 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:46.366 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
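The tc1 case above passes precisely because the connection fails: no target is listening yet, so the reconnect example's connect() returns errno 111 (ECONNREFUSED), spdk_nvme_probe() fails, es becomes 1, and the NOT wrapper inverts the non-zero exit status into a test success. A simplified sketch of that expected-failure pattern, not the exact autotest_common.sh implementation (which also validates the executable and special-cases signal exits, es > 128):

# Succeed only if the wrapped command fails; a minimal stand-in for NOT().
NOT() {
    local es=0
    "$@" || es=$?          # capture the exit status without aborting the script
    (( es != 0 ))          # arithmetic true (exit 0) only when the command failed
}

# Usage, mirroring host/target_disconnect.sh@32:
NOT ./build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'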
00:28:46.366 16:10:15 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:46.366 16:10:15 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:46.366 [2024-07-15 16:10:15.203119] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 00:28:46.366 [2024-07-15 16:10:15.203156] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:46.366 EAL: No free 2048 kB hugepages reported on node 1 00:28:46.367 [2024-07-15 16:10:15.272504] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:46.624 [2024-07-15 16:10:15.351930] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:46.624 [2024-07-15 16:10:15.351970] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:46.624 [2024-07-15 16:10:15.351977] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:46.624 [2024-07-15 16:10:15.351983] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:46.624 [2024-07-15 16:10:15.351988] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:46.624 [2024-07-15 16:10:15.352098] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:28:46.624 [2024-07-15 16:10:15.352219] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:28:46.624 [2024-07-15 16:10:15.352325] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:28:46.624 [2024-07-15 16:10:15.352325] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7 00:28:47.187 16:10:16 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:47.187 16:10:16 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@862 -- # return 0 00:28:47.187 16:10:16 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:28:47.187 16:10:16 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@728 -- # xtrace_disable 00:28:47.187 16:10:16 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:47.187 16:10:16 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:47.187 16:10:16 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:28:47.187 16:10:16 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:47.187 16:10:16 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:47.187 Malloc0 00:28:47.188 16:10:16 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:47.188 16:10:16 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:28:47.188 16:10:16 
nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:47.188 16:10:16 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:47.188 [2024-07-15 16:10:16.068438] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:47.188 16:10:16 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:47.188 16:10:16 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:47.188 16:10:16 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:47.188 16:10:16 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:47.188 16:10:16 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:47.188 16:10:16 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:28:47.188 16:10:16 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:47.188 16:10:16 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:47.188 16:10:16 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:47.188 16:10:16 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:47.188 16:10:16 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:47.188 16:10:16 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:47.188 [2024-07-15 16:10:16.100677] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:47.188 16:10:16 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:47.188 16:10:16 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:28:47.188 16:10:16 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:47.188 16:10:16 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:47.188 16:10:16 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:47.188 16:10:16 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=3925037 00:28:47.188 16:10:16 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2 00:28:47.188 16:10:16 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:28:47.444 EAL: No free 2048 kB 
hugepages reported on node 1
00:28:49.352 16:10:18 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 3924997
00:28:49.352 16:10:18 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2
[repeated "Read/Write completed with error (sct=0, sc=8)" / "starting I/O failed" entries omitted, one pair per outstanding I/O]
00:28:49.352 [2024-07-15 16:10:18.134357] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
[repeated "Read/Write completed with error (sct=0, sc=8)" / "starting I/O failed" entries omitted]
00:28:49.352 [2024-07-15 16:10:18.134556] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
[repeated "Read/Write completed with error (sct=0, sc=8)" / "starting I/O failed" entries omitted]
00:28:49.352 [2024-07-15 16:10:18.134751] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
[repeated "Read/Write completed with error (sct=0, sc=8)" / "starting I/O failed" entries omitted]
00:28:49.353 [2024-07-15 16:10:18.134946] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:28:49.353 [2024-07-15 16:10:18.135153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:49.353 [2024-07-15 16:10:18.135173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420
00:28:49.353 qpair failed and we were unable to recover it.
[the same connect() failed, errno = 111 / sock connection error of tqpair=0x7ffa4c000b90 / "qpair failed and we were unable to recover it." triplet repeats for every subsequent reconnect attempt from 16:10:18.135315 through 16:10:18.162293; duplicates omitted]
00:28:49.356 [2024-07-15 16:10:18.162412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.356 [2024-07-15 16:10:18.162443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:49.356 qpair failed and we were unable to recover it. 00:28:49.356 [2024-07-15 16:10:18.162729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.356 [2024-07-15 16:10:18.162760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:49.356 qpair failed and we were unable to recover it. 00:28:49.356 [2024-07-15 16:10:18.163051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.356 [2024-07-15 16:10:18.163082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:49.356 qpair failed and we were unable to recover it. 00:28:49.356 [2024-07-15 16:10:18.163223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.356 [2024-07-15 16:10:18.163286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:49.356 qpair failed and we were unable to recover it. 00:28:49.356 [2024-07-15 16:10:18.163527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.356 [2024-07-15 16:10:18.163557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:49.356 qpair failed and we were unable to recover it. 00:28:49.356 [2024-07-15 16:10:18.163775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.356 [2024-07-15 16:10:18.163806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:49.356 qpair failed and we were unable to recover it. 00:28:49.356 [2024-07-15 16:10:18.164097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.356 [2024-07-15 16:10:18.164113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:49.356 qpair failed and we were unable to recover it. 00:28:49.356 [2024-07-15 16:10:18.164236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.356 [2024-07-15 16:10:18.164255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:49.356 qpair failed and we were unable to recover it. 00:28:49.356 [2024-07-15 16:10:18.164449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.356 [2024-07-15 16:10:18.164480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:49.356 qpair failed and we were unable to recover it. 00:28:49.356 [2024-07-15 16:10:18.164716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.356 [2024-07-15 16:10:18.164747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:49.356 qpair failed and we were unable to recover it. 
00:28:49.356 [2024-07-15 16:10:18.164953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.356 [2024-07-15 16:10:18.164984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:49.356 qpair failed and we were unable to recover it. 00:28:49.356 [2024-07-15 16:10:18.165269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.356 [2024-07-15 16:10:18.165301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:49.356 qpair failed and we were unable to recover it. 00:28:49.356 [2024-07-15 16:10:18.165446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.356 [2024-07-15 16:10:18.165476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:49.356 qpair failed and we were unable to recover it. 00:28:49.356 [2024-07-15 16:10:18.165619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.356 [2024-07-15 16:10:18.165650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:49.356 qpair failed and we were unable to recover it. 00:28:49.356 [2024-07-15 16:10:18.165914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.356 [2024-07-15 16:10:18.165946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:49.356 qpair failed and we were unable to recover it. 00:28:49.356 [2024-07-15 16:10:18.166141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.356 [2024-07-15 16:10:18.166157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:49.356 qpair failed and we were unable to recover it. 00:28:49.356 [2024-07-15 16:10:18.166371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.356 [2024-07-15 16:10:18.166386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:49.356 qpair failed and we were unable to recover it. 00:28:49.356 [2024-07-15 16:10:18.166629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.356 [2024-07-15 16:10:18.166659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:49.356 qpair failed and we were unable to recover it. 00:28:49.356 [2024-07-15 16:10:18.166867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.356 [2024-07-15 16:10:18.166898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:49.356 qpair failed and we were unable to recover it. 00:28:49.356 [2024-07-15 16:10:18.167100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.356 [2024-07-15 16:10:18.167131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:49.356 qpair failed and we were unable to recover it. 
00:28:49.357 [2024-07-15 16:10:18.167417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.357 [2024-07-15 16:10:18.167448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:49.357 qpair failed and we were unable to recover it. 00:28:49.357 [2024-07-15 16:10:18.167667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.357 [2024-07-15 16:10:18.167698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:49.357 qpair failed and we were unable to recover it. 00:28:49.357 [2024-07-15 16:10:18.168005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.357 [2024-07-15 16:10:18.168021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:49.357 qpair failed and we were unable to recover it. 00:28:49.357 [2024-07-15 16:10:18.168206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.357 [2024-07-15 16:10:18.168221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:49.357 qpair failed and we were unable to recover it. 00:28:49.357 [2024-07-15 16:10:18.168421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.357 [2024-07-15 16:10:18.168436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:49.357 qpair failed and we were unable to recover it. 00:28:49.357 [2024-07-15 16:10:18.168602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.357 [2024-07-15 16:10:18.168617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:49.357 qpair failed and we were unable to recover it. 00:28:49.357 [2024-07-15 16:10:18.168730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.357 [2024-07-15 16:10:18.168760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:49.357 qpair failed and we were unable to recover it. 00:28:49.357 [2024-07-15 16:10:18.168905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.357 [2024-07-15 16:10:18.168936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:49.357 qpair failed and we were unable to recover it. 00:28:49.357 [2024-07-15 16:10:18.169134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.357 [2024-07-15 16:10:18.169165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:49.357 qpair failed and we were unable to recover it. 00:28:49.357 [2024-07-15 16:10:18.169323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.357 [2024-07-15 16:10:18.169356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:49.357 qpair failed and we were unable to recover it. 
00:28:49.357 [2024-07-15 16:10:18.169569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.357 [2024-07-15 16:10:18.169600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:49.357 qpair failed and we were unable to recover it. 00:28:49.357 [2024-07-15 16:10:18.169790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.357 [2024-07-15 16:10:18.169821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:49.357 qpair failed and we were unable to recover it. 00:28:49.357 [2024-07-15 16:10:18.169997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.357 [2024-07-15 16:10:18.170029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:49.357 qpair failed and we were unable to recover it. 00:28:49.357 [2024-07-15 16:10:18.170223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.357 [2024-07-15 16:10:18.170243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:49.357 qpair failed and we were unable to recover it. 00:28:49.357 [2024-07-15 16:10:18.170425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.357 [2024-07-15 16:10:18.170456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:49.357 qpair failed and we were unable to recover it. 00:28:49.357 [2024-07-15 16:10:18.170722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.357 [2024-07-15 16:10:18.170753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:49.357 qpair failed and we were unable to recover it. 00:28:49.357 [2024-07-15 16:10:18.170974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.357 [2024-07-15 16:10:18.171005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:49.357 qpair failed and we were unable to recover it. 00:28:49.357 [2024-07-15 16:10:18.171269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.357 [2024-07-15 16:10:18.171302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:49.357 qpair failed and we were unable to recover it. 00:28:49.357 [2024-07-15 16:10:18.171519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.357 [2024-07-15 16:10:18.171549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:49.357 qpair failed and we were unable to recover it. 00:28:49.357 [2024-07-15 16:10:18.171861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.357 [2024-07-15 16:10:18.171902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:49.357 qpair failed and we were unable to recover it. 
00:28:49.357 [2024-07-15 16:10:18.172096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.357 [2024-07-15 16:10:18.172111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:49.357 qpair failed and we were unable to recover it. 00:28:49.357 [2024-07-15 16:10:18.172347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.357 [2024-07-15 16:10:18.172379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:49.357 qpair failed and we were unable to recover it. 00:28:49.357 [2024-07-15 16:10:18.172602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.357 [2024-07-15 16:10:18.172633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:49.357 qpair failed and we were unable to recover it. 00:28:49.357 [2024-07-15 16:10:18.172842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.357 [2024-07-15 16:10:18.172872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:49.357 qpair failed and we were unable to recover it. 00:28:49.357 [2024-07-15 16:10:18.173123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.357 [2024-07-15 16:10:18.173139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:49.357 qpair failed and we were unable to recover it. 00:28:49.357 [2024-07-15 16:10:18.173306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.357 [2024-07-15 16:10:18.173337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:49.357 qpair failed and we were unable to recover it. 00:28:49.357 [2024-07-15 16:10:18.173537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.357 [2024-07-15 16:10:18.173568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:49.357 qpair failed and we were unable to recover it. 00:28:49.357 [2024-07-15 16:10:18.173770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.357 [2024-07-15 16:10:18.173807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:49.357 qpair failed and we were unable to recover it. 00:28:49.357 [2024-07-15 16:10:18.174031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.357 [2024-07-15 16:10:18.174062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:49.357 qpair failed and we were unable to recover it. 00:28:49.357 [2024-07-15 16:10:18.174294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.357 [2024-07-15 16:10:18.174325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:49.357 qpair failed and we were unable to recover it. 
00:28:49.357 [2024-07-15 16:10:18.174488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.357 [2024-07-15 16:10:18.174519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:49.357 qpair failed and we were unable to recover it. 00:28:49.357 [2024-07-15 16:10:18.174628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.357 [2024-07-15 16:10:18.174658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:49.357 qpair failed and we were unable to recover it. 00:28:49.357 [2024-07-15 16:10:18.174866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.357 [2024-07-15 16:10:18.174896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:49.357 qpair failed and we were unable to recover it. 00:28:49.357 [2024-07-15 16:10:18.175103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.357 [2024-07-15 16:10:18.175134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:49.357 qpair failed and we were unable to recover it. 00:28:49.357 [2024-07-15 16:10:18.175272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.357 [2024-07-15 16:10:18.175288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:49.357 qpair failed and we were unable to recover it. 00:28:49.357 [2024-07-15 16:10:18.175523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.357 [2024-07-15 16:10:18.175554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:49.357 qpair failed and we were unable to recover it. 00:28:49.357 [2024-07-15 16:10:18.175784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.357 [2024-07-15 16:10:18.175815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:49.357 qpair failed and we were unable to recover it. 00:28:49.357 [2024-07-15 16:10:18.176040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.357 [2024-07-15 16:10:18.176071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:49.357 qpair failed and we were unable to recover it. 00:28:49.357 [2024-07-15 16:10:18.176278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.357 [2024-07-15 16:10:18.176310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:49.357 qpair failed and we were unable to recover it. 00:28:49.357 [2024-07-15 16:10:18.176517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.357 [2024-07-15 16:10:18.176548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:49.357 qpair failed and we were unable to recover it. 
00:28:49.358 [2024-07-15 16:10:18.176706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.358 [2024-07-15 16:10:18.176737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:49.358 qpair failed and we were unable to recover it. 00:28:49.358 [2024-07-15 16:10:18.176955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.358 [2024-07-15 16:10:18.176987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:49.358 qpair failed and we were unable to recover it. 00:28:49.358 [2024-07-15 16:10:18.177141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.358 [2024-07-15 16:10:18.177156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:49.358 qpair failed and we were unable to recover it. 00:28:49.358 [2024-07-15 16:10:18.177395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.358 [2024-07-15 16:10:18.177427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:49.358 qpair failed and we were unable to recover it. 00:28:49.358 [2024-07-15 16:10:18.177642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.358 [2024-07-15 16:10:18.177673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:49.358 qpair failed and we were unable to recover it. 00:28:49.358 [2024-07-15 16:10:18.177883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.358 [2024-07-15 16:10:18.177898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:49.358 qpair failed and we were unable to recover it. 00:28:49.358 [2024-07-15 16:10:18.178012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.358 [2024-07-15 16:10:18.178043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:49.358 qpair failed and we were unable to recover it. 00:28:49.358 [2024-07-15 16:10:18.178252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.358 [2024-07-15 16:10:18.178284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:49.358 qpair failed and we were unable to recover it. 00:28:49.358 [2024-07-15 16:10:18.178524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.358 [2024-07-15 16:10:18.178555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:49.358 qpair failed and we were unable to recover it. 00:28:49.358 [2024-07-15 16:10:18.178753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.358 [2024-07-15 16:10:18.178784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:49.358 qpair failed and we were unable to recover it. 
00:28:49.358 [2024-07-15 16:10:18.178984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.358 [2024-07-15 16:10:18.179015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:49.358 qpair failed and we were unable to recover it. 00:28:49.358 [2024-07-15 16:10:18.179276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.358 [2024-07-15 16:10:18.179292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:49.358 qpair failed and we were unable to recover it. 00:28:49.358 [2024-07-15 16:10:18.179419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.358 [2024-07-15 16:10:18.179435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:49.358 qpair failed and we were unable to recover it. 00:28:49.358 [2024-07-15 16:10:18.179622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.358 [2024-07-15 16:10:18.179653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:49.358 qpair failed and we were unable to recover it. 00:28:49.358 [2024-07-15 16:10:18.179873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.358 [2024-07-15 16:10:18.179904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:49.358 qpair failed and we were unable to recover it. 00:28:49.358 [2024-07-15 16:10:18.180106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.358 [2024-07-15 16:10:18.180147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:49.358 qpair failed and we were unable to recover it. 00:28:49.358 [2024-07-15 16:10:18.180310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.358 [2024-07-15 16:10:18.180326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:49.358 qpair failed and we were unable to recover it. 00:28:49.358 [2024-07-15 16:10:18.180509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.358 [2024-07-15 16:10:18.180524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:49.358 qpair failed and we were unable to recover it. 00:28:49.358 [2024-07-15 16:10:18.180711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.358 [2024-07-15 16:10:18.180726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:49.358 qpair failed and we were unable to recover it. 00:28:49.358 [2024-07-15 16:10:18.180968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.358 [2024-07-15 16:10:18.180983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:49.358 qpair failed and we were unable to recover it. 
00:28:49.358 [2024-07-15 16:10:18.181158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.358 [2024-07-15 16:10:18.181173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:49.358 qpair failed and we were unable to recover it. 00:28:49.358 [2024-07-15 16:10:18.181296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.358 [2024-07-15 16:10:18.181312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:49.358 qpair failed and we were unable to recover it. 00:28:49.358 [2024-07-15 16:10:18.181575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.358 [2024-07-15 16:10:18.181607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:49.358 qpair failed and we were unable to recover it. 00:28:49.358 [2024-07-15 16:10:18.181820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.358 [2024-07-15 16:10:18.181851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:49.358 qpair failed and we were unable to recover it. 00:28:49.358 [2024-07-15 16:10:18.182060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.358 [2024-07-15 16:10:18.182090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:49.358 qpair failed and we were unable to recover it. 00:28:49.358 [2024-07-15 16:10:18.182231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.358 [2024-07-15 16:10:18.182263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:49.358 qpair failed and we were unable to recover it. 00:28:49.358 [2024-07-15 16:10:18.182468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.358 [2024-07-15 16:10:18.182500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:49.358 qpair failed and we were unable to recover it. 00:28:49.358 [2024-07-15 16:10:18.182771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.358 [2024-07-15 16:10:18.182813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:49.358 qpair failed and we were unable to recover it. 00:28:49.358 [2024-07-15 16:10:18.183003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.358 [2024-07-15 16:10:18.183019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:49.358 qpair failed and we were unable to recover it. 00:28:49.358 [2024-07-15 16:10:18.183194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.358 [2024-07-15 16:10:18.183210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:49.358 qpair failed and we were unable to recover it. 
00:28:49.358 [2024-07-15 16:10:18.183337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.358 [2024-07-15 16:10:18.183353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:49.358 qpair failed and we were unable to recover it. 00:28:49.358 [2024-07-15 16:10:18.183470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.358 [2024-07-15 16:10:18.183486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:49.358 qpair failed and we were unable to recover it. 00:28:49.358 [2024-07-15 16:10:18.183650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.358 [2024-07-15 16:10:18.183665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:49.358 qpair failed and we were unable to recover it. 00:28:49.358 [2024-07-15 16:10:18.183926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.358 [2024-07-15 16:10:18.183957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:49.358 qpair failed and we were unable to recover it. 00:28:49.358 [2024-07-15 16:10:18.184160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.358 [2024-07-15 16:10:18.184191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:49.358 qpair failed and we were unable to recover it. 00:28:49.358 [2024-07-15 16:10:18.184401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.358 [2024-07-15 16:10:18.184432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:49.358 qpair failed and we were unable to recover it. 00:28:49.358 [2024-07-15 16:10:18.184573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.358 [2024-07-15 16:10:18.184603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:49.358 qpair failed and we were unable to recover it. 00:28:49.358 [2024-07-15 16:10:18.184746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.358 [2024-07-15 16:10:18.184777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:49.358 qpair failed and we were unable to recover it. 00:28:49.358 [2024-07-15 16:10:18.184926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.358 [2024-07-15 16:10:18.184957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:49.358 qpair failed and we were unable to recover it. 00:28:49.358 [2024-07-15 16:10:18.185221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.358 [2024-07-15 16:10:18.185264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:49.359 qpair failed and we were unable to recover it. 
00:28:49.359 [2024-07-15 16:10:18.185461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.359 [2024-07-15 16:10:18.185492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:49.359 qpair failed and we were unable to recover it. 00:28:49.359 [2024-07-15 16:10:18.185695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.359 [2024-07-15 16:10:18.185726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:49.359 qpair failed and we were unable to recover it. 00:28:49.359 [2024-07-15 16:10:18.186007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.359 [2024-07-15 16:10:18.186022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:49.359 qpair failed and we were unable to recover it. 00:28:49.359 [2024-07-15 16:10:18.186230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.359 [2024-07-15 16:10:18.186247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:49.359 qpair failed and we were unable to recover it. 00:28:49.359 [2024-07-15 16:10:18.186354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.359 [2024-07-15 16:10:18.186369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:49.359 qpair failed and we were unable to recover it. 00:28:49.359 [2024-07-15 16:10:18.186483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.359 [2024-07-15 16:10:18.186499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:49.359 qpair failed and we were unable to recover it. 00:28:49.359 [2024-07-15 16:10:18.186614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.359 [2024-07-15 16:10:18.186630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:49.359 qpair failed and we were unable to recover it. 00:28:49.359 [2024-07-15 16:10:18.186744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.359 [2024-07-15 16:10:18.186759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:49.359 qpair failed and we were unable to recover it. 00:28:49.359 [2024-07-15 16:10:18.186884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.359 [2024-07-15 16:10:18.186899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:49.359 qpair failed and we were unable to recover it. 00:28:49.359 [2024-07-15 16:10:18.187139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.359 [2024-07-15 16:10:18.187170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:49.359 qpair failed and we were unable to recover it. 
00:28:49.359 [2024-07-15 16:10:18.187415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.359 [2024-07-15 16:10:18.187448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:49.359 qpair failed and we were unable to recover it. 00:28:49.359 [2024-07-15 16:10:18.187716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.359 [2024-07-15 16:10:18.187747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:49.359 qpair failed and we were unable to recover it. 00:28:49.359 [2024-07-15 16:10:18.187891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.359 [2024-07-15 16:10:18.187921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:49.359 qpair failed and we were unable to recover it. 00:28:49.359 [2024-07-15 16:10:18.188091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.359 [2024-07-15 16:10:18.188122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:49.359 qpair failed and we were unable to recover it. 00:28:49.359 [2024-07-15 16:10:18.188325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.359 [2024-07-15 16:10:18.188341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:49.359 qpair failed and we were unable to recover it. 00:28:49.359 [2024-07-15 16:10:18.188463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.359 [2024-07-15 16:10:18.188495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:49.359 qpair failed and we were unable to recover it. 00:28:49.359 [2024-07-15 16:10:18.188653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.359 [2024-07-15 16:10:18.188684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:49.359 qpair failed and we were unable to recover it. 00:28:49.359 [2024-07-15 16:10:18.188898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.359 [2024-07-15 16:10:18.188929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:49.359 qpair failed and we were unable to recover it. 00:28:49.359 [2024-07-15 16:10:18.189120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.359 [2024-07-15 16:10:18.189135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:49.359 qpair failed and we were unable to recover it. 00:28:49.359 [2024-07-15 16:10:18.189318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.359 [2024-07-15 16:10:18.189350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:49.359 qpair failed and we were unable to recover it. 
00:28:49.359 [2024-07-15 16:10:18.189617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.359 [2024-07-15 16:10:18.189648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:49.359 qpair failed and we were unable to recover it. 00:28:49.359 [2024-07-15 16:10:18.189805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.359 [2024-07-15 16:10:18.189836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:49.359 qpair failed and we were unable to recover it. 00:28:49.359 [2024-07-15 16:10:18.190093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.359 [2024-07-15 16:10:18.190109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:49.359 qpair failed and we were unable to recover it. 00:28:49.359 [2024-07-15 16:10:18.190308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.359 [2024-07-15 16:10:18.190340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:49.359 qpair failed and we were unable to recover it. 00:28:49.359 [2024-07-15 16:10:18.190605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.359 [2024-07-15 16:10:18.190635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:49.359 qpair failed and we were unable to recover it. 00:28:49.359 [2024-07-15 16:10:18.190923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.359 [2024-07-15 16:10:18.190963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:49.359 qpair failed and we were unable to recover it. 00:28:49.359 [2024-07-15 16:10:18.191090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.359 [2024-07-15 16:10:18.191106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:49.359 qpair failed and we were unable to recover it. 00:28:49.359 [2024-07-15 16:10:18.191281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.359 [2024-07-15 16:10:18.191323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:49.359 qpair failed and we were unable to recover it. 00:28:49.359 [2024-07-15 16:10:18.191484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.359 [2024-07-15 16:10:18.191515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:49.359 qpair failed and we were unable to recover it. 00:28:49.359 [2024-07-15 16:10:18.191790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.359 [2024-07-15 16:10:18.191821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:49.359 qpair failed and we were unable to recover it. 
00:28:49.359 [2024-07-15 16:10:18.192088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:49.359 [2024-07-15 16:10:18.192119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420
00:28:49.359 qpair failed and we were unable to recover it.
00:28:49.363 [the same three messages repeat continuously for tqpair=0x7ffa4c000b90, every connect() attempt failing with errno = 111, from 16:10:18.192088 through 16:10:18.228197]
00:28:49.363 [2024-07-15 16:10:18.228410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:49.363 [2024-07-15 16:10:18.228482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420
00:28:49.363 qpair failed and we were unable to recover it.
00:28:49.363 [2024-07-15 16:10:18.228776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:49.363 [2024-07-15 16:10:18.228846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420
00:28:49.363 qpair failed and we were unable to recover it.
00:28:49.364 [the same messages repeat for tqpair=0x1398ed0 from 16:10:18.228776 through 16:10:18.231366]
00:28:49.364 [2024-07-15 16:10:18.231591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:49.364 [2024-07-15 16:10:18.231624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420
00:28:49.364 qpair failed and we were unable to recover it.
00:28:49.365 [the same messages repeat for tqpair=0x7ffa4c000b90 from 16:10:18.231591 through 16:10:18.240636]
00:28:49.365 [2024-07-15 16:10:18.240838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.365 [2024-07-15 16:10:18.240869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:49.365 qpair failed and we were unable to recover it. 00:28:49.365 [2024-07-15 16:10:18.241095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.365 [2024-07-15 16:10:18.241139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420 00:28:49.365 qpair failed and we were unable to recover it. 00:28:49.365 [2024-07-15 16:10:18.241343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.365 [2024-07-15 16:10:18.241371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.365 qpair failed and we were unable to recover it. 00:28:49.365 [2024-07-15 16:10:18.241486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.365 [2024-07-15 16:10:18.241499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.365 qpair failed and we were unable to recover it. 00:28:49.365 [2024-07-15 16:10:18.241679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.365 [2024-07-15 16:10:18.241711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.365 qpair failed and we were unable to recover it. 00:28:49.365 [2024-07-15 16:10:18.241913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.365 [2024-07-15 16:10:18.241944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.365 qpair failed and we were unable to recover it. 00:28:49.365 [2024-07-15 16:10:18.242101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.365 [2024-07-15 16:10:18.242133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.365 qpair failed and we were unable to recover it. 00:28:49.365 [2024-07-15 16:10:18.242328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.365 [2024-07-15 16:10:18.242340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.365 qpair failed and we were unable to recover it. 00:28:49.365 [2024-07-15 16:10:18.242458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.365 [2024-07-15 16:10:18.242489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.365 qpair failed and we were unable to recover it. 00:28:49.365 [2024-07-15 16:10:18.242700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.365 [2024-07-15 16:10:18.242732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.365 qpair failed and we were unable to recover it. 
00:28:49.365 [2024-07-15 16:10:18.242948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.365 [2024-07-15 16:10:18.242979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.365 qpair failed and we were unable to recover it. 00:28:49.365 [2024-07-15 16:10:18.243176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.365 [2024-07-15 16:10:18.243188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.365 qpair failed and we were unable to recover it. 00:28:49.365 [2024-07-15 16:10:18.243368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.365 [2024-07-15 16:10:18.243400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.365 qpair failed and we were unable to recover it. 00:28:49.365 [2024-07-15 16:10:18.243612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.365 [2024-07-15 16:10:18.243644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.365 qpair failed and we were unable to recover it. 00:28:49.365 [2024-07-15 16:10:18.243862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.365 [2024-07-15 16:10:18.243902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.365 qpair failed and we were unable to recover it. 00:28:49.365 [2024-07-15 16:10:18.244051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.365 [2024-07-15 16:10:18.244063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.365 qpair failed and we were unable to recover it. 00:28:49.365 [2024-07-15 16:10:18.244274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.365 [2024-07-15 16:10:18.244306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.365 qpair failed and we were unable to recover it. 00:28:49.365 [2024-07-15 16:10:18.244569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.365 [2024-07-15 16:10:18.244599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.365 qpair failed and we were unable to recover it. 00:28:49.365 [2024-07-15 16:10:18.244873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.365 [2024-07-15 16:10:18.244904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.365 qpair failed and we were unable to recover it. 00:28:49.365 [2024-07-15 16:10:18.245122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.365 [2024-07-15 16:10:18.245133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.365 qpair failed and we were unable to recover it. 
00:28:49.365 [2024-07-15 16:10:18.245337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.365 [2024-07-15 16:10:18.245368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.365 qpair failed and we were unable to recover it. 00:28:49.365 [2024-07-15 16:10:18.245657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.365 [2024-07-15 16:10:18.245689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.365 qpair failed and we were unable to recover it. 00:28:49.365 [2024-07-15 16:10:18.245919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.365 [2024-07-15 16:10:18.245950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.365 qpair failed and we were unable to recover it. 00:28:49.365 [2024-07-15 16:10:18.246184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.365 [2024-07-15 16:10:18.246215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.365 qpair failed and we were unable to recover it. 00:28:49.365 [2024-07-15 16:10:18.246442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.365 [2024-07-15 16:10:18.246454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.365 qpair failed and we were unable to recover it. 00:28:49.365 [2024-07-15 16:10:18.246573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.365 [2024-07-15 16:10:18.246603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.365 qpair failed and we were unable to recover it. 00:28:49.365 [2024-07-15 16:10:18.246812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.365 [2024-07-15 16:10:18.246843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.365 qpair failed and we were unable to recover it. 00:28:49.365 [2024-07-15 16:10:18.246966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.365 [2024-07-15 16:10:18.246996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.365 qpair failed and we were unable to recover it. 00:28:49.365 [2024-07-15 16:10:18.247214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.365 [2024-07-15 16:10:18.247230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.366 qpair failed and we were unable to recover it. 00:28:49.366 [2024-07-15 16:10:18.247456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.366 [2024-07-15 16:10:18.247487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.366 qpair failed and we were unable to recover it. 
00:28:49.366 [2024-07-15 16:10:18.247715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.366 [2024-07-15 16:10:18.247745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.366 qpair failed and we were unable to recover it. 00:28:49.366 [2024-07-15 16:10:18.247950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.366 [2024-07-15 16:10:18.247981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.366 qpair failed and we were unable to recover it. 00:28:49.366 [2024-07-15 16:10:18.248247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.366 [2024-07-15 16:10:18.248280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.366 qpair failed and we were unable to recover it. 00:28:49.366 [2024-07-15 16:10:18.248633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.366 [2024-07-15 16:10:18.248664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.366 qpair failed and we were unable to recover it. 00:28:49.366 [2024-07-15 16:10:18.248934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.366 [2024-07-15 16:10:18.248965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.366 qpair failed and we were unable to recover it. 00:28:49.366 [2024-07-15 16:10:18.249076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.366 [2024-07-15 16:10:18.249106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.366 qpair failed and we were unable to recover it. 00:28:49.366 [2024-07-15 16:10:18.249300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.366 [2024-07-15 16:10:18.249311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.366 qpair failed and we were unable to recover it. 00:28:49.366 [2024-07-15 16:10:18.249517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.366 [2024-07-15 16:10:18.249548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.366 qpair failed and we were unable to recover it. 00:28:49.366 [2024-07-15 16:10:18.249845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.366 [2024-07-15 16:10:18.249876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.366 qpair failed and we were unable to recover it. 00:28:49.366 [2024-07-15 16:10:18.250037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.366 [2024-07-15 16:10:18.250048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.366 qpair failed and we were unable to recover it. 
00:28:49.366 [2024-07-15 16:10:18.250253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.366 [2024-07-15 16:10:18.250284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.366 qpair failed and we were unable to recover it. 00:28:49.366 [2024-07-15 16:10:18.250619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.366 [2024-07-15 16:10:18.250689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420 00:28:49.366 qpair failed and we were unable to recover it. 00:28:49.366 [2024-07-15 16:10:18.250873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.366 [2024-07-15 16:10:18.250908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420 00:28:49.366 qpair failed and we were unable to recover it. 00:28:49.366 [2024-07-15 16:10:18.251064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.366 [2024-07-15 16:10:18.251080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420 00:28:49.366 qpair failed and we were unable to recover it. 00:28:49.366 [2024-07-15 16:10:18.251244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.366 [2024-07-15 16:10:18.251260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420 00:28:49.366 qpair failed and we were unable to recover it. 00:28:49.366 [2024-07-15 16:10:18.251500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.366 [2024-07-15 16:10:18.251531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420 00:28:49.366 qpair failed and we were unable to recover it. 00:28:49.366 [2024-07-15 16:10:18.251692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.366 [2024-07-15 16:10:18.251723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420 00:28:49.366 qpair failed and we were unable to recover it. 00:28:49.366 [2024-07-15 16:10:18.251989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.366 [2024-07-15 16:10:18.252020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420 00:28:49.366 qpair failed and we were unable to recover it. 00:28:49.366 [2024-07-15 16:10:18.252250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.366 [2024-07-15 16:10:18.252283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420 00:28:49.366 qpair failed and we were unable to recover it. 00:28:49.366 [2024-07-15 16:10:18.252492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.366 [2024-07-15 16:10:18.252522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420 00:28:49.366 qpair failed and we were unable to recover it. 
00:28:49.366 [2024-07-15 16:10:18.252723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.366 [2024-07-15 16:10:18.252754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420 00:28:49.366 qpair failed and we were unable to recover it. 00:28:49.366 [2024-07-15 16:10:18.252962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.366 [2024-07-15 16:10:18.252994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420 00:28:49.366 qpair failed and we were unable to recover it. 00:28:49.366 [2024-07-15 16:10:18.253264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.366 [2024-07-15 16:10:18.253280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420 00:28:49.366 qpair failed and we were unable to recover it. 00:28:49.366 [2024-07-15 16:10:18.253405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.366 [2024-07-15 16:10:18.253418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.366 qpair failed and we were unable to recover it. 00:28:49.366 [2024-07-15 16:10:18.253511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.366 [2024-07-15 16:10:18.253527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.366 qpair failed and we were unable to recover it. 00:28:49.366 [2024-07-15 16:10:18.253687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.366 [2024-07-15 16:10:18.253699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.366 qpair failed and we were unable to recover it. 00:28:49.366 [2024-07-15 16:10:18.253807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.366 [2024-07-15 16:10:18.253838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.366 qpair failed and we were unable to recover it. 00:28:49.366 [2024-07-15 16:10:18.253981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.366 [2024-07-15 16:10:18.254012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.366 qpair failed and we were unable to recover it. 00:28:49.366 [2024-07-15 16:10:18.254157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.366 [2024-07-15 16:10:18.254188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.366 qpair failed and we were unable to recover it. 00:28:49.366 [2024-07-15 16:10:18.254417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.366 [2024-07-15 16:10:18.254429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.366 qpair failed and we were unable to recover it. 
00:28:49.366 [2024-07-15 16:10:18.254599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.366 [2024-07-15 16:10:18.254611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.366 qpair failed and we were unable to recover it. 00:28:49.366 [2024-07-15 16:10:18.254786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.366 [2024-07-15 16:10:18.254797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.366 qpair failed and we were unable to recover it. 00:28:49.366 [2024-07-15 16:10:18.254972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.366 [2024-07-15 16:10:18.255002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.366 qpair failed and we were unable to recover it. 00:28:49.366 [2024-07-15 16:10:18.255170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.366 [2024-07-15 16:10:18.255200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.366 qpair failed and we were unable to recover it. 00:28:49.366 [2024-07-15 16:10:18.255424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.366 [2024-07-15 16:10:18.255455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.366 qpair failed and we were unable to recover it. 00:28:49.366 [2024-07-15 16:10:18.255664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.366 [2024-07-15 16:10:18.255695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.366 qpair failed and we were unable to recover it. 00:28:49.366 [2024-07-15 16:10:18.255897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.366 [2024-07-15 16:10:18.255928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.366 qpair failed and we were unable to recover it. 00:28:49.366 [2024-07-15 16:10:18.256108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.366 [2024-07-15 16:10:18.256119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.366 qpair failed and we were unable to recover it. 00:28:49.366 [2024-07-15 16:10:18.256306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.367 [2024-07-15 16:10:18.256338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.367 qpair failed and we were unable to recover it. 00:28:49.367 [2024-07-15 16:10:18.256468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.367 [2024-07-15 16:10:18.256498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.367 qpair failed and we were unable to recover it. 
00:28:49.367 [2024-07-15 16:10:18.256710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.367 [2024-07-15 16:10:18.256740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.367 qpair failed and we were unable to recover it. 00:28:49.367 [2024-07-15 16:10:18.256966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.367 [2024-07-15 16:10:18.256996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.367 qpair failed and we were unable to recover it. 00:28:49.367 [2024-07-15 16:10:18.257197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.367 [2024-07-15 16:10:18.257209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.367 qpair failed and we were unable to recover it. 00:28:49.367 [2024-07-15 16:10:18.257398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.367 [2024-07-15 16:10:18.257430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.367 qpair failed and we were unable to recover it. 00:28:49.367 [2024-07-15 16:10:18.257638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.367 [2024-07-15 16:10:18.257669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.367 qpair failed and we were unable to recover it. 00:28:49.367 [2024-07-15 16:10:18.257816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.367 [2024-07-15 16:10:18.257847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.367 qpair failed and we were unable to recover it. 00:28:49.367 [2024-07-15 16:10:18.258112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.367 [2024-07-15 16:10:18.258142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.367 qpair failed and we were unable to recover it. 00:28:49.367 [2024-07-15 16:10:18.258295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.367 [2024-07-15 16:10:18.258327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.367 qpair failed and we were unable to recover it. 00:28:49.367 [2024-07-15 16:10:18.258473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.367 [2024-07-15 16:10:18.258503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.367 qpair failed and we were unable to recover it. 00:28:49.367 [2024-07-15 16:10:18.258645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.367 [2024-07-15 16:10:18.258676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.367 qpair failed and we were unable to recover it. 
00:28:49.367 [2024-07-15 16:10:18.258880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.367 [2024-07-15 16:10:18.258911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.367 qpair failed and we were unable to recover it. 00:28:49.367 [2024-07-15 16:10:18.259124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.367 [2024-07-15 16:10:18.259156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.367 qpair failed and we were unable to recover it. 00:28:49.367 [2024-07-15 16:10:18.259385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.367 [2024-07-15 16:10:18.259397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.367 qpair failed and we were unable to recover it. 00:28:49.367 [2024-07-15 16:10:18.259485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.367 [2024-07-15 16:10:18.259496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.367 qpair failed and we were unable to recover it. 00:28:49.367 [2024-07-15 16:10:18.259650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.367 [2024-07-15 16:10:18.259661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.367 qpair failed and we were unable to recover it. 00:28:49.367 [2024-07-15 16:10:18.259820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.367 [2024-07-15 16:10:18.259832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.367 qpair failed and we were unable to recover it. 00:28:49.367 [2024-07-15 16:10:18.260002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.367 [2024-07-15 16:10:18.260013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.367 qpair failed and we were unable to recover it. 00:28:49.367 [2024-07-15 16:10:18.260122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.367 [2024-07-15 16:10:18.260134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.367 qpair failed and we were unable to recover it. 00:28:49.367 [2024-07-15 16:10:18.260244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.367 [2024-07-15 16:10:18.260257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.367 qpair failed and we were unable to recover it. 00:28:49.367 [2024-07-15 16:10:18.260434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.367 [2024-07-15 16:10:18.260446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.367 qpair failed and we were unable to recover it. 
00:28:49.367 [2024-07-15 16:10:18.260557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.367 [2024-07-15 16:10:18.260568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.367 qpair failed and we were unable to recover it. 00:28:49.367 [2024-07-15 16:10:18.260671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.367 [2024-07-15 16:10:18.260684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.367 qpair failed and we were unable to recover it. 00:28:49.367 [2024-07-15 16:10:18.260782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.367 [2024-07-15 16:10:18.260793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.367 qpair failed and we were unable to recover it. 00:28:49.367 [2024-07-15 16:10:18.260895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.367 [2024-07-15 16:10:18.260906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.367 qpair failed and we were unable to recover it. 00:28:49.367 [2024-07-15 16:10:18.261113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.367 [2024-07-15 16:10:18.261150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.367 qpair failed and we were unable to recover it. 00:28:49.367 [2024-07-15 16:10:18.261302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.367 [2024-07-15 16:10:18.261333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.367 qpair failed and we were unable to recover it. 00:28:49.367 [2024-07-15 16:10:18.261494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.367 [2024-07-15 16:10:18.261525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.367 qpair failed and we were unable to recover it. 00:28:49.367 [2024-07-15 16:10:18.261669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.367 [2024-07-15 16:10:18.261699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.367 qpair failed and we were unable to recover it. 00:28:49.367 [2024-07-15 16:10:18.261911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.367 [2024-07-15 16:10:18.261942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.367 qpair failed and we were unable to recover it. 00:28:49.367 [2024-07-15 16:10:18.262089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.367 [2024-07-15 16:10:18.262100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.367 qpair failed and we were unable to recover it. 
00:28:49.367 [2024-07-15 16:10:18.262189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.367 [2024-07-15 16:10:18.262199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.367 qpair failed and we were unable to recover it. 00:28:49.367 [2024-07-15 16:10:18.262326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.367 [2024-07-15 16:10:18.262356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.367 qpair failed and we were unable to recover it. 00:28:49.367 [2024-07-15 16:10:18.262516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.367 [2024-07-15 16:10:18.262547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.367 qpair failed and we were unable to recover it. 00:28:49.367 [2024-07-15 16:10:18.262735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.367 [2024-07-15 16:10:18.262765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.367 qpair failed and we were unable to recover it. 00:28:49.367 [2024-07-15 16:10:18.263078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.367 [2024-07-15 16:10:18.263108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.367 qpair failed and we were unable to recover it. 00:28:49.367 [2024-07-15 16:10:18.263412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.367 [2024-07-15 16:10:18.263444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.367 qpair failed and we were unable to recover it. 00:28:49.367 [2024-07-15 16:10:18.263577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.367 [2024-07-15 16:10:18.263588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.367 qpair failed and we were unable to recover it. 00:28:49.367 [2024-07-15 16:10:18.263757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.368 [2024-07-15 16:10:18.263769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.368 qpair failed and we were unable to recover it. 00:28:49.368 [2024-07-15 16:10:18.263939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.368 [2024-07-15 16:10:18.263951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.368 qpair failed and we were unable to recover it. 00:28:49.368 [2024-07-15 16:10:18.264118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.368 [2024-07-15 16:10:18.264131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.368 qpair failed and we were unable to recover it. 
00:28:49.368 [2024-07-15 16:10:18.264366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.368 [2024-07-15 16:10:18.264399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.368 qpair failed and we were unable to recover it. 00:28:49.368 [2024-07-15 16:10:18.264601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.368 [2024-07-15 16:10:18.264632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.368 qpair failed and we were unable to recover it. 00:28:49.368 [2024-07-15 16:10:18.264841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.368 [2024-07-15 16:10:18.264872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.368 qpair failed and we were unable to recover it. 00:28:49.368 [2024-07-15 16:10:18.265082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.368 [2024-07-15 16:10:18.265113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.368 qpair failed and we were unable to recover it. 00:28:49.368 [2024-07-15 16:10:18.265372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.368 [2024-07-15 16:10:18.265383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.368 qpair failed and we were unable to recover it. 00:28:49.368 [2024-07-15 16:10:18.265493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.368 [2024-07-15 16:10:18.265505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.368 qpair failed and we were unable to recover it. 00:28:49.368 [2024-07-15 16:10:18.265755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.368 [2024-07-15 16:10:18.265786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.368 qpair failed and we were unable to recover it. 00:28:49.368 [2024-07-15 16:10:18.265926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.368 [2024-07-15 16:10:18.265956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.368 qpair failed and we were unable to recover it. 00:28:49.368 [2024-07-15 16:10:18.266199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.368 [2024-07-15 16:10:18.266236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.368 qpair failed and we were unable to recover it. 00:28:49.368 [2024-07-15 16:10:18.266434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.368 [2024-07-15 16:10:18.266464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.368 qpair failed and we were unable to recover it. 
00:28:49.368 [2024-07-15 16:10:18.266731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.368 [2024-07-15 16:10:18.266762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.368 qpair failed and we were unable to recover it. 00:28:49.368 [2024-07-15 16:10:18.266973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.368 [2024-07-15 16:10:18.267004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.368 qpair failed and we were unable to recover it. 00:28:49.368 [2024-07-15 16:10:18.267321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.368 [2024-07-15 16:10:18.267352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.368 qpair failed and we were unable to recover it. 00:28:49.368 [2024-07-15 16:10:18.267600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.368 [2024-07-15 16:10:18.267611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.368 qpair failed and we were unable to recover it. 00:28:49.368 [2024-07-15 16:10:18.267767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.368 [2024-07-15 16:10:18.267778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.368 qpair failed and we were unable to recover it. 00:28:49.368 [2024-07-15 16:10:18.268004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.368 [2024-07-15 16:10:18.268034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.368 qpair failed and we were unable to recover it. 00:28:49.368 [2024-07-15 16:10:18.268267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.368 [2024-07-15 16:10:18.268300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.368 qpair failed and we were unable to recover it. 00:28:49.368 [2024-07-15 16:10:18.268439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.368 [2024-07-15 16:10:18.268450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.368 qpair failed and we were unable to recover it. 00:28:49.368 [2024-07-15 16:10:18.268603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.368 [2024-07-15 16:10:18.268614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.368 qpair failed and we were unable to recover it. 00:28:49.368 [2024-07-15 16:10:18.268765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.368 [2024-07-15 16:10:18.268776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.368 qpair failed and we were unable to recover it. 
00:28:49.368 [2024-07-15 16:10:18.269026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:49.368 [2024-07-15 16:10:18.269057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420
00:28:49.368 qpair failed and we were unable to recover it.
[... the same three-record sequence (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it.) repeats verbatim for every reconnection attempt from 2024-07-15 16:10:18.269 through 16:10:18.311 ...]
00:28:49.651 [2024-07-15 16:10:18.311323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.651 [2024-07-15 16:10:18.311359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.651 qpair failed and we were unable to recover it. 00:28:49.651 [2024-07-15 16:10:18.311483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.651 [2024-07-15 16:10:18.311495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.651 qpair failed and we were unable to recover it. 00:28:49.651 [2024-07-15 16:10:18.311582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.651 [2024-07-15 16:10:18.311593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.651 qpair failed and we were unable to recover it. 00:28:49.651 [2024-07-15 16:10:18.311723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.651 [2024-07-15 16:10:18.311734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.651 qpair failed and we were unable to recover it. 00:28:49.651 [2024-07-15 16:10:18.311824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.651 [2024-07-15 16:10:18.311834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.651 qpair failed and we were unable to recover it. 00:28:49.651 [2024-07-15 16:10:18.312086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.651 [2024-07-15 16:10:18.312116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.651 qpair failed and we were unable to recover it. 00:28:49.651 [2024-07-15 16:10:18.312263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.651 [2024-07-15 16:10:18.312295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.651 qpair failed and we were unable to recover it. 00:28:49.651 [2024-07-15 16:10:18.312450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.651 [2024-07-15 16:10:18.312481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.651 qpair failed and we were unable to recover it. 00:28:49.651 [2024-07-15 16:10:18.312684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.651 [2024-07-15 16:10:18.312695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.651 qpair failed and we were unable to recover it. 00:28:49.651 [2024-07-15 16:10:18.312898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.651 [2024-07-15 16:10:18.312929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.651 qpair failed and we were unable to recover it. 
00:28:49.651 [2024-07-15 16:10:18.313127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.651 [2024-07-15 16:10:18.313157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.651 qpair failed and we were unable to recover it. 00:28:49.651 [2024-07-15 16:10:18.313280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.651 [2024-07-15 16:10:18.313312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.651 qpair failed and we were unable to recover it. 00:28:49.651 [2024-07-15 16:10:18.313492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.651 [2024-07-15 16:10:18.313504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.651 qpair failed and we were unable to recover it. 00:28:49.651 [2024-07-15 16:10:18.313604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.651 [2024-07-15 16:10:18.313616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.651 qpair failed and we were unable to recover it. 00:28:49.652 [2024-07-15 16:10:18.313737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.652 [2024-07-15 16:10:18.313748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.652 qpair failed and we were unable to recover it. 00:28:49.652 [2024-07-15 16:10:18.313976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.652 [2024-07-15 16:10:18.314007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.652 qpair failed and we were unable to recover it. 00:28:49.652 [2024-07-15 16:10:18.314145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.652 [2024-07-15 16:10:18.314176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.652 qpair failed and we were unable to recover it. 00:28:49.652 [2024-07-15 16:10:18.314394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.652 [2024-07-15 16:10:18.314425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.652 qpair failed and we were unable to recover it. 00:28:49.652 [2024-07-15 16:10:18.314623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.652 [2024-07-15 16:10:18.314653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.652 qpair failed and we were unable to recover it. 00:28:49.652 [2024-07-15 16:10:18.314918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.652 [2024-07-15 16:10:18.314948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.652 qpair failed and we were unable to recover it. 
00:28:49.652 [2024-07-15 16:10:18.315152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.652 [2024-07-15 16:10:18.315183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.652 qpair failed and we were unable to recover it. 00:28:49.652 [2024-07-15 16:10:18.315356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.652 [2024-07-15 16:10:18.315394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.652 qpair failed and we were unable to recover it. 00:28:49.652 [2024-07-15 16:10:18.315536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.652 [2024-07-15 16:10:18.315547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.652 qpair failed and we were unable to recover it. 00:28:49.652 [2024-07-15 16:10:18.315778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.652 [2024-07-15 16:10:18.315809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.652 qpair failed and we were unable to recover it. 00:28:49.652 [2024-07-15 16:10:18.316045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.652 [2024-07-15 16:10:18.316076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.652 qpair failed and we were unable to recover it. 00:28:49.652 [2024-07-15 16:10:18.316234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.652 [2024-07-15 16:10:18.316267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.652 qpair failed and we were unable to recover it. 00:28:49.652 [2024-07-15 16:10:18.316468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.652 [2024-07-15 16:10:18.316481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.652 qpair failed and we were unable to recover it. 00:28:49.652 [2024-07-15 16:10:18.316656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.652 [2024-07-15 16:10:18.316686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.652 qpair failed and we were unable to recover it. 00:28:49.652 [2024-07-15 16:10:18.316900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.652 [2024-07-15 16:10:18.316931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.652 qpair failed and we were unable to recover it. 00:28:49.652 [2024-07-15 16:10:18.317194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.652 [2024-07-15 16:10:18.317233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.652 qpair failed and we were unable to recover it. 
00:28:49.652 [2024-07-15 16:10:18.317455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.652 [2024-07-15 16:10:18.317485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.652 qpair failed and we were unable to recover it. 00:28:49.652 [2024-07-15 16:10:18.317699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.652 [2024-07-15 16:10:18.317730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.652 qpair failed and we were unable to recover it. 00:28:49.652 [2024-07-15 16:10:18.317952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.652 [2024-07-15 16:10:18.317983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.652 qpair failed and we were unable to recover it. 00:28:49.652 [2024-07-15 16:10:18.318195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.652 [2024-07-15 16:10:18.318234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.652 qpair failed and we were unable to recover it. 00:28:49.652 [2024-07-15 16:10:18.318447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.652 [2024-07-15 16:10:18.318477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.652 qpair failed and we were unable to recover it. 00:28:49.652 [2024-07-15 16:10:18.318751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.652 [2024-07-15 16:10:18.318763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.652 qpair failed and we were unable to recover it. 00:28:49.652 [2024-07-15 16:10:18.318949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.652 [2024-07-15 16:10:18.318961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.652 qpair failed and we were unable to recover it. 00:28:49.652 [2024-07-15 16:10:18.319181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.652 [2024-07-15 16:10:18.319192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.652 qpair failed and we were unable to recover it. 00:28:49.652 [2024-07-15 16:10:18.319349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.652 [2024-07-15 16:10:18.319385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.652 qpair failed and we were unable to recover it. 00:28:49.652 [2024-07-15 16:10:18.319531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.652 [2024-07-15 16:10:18.319562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.652 qpair failed and we were unable to recover it. 
00:28:49.652 [2024-07-15 16:10:18.319850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.652 [2024-07-15 16:10:18.319881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.652 qpair failed and we were unable to recover it. 00:28:49.652 [2024-07-15 16:10:18.320099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.652 [2024-07-15 16:10:18.320130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.652 qpair failed and we were unable to recover it. 00:28:49.652 [2024-07-15 16:10:18.320269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.652 [2024-07-15 16:10:18.320281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.652 qpair failed and we were unable to recover it. 00:28:49.652 [2024-07-15 16:10:18.320462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.652 [2024-07-15 16:10:18.320493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.652 qpair failed and we were unable to recover it. 00:28:49.652 [2024-07-15 16:10:18.320699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.652 [2024-07-15 16:10:18.320730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.652 qpair failed and we were unable to recover it. 00:28:49.652 [2024-07-15 16:10:18.320930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.652 [2024-07-15 16:10:18.320961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.652 qpair failed and we were unable to recover it. 00:28:49.652 [2024-07-15 16:10:18.321100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.652 [2024-07-15 16:10:18.321111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.652 qpair failed and we were unable to recover it. 00:28:49.652 [2024-07-15 16:10:18.321270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.652 [2024-07-15 16:10:18.321282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.652 qpair failed and we were unable to recover it. 00:28:49.652 [2024-07-15 16:10:18.321382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.652 [2024-07-15 16:10:18.321392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.652 qpair failed and we were unable to recover it. 00:28:49.652 [2024-07-15 16:10:18.321602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.652 [2024-07-15 16:10:18.321633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.652 qpair failed and we were unable to recover it. 
00:28:49.652 [2024-07-15 16:10:18.321894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.652 [2024-07-15 16:10:18.321925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.652 qpair failed and we were unable to recover it. 00:28:49.652 [2024-07-15 16:10:18.322054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.652 [2024-07-15 16:10:18.322085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.652 qpair failed and we were unable to recover it. 00:28:49.652 [2024-07-15 16:10:18.322299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.652 [2024-07-15 16:10:18.322311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.652 qpair failed and we were unable to recover it. 00:28:49.652 [2024-07-15 16:10:18.322450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.652 [2024-07-15 16:10:18.322461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.652 qpair failed and we were unable to recover it. 00:28:49.652 [2024-07-15 16:10:18.322556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.653 [2024-07-15 16:10:18.322566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.653 qpair failed and we were unable to recover it. 00:28:49.653 [2024-07-15 16:10:18.322691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.653 [2024-07-15 16:10:18.322703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.653 qpair failed and we were unable to recover it. 00:28:49.653 [2024-07-15 16:10:18.322971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.653 [2024-07-15 16:10:18.323002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.653 qpair failed and we were unable to recover it. 00:28:49.653 [2024-07-15 16:10:18.323131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.653 [2024-07-15 16:10:18.323161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.653 qpair failed and we were unable to recover it. 00:28:49.653 [2024-07-15 16:10:18.323321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.653 [2024-07-15 16:10:18.323354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.653 qpair failed and we were unable to recover it. 00:28:49.653 [2024-07-15 16:10:18.323618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.653 [2024-07-15 16:10:18.323648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.653 qpair failed and we were unable to recover it. 
00:28:49.653 [2024-07-15 16:10:18.323892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.653 [2024-07-15 16:10:18.323922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.653 qpair failed and we were unable to recover it. 00:28:49.653 [2024-07-15 16:10:18.324186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.653 [2024-07-15 16:10:18.324222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.653 qpair failed and we were unable to recover it. 00:28:49.653 [2024-07-15 16:10:18.324525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.653 [2024-07-15 16:10:18.324556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.653 qpair failed and we were unable to recover it. 00:28:49.653 [2024-07-15 16:10:18.324714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.653 [2024-07-15 16:10:18.324745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.653 qpair failed and we were unable to recover it. 00:28:49.653 [2024-07-15 16:10:18.324905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.653 [2024-07-15 16:10:18.324935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.653 qpair failed and we were unable to recover it. 00:28:49.653 [2024-07-15 16:10:18.325135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.653 [2024-07-15 16:10:18.325166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.653 qpair failed and we were unable to recover it. 00:28:49.653 [2024-07-15 16:10:18.325398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.653 [2024-07-15 16:10:18.325430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.653 qpair failed and we were unable to recover it. 00:28:49.653 [2024-07-15 16:10:18.325577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.653 [2024-07-15 16:10:18.325607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.653 qpair failed and we were unable to recover it. 00:28:49.653 [2024-07-15 16:10:18.325817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.653 [2024-07-15 16:10:18.325847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.653 qpair failed and we were unable to recover it. 00:28:49.653 [2024-07-15 16:10:18.325996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.653 [2024-07-15 16:10:18.326027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.653 qpair failed and we were unable to recover it. 
00:28:49.653 [2024-07-15 16:10:18.326253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.653 [2024-07-15 16:10:18.326294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.653 qpair failed and we were unable to recover it. 00:28:49.653 [2024-07-15 16:10:18.326440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.653 [2024-07-15 16:10:18.326452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.653 qpair failed and we were unable to recover it. 00:28:49.653 [2024-07-15 16:10:18.326676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.653 [2024-07-15 16:10:18.326707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.653 qpair failed and we were unable to recover it. 00:28:49.653 [2024-07-15 16:10:18.326833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.653 [2024-07-15 16:10:18.326864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.653 qpair failed and we were unable to recover it. 00:28:49.653 [2024-07-15 16:10:18.327152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.653 [2024-07-15 16:10:18.327183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.653 qpair failed and we were unable to recover it. 00:28:49.653 [2024-07-15 16:10:18.327391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.653 [2024-07-15 16:10:18.327404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.653 qpair failed and we were unable to recover it. 00:28:49.653 [2024-07-15 16:10:18.327521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.653 [2024-07-15 16:10:18.327552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.653 qpair failed and we were unable to recover it. 00:28:49.653 [2024-07-15 16:10:18.327769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.653 [2024-07-15 16:10:18.327799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.653 qpair failed and we were unable to recover it. 00:28:49.653 [2024-07-15 16:10:18.327976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.653 [2024-07-15 16:10:18.328007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.653 qpair failed and we were unable to recover it. 00:28:49.653 [2024-07-15 16:10:18.328212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.653 [2024-07-15 16:10:18.328276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.653 qpair failed and we were unable to recover it. 
00:28:49.653 [2024-07-15 16:10:18.328498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.653 [2024-07-15 16:10:18.328510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.653 qpair failed and we were unable to recover it. 00:28:49.653 [2024-07-15 16:10:18.328736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.653 [2024-07-15 16:10:18.328749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.653 qpair failed and we were unable to recover it. 00:28:49.653 [2024-07-15 16:10:18.328969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.653 [2024-07-15 16:10:18.328980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.653 qpair failed and we were unable to recover it. 00:28:49.653 [2024-07-15 16:10:18.329231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.653 [2024-07-15 16:10:18.329274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.653 qpair failed and we were unable to recover it. 00:28:49.653 [2024-07-15 16:10:18.329424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.653 [2024-07-15 16:10:18.329454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.653 qpair failed and we were unable to recover it. 00:28:49.653 [2024-07-15 16:10:18.329685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.653 [2024-07-15 16:10:18.329716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.653 qpair failed and we were unable to recover it. 00:28:49.653 [2024-07-15 16:10:18.329980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.653 [2024-07-15 16:10:18.330012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.653 qpair failed and we were unable to recover it. 00:28:49.653 [2024-07-15 16:10:18.330158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.653 [2024-07-15 16:10:18.330189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.653 qpair failed and we were unable to recover it. 00:28:49.653 [2024-07-15 16:10:18.330399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.653 [2024-07-15 16:10:18.330431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.653 qpair failed and we were unable to recover it. 00:28:49.653 [2024-07-15 16:10:18.330636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.653 [2024-07-15 16:10:18.330666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.653 qpair failed and we were unable to recover it. 
00:28:49.653 [2024-07-15 16:10:18.330805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.653 [2024-07-15 16:10:18.330836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.653 qpair failed and we were unable to recover it. 00:28:49.653 [2024-07-15 16:10:18.330979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.653 [2024-07-15 16:10:18.331009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.653 qpair failed and we were unable to recover it. 00:28:49.653 [2024-07-15 16:10:18.331273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.653 [2024-07-15 16:10:18.331316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.653 qpair failed and we were unable to recover it. 00:28:49.653 [2024-07-15 16:10:18.331493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.653 [2024-07-15 16:10:18.331505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.653 qpair failed and we were unable to recover it. 00:28:49.653 [2024-07-15 16:10:18.331601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.653 [2024-07-15 16:10:18.331612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.653 qpair failed and we were unable to recover it. 00:28:49.653 [2024-07-15 16:10:18.331775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.654 [2024-07-15 16:10:18.331787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.654 qpair failed and we were unable to recover it. 00:28:49.654 [2024-07-15 16:10:18.331934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.654 [2024-07-15 16:10:18.331946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.654 qpair failed and we were unable to recover it. 00:28:49.654 [2024-07-15 16:10:18.332106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.654 [2024-07-15 16:10:18.332136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.654 qpair failed and we were unable to recover it. 00:28:49.654 [2024-07-15 16:10:18.332351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.654 [2024-07-15 16:10:18.332383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.654 qpair failed and we were unable to recover it. 00:28:49.654 [2024-07-15 16:10:18.332532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.654 [2024-07-15 16:10:18.332563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.654 qpair failed and we were unable to recover it. 
00:28:49.654 [2024-07-15 16:10:18.332793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.654 [2024-07-15 16:10:18.332805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.654 qpair failed and we were unable to recover it. 00:28:49.654 [2024-07-15 16:10:18.332985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.654 [2024-07-15 16:10:18.332999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.654 qpair failed and we were unable to recover it. 00:28:49.654 [2024-07-15 16:10:18.333149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.654 [2024-07-15 16:10:18.333161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.654 qpair failed and we were unable to recover it. 00:28:49.654 [2024-07-15 16:10:18.333408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.654 [2024-07-15 16:10:18.333439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.654 qpair failed and we were unable to recover it. 00:28:49.654 [2024-07-15 16:10:18.333652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.654 [2024-07-15 16:10:18.333682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.654 qpair failed and we were unable to recover it. 00:28:49.654 [2024-07-15 16:10:18.333857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.654 [2024-07-15 16:10:18.333888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.654 qpair failed and we were unable to recover it. 00:28:49.654 [2024-07-15 16:10:18.334169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.654 [2024-07-15 16:10:18.334200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.654 qpair failed and we were unable to recover it. 00:28:49.654 [2024-07-15 16:10:18.334343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.654 [2024-07-15 16:10:18.334375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.654 qpair failed and we were unable to recover it. 00:28:49.654 [2024-07-15 16:10:18.334613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.654 [2024-07-15 16:10:18.334644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.654 qpair failed and we were unable to recover it. 00:28:49.654 [2024-07-15 16:10:18.334796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.654 [2024-07-15 16:10:18.334827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.654 qpair failed and we were unable to recover it. 
00:28:49.654 [2024-07-15 16:10:18.335025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.654 [2024-07-15 16:10:18.335056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.654 qpair failed and we were unable to recover it. 00:28:49.654 [2024-07-15 16:10:18.335210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.654 [2024-07-15 16:10:18.335221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.654 qpair failed and we were unable to recover it. 00:28:49.654 [2024-07-15 16:10:18.335387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.654 [2024-07-15 16:10:18.335416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.654 qpair failed and we were unable to recover it. 00:28:49.654 [2024-07-15 16:10:18.335632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.654 [2024-07-15 16:10:18.335662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.654 qpair failed and we were unable to recover it. 00:28:49.654 [2024-07-15 16:10:18.335901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.654 [2024-07-15 16:10:18.335931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.654 qpair failed and we were unable to recover it. 00:28:49.654 [2024-07-15 16:10:18.336267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.654 [2024-07-15 16:10:18.336300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.654 qpair failed and we were unable to recover it. 00:28:49.654 [2024-07-15 16:10:18.336548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.654 [2024-07-15 16:10:18.336578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.654 qpair failed and we were unable to recover it. 00:28:49.654 [2024-07-15 16:10:18.336888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.654 [2024-07-15 16:10:18.336918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.654 qpair failed and we were unable to recover it. 00:28:49.654 [2024-07-15 16:10:18.337073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.654 [2024-07-15 16:10:18.337103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.654 qpair failed and we were unable to recover it. 00:28:49.654 [2024-07-15 16:10:18.337300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.654 [2024-07-15 16:10:18.337331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.654 qpair failed and we were unable to recover it. 
00:28:49.654 [2024-07-15 16:10:18.337593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.654 [2024-07-15 16:10:18.337623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.654 qpair failed and we were unable to recover it. 00:28:49.654 [2024-07-15 16:10:18.337845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.654 [2024-07-15 16:10:18.337876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.654 qpair failed and we were unable to recover it. 00:28:49.654 [2024-07-15 16:10:18.338082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.654 [2024-07-15 16:10:18.338113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.654 qpair failed and we were unable to recover it. 00:28:49.654 [2024-07-15 16:10:18.338397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.654 [2024-07-15 16:10:18.338409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.654 qpair failed and we were unable to recover it. 00:28:49.654 [2024-07-15 16:10:18.338582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.654 [2024-07-15 16:10:18.338612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.654 qpair failed and we were unable to recover it. 00:28:49.654 [2024-07-15 16:10:18.338877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.654 [2024-07-15 16:10:18.338907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.654 qpair failed and we were unable to recover it. 00:28:49.654 [2024-07-15 16:10:18.339114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.654 [2024-07-15 16:10:18.339146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.654 qpair failed and we were unable to recover it. 00:28:49.654 [2024-07-15 16:10:18.339361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.654 [2024-07-15 16:10:18.339392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.654 qpair failed and we were unable to recover it. 00:28:49.654 [2024-07-15 16:10:18.339597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.654 [2024-07-15 16:10:18.339628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.654 qpair failed and we were unable to recover it. 00:28:49.654 [2024-07-15 16:10:18.339784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.654 [2024-07-15 16:10:18.339795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.654 qpair failed and we were unable to recover it. 
00:28:49.654 [2024-07-15 16:10:18.339966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.654 [2024-07-15 16:10:18.339997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.654 qpair failed and we were unable to recover it.
[... the same three-part failure — posix.c:1038:posix_sock_create: connect() failed, errno = 111, followed by nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420, followed by "qpair failed and we were unable to recover it." — repeats continuously with timestamps from 16:10:18.339966 through 16:10:18.388036 ...]
00:28:49.660 [2024-07-15 16:10:18.388024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.660 [2024-07-15 16:10:18.388036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.660 qpair failed and we were unable to recover it.
00:28:49.660 [2024-07-15 16:10:18.388259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.660 [2024-07-15 16:10:18.388270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.660 qpair failed and we were unable to recover it. 00:28:49.660 [2024-07-15 16:10:18.388444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.660 [2024-07-15 16:10:18.388456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.660 qpair failed and we were unable to recover it. 00:28:49.660 [2024-07-15 16:10:18.388616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.660 [2024-07-15 16:10:18.388647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.660 qpair failed and we were unable to recover it. 00:28:49.660 [2024-07-15 16:10:18.388923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.660 [2024-07-15 16:10:18.388954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.660 qpair failed and we were unable to recover it. 00:28:49.660 [2024-07-15 16:10:18.389223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.660 [2024-07-15 16:10:18.389285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.660 qpair failed and we were unable to recover it. 00:28:49.660 [2024-07-15 16:10:18.389503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.660 [2024-07-15 16:10:18.389515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.660 qpair failed and we were unable to recover it. 00:28:49.660 [2024-07-15 16:10:18.389692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.660 [2024-07-15 16:10:18.389723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.660 qpair failed and we were unable to recover it. 00:28:49.660 [2024-07-15 16:10:18.389996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.660 [2024-07-15 16:10:18.390026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.660 qpair failed and we were unable to recover it. 00:28:49.660 [2024-07-15 16:10:18.390191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.660 [2024-07-15 16:10:18.390221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.660 qpair failed and we were unable to recover it. 00:28:49.660 [2024-07-15 16:10:18.390471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.660 [2024-07-15 16:10:18.390503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.660 qpair failed and we were unable to recover it. 
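Editor's note: on Linux, errno = 111 is ECONNREFUSED, meaning the TCP SYN reached 10.0.0.2 but nothing was listening on port 4420 (the default NVMe-oF port). The standalone probe below reproduces that failure mode; it is a minimal sketch for illustration, not SPDK's posix_sock_create(), and its structure is this sketch's own.

    /* Probe a TCP endpoint the way the failing connect path does: a plain
     * connect() to 10.0.0.2:4420. With no listener bound to the port this
     * fails with errno = ECONNREFUSED (111), matching the log above. */
    #include <arpa/inet.h>
    #include <errno.h>
    #include <netinet/in.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main(void)
    {
        struct sockaddr_in sa = { .sin_family = AF_INET, .sin_port = htons(4420) };
        inet_pton(AF_INET, "10.0.0.2", &sa.sin_addr);

        int fd = socket(AF_INET, SOCK_STREAM, 0);
        if (fd < 0) { perror("socket"); return 1; }

        if (connect(fd, (struct sockaddr *)&sa, sizeof(sa)) != 0) {
            /* errno == ECONNREFUSED (111): host reachable, port closed --
             * the exact pattern repeated throughout this log. */
            fprintf(stderr, "connect() failed, errno = %d (%s)\n",
                    errno, strerror(errno));
        }
        close(fd);
        return 0;
    }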
00:28:49.660 [2024-07-15 16:10:18.390701] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a7000 is same with the state(5) to be set
00:28:49.660 [2024-07-15 16:10:18.390982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:49.660 [2024-07-15 16:10:18.391018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420
00:28:49.660 qpair failed and we were unable to recover it.
00:28:49.660 [2024-07-15 16:10:18.391259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:49.660 [2024-07-15 16:10:18.391295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420
00:28:49.660 qpair failed and we were unable to recover it.
00:28:49.660 [2024-07-15 16:10:18.391504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:49.660 [2024-07-15 16:10:18.391539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420
00:28:49.660 qpair failed and we were unable to recover it.
[... the same failure pattern then resumes for tqpair=0x7ffa54000b90 from 16:10:18.391 through 16:10:18.392; duplicate entries omitted ...]
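Editor's note: the one non-connect message above comes from the qpair receive-state machine; its wording suggests a guard that refuses to re-set the state the qpair is already in, with "state(5)" being the numeric value of the enum member. A hypothetical reconstruction of such a guard follows; every name in it is an assumption, and only the log string is taken from the output above (the real function is nvme_tcp_qpair_set_recv_state() in nvme_tcp.c).

    #include <stdio.h>

    /* Assumed state enumeration; only the numbering convention matters here. */
    enum recv_state { STATE_0, STATE_1, STATE_2, STATE_3, STATE_4, STATE_5 };

    struct tcp_qpair { enum recv_state recv_state; };

    static void set_recv_state(struct tcp_qpair *tqpair, enum recv_state state)
    {
        if (tqpair->recv_state == state) {
            /* Redundant transition: log and bail out, as the message implies. */
            fprintf(stderr, "The recv state of tqpair=%p is same with the state(%d) to be set\n",
                    (void *)tqpair, (int)state);
            return;
        }
        tqpair->recv_state = state;
    }

    int main(void)
    {
        struct tcp_qpair q = { .recv_state = STATE_0 };
        set_recv_state(&q, STATE_5);  /* normal transition, silent */
        set_recv_state(&q, STATE_5);  /* same state again -> warning fires */
        return 0;
    }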
00:28:49.660 [2024-07-15 16:10:18.393168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:49.660 [2024-07-15 16:10:18.393198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420
00:28:49.660 qpair failed and we were unable to recover it.
00:28:49.660 [2024-07-15 16:10:18.393456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:49.660 [2024-07-15 16:10:18.393475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420
00:28:49.660 qpair failed and we were unable to recover it.
[... the identical connect() failure (errno = 111) for tqpair=0x7ffa5c000b90 at addr=10.0.0.2, port=4420 repeats continuously from 16:10:18.393 through 16:10:18.397; duplicate entries omitted ...]
00:28:49.661 [2024-07-15 16:10:18.398246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.661 [2024-07-15 16:10:18.398281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420 00:28:49.661 qpair failed and we were unable to recover it. 00:28:49.661 [2024-07-15 16:10:18.398571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.661 [2024-07-15 16:10:18.398586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420 00:28:49.661 qpair failed and we were unable to recover it. 00:28:49.661 [2024-07-15 16:10:18.398897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.661 [2024-07-15 16:10:18.398912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.661 qpair failed and we were unable to recover it. 00:28:49.661 [2024-07-15 16:10:18.399164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.661 [2024-07-15 16:10:18.399176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.661 qpair failed and we were unable to recover it. 00:28:49.661 [2024-07-15 16:10:18.399352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.661 [2024-07-15 16:10:18.399365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.661 qpair failed and we were unable to recover it. 00:28:49.661 [2024-07-15 16:10:18.399547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.661 [2024-07-15 16:10:18.399559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.661 qpair failed and we were unable to recover it. 00:28:49.661 [2024-07-15 16:10:18.399799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.661 [2024-07-15 16:10:18.399830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.661 qpair failed and we were unable to recover it. 00:28:49.661 [2024-07-15 16:10:18.400126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.661 [2024-07-15 16:10:18.400158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.661 qpair failed and we were unable to recover it. 00:28:49.661 [2024-07-15 16:10:18.400345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.661 [2024-07-15 16:10:18.400392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.661 qpair failed and we were unable to recover it. 00:28:49.661 [2024-07-15 16:10:18.400661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.661 [2024-07-15 16:10:18.400692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.661 qpair failed and we were unable to recover it. 
00:28:49.661 [2024-07-15 16:10:18.400963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.661 [2024-07-15 16:10:18.400974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.661 qpair failed and we were unable to recover it. 00:28:49.661 [2024-07-15 16:10:18.401132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.661 [2024-07-15 16:10:18.401145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.661 qpair failed and we were unable to recover it. 00:28:49.661 [2024-07-15 16:10:18.401319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.661 [2024-07-15 16:10:18.401330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.661 qpair failed and we were unable to recover it. 00:28:49.661 [2024-07-15 16:10:18.401499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.661 [2024-07-15 16:10:18.401530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.661 qpair failed and we were unable to recover it. 00:28:49.661 [2024-07-15 16:10:18.401822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.661 [2024-07-15 16:10:18.401852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.661 qpair failed and we were unable to recover it. 00:28:49.661 [2024-07-15 16:10:18.402017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.661 [2024-07-15 16:10:18.402053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.661 qpair failed and we were unable to recover it. 00:28:49.661 [2024-07-15 16:10:18.402313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.661 [2024-07-15 16:10:18.402344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.661 qpair failed and we were unable to recover it. 00:28:49.661 [2024-07-15 16:10:18.402492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.661 [2024-07-15 16:10:18.402503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.661 qpair failed and we were unable to recover it. 00:28:49.661 [2024-07-15 16:10:18.402742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.661 [2024-07-15 16:10:18.402773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.661 qpair failed and we were unable to recover it. 00:28:49.661 [2024-07-15 16:10:18.403012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.661 [2024-07-15 16:10:18.403042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.661 qpair failed and we were unable to recover it. 
00:28:49.661 [2024-07-15 16:10:18.403338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.661 [2024-07-15 16:10:18.403369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.661 qpair failed and we were unable to recover it. 00:28:49.661 [2024-07-15 16:10:18.403572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.661 [2024-07-15 16:10:18.403584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.661 qpair failed and we were unable to recover it. 00:28:49.661 [2024-07-15 16:10:18.403747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.661 [2024-07-15 16:10:18.403777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.661 qpair failed and we were unable to recover it. 00:28:49.661 [2024-07-15 16:10:18.404071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.661 [2024-07-15 16:10:18.404102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.661 qpair failed and we were unable to recover it. 00:28:49.661 [2024-07-15 16:10:18.404314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.661 [2024-07-15 16:10:18.404345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.661 qpair failed and we were unable to recover it. 00:28:49.661 [2024-07-15 16:10:18.404564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.661 [2024-07-15 16:10:18.404594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.661 qpair failed and we were unable to recover it. 00:28:49.661 [2024-07-15 16:10:18.404889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.661 [2024-07-15 16:10:18.404919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.661 qpair failed and we were unable to recover it. 00:28:49.661 [2024-07-15 16:10:18.405117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.661 [2024-07-15 16:10:18.405148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.661 qpair failed and we were unable to recover it. 00:28:49.661 [2024-07-15 16:10:18.405417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.662 [2024-07-15 16:10:18.405449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.662 qpair failed and we were unable to recover it. 00:28:49.662 [2024-07-15 16:10:18.405617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.662 [2024-07-15 16:10:18.405649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.662 qpair failed and we were unable to recover it. 
00:28:49.662 [2024-07-15 16:10:18.405790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.662 [2024-07-15 16:10:18.405801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.662 qpair failed and we were unable to recover it. 00:28:49.662 [2024-07-15 16:10:18.405980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.662 [2024-07-15 16:10:18.405992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.662 qpair failed and we were unable to recover it. 00:28:49.662 [2024-07-15 16:10:18.406157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.662 [2024-07-15 16:10:18.406168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.662 qpair failed and we were unable to recover it. 00:28:49.662 [2024-07-15 16:10:18.406337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.662 [2024-07-15 16:10:18.406349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.662 qpair failed and we were unable to recover it. 00:28:49.662 [2024-07-15 16:10:18.406567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.662 [2024-07-15 16:10:18.406580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.662 qpair failed and we were unable to recover it. 00:28:49.662 [2024-07-15 16:10:18.406826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.662 [2024-07-15 16:10:18.406839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.662 qpair failed and we were unable to recover it. 00:28:49.662 [2024-07-15 16:10:18.407041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.662 [2024-07-15 16:10:18.407072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.662 qpair failed and we were unable to recover it. 00:28:49.662 [2024-07-15 16:10:18.407292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.662 [2024-07-15 16:10:18.407324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.662 qpair failed and we were unable to recover it. 00:28:49.662 [2024-07-15 16:10:18.407612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.662 [2024-07-15 16:10:18.407642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.662 qpair failed and we were unable to recover it. 00:28:49.662 [2024-07-15 16:10:18.407912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.662 [2024-07-15 16:10:18.407943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.662 qpair failed and we were unable to recover it. 
00:28:49.662 [2024-07-15 16:10:18.408170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.662 [2024-07-15 16:10:18.408201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.662 qpair failed and we were unable to recover it. 00:28:49.662 [2024-07-15 16:10:18.408424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.662 [2024-07-15 16:10:18.408457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.662 qpair failed and we were unable to recover it. 00:28:49.662 [2024-07-15 16:10:18.408666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.662 [2024-07-15 16:10:18.408696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.662 qpair failed and we were unable to recover it. 00:28:49.662 [2024-07-15 16:10:18.408950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.662 [2024-07-15 16:10:18.408962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.662 qpair failed and we were unable to recover it. 00:28:49.662 [2024-07-15 16:10:18.409136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.662 [2024-07-15 16:10:18.409148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.662 qpair failed and we were unable to recover it. 00:28:49.662 [2024-07-15 16:10:18.409397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.662 [2024-07-15 16:10:18.409429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.662 qpair failed and we were unable to recover it. 00:28:49.662 [2024-07-15 16:10:18.409582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.662 [2024-07-15 16:10:18.409594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.662 qpair failed and we were unable to recover it. 00:28:49.662 [2024-07-15 16:10:18.409822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.662 [2024-07-15 16:10:18.409852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.662 qpair failed and we were unable to recover it. 00:28:49.662 [2024-07-15 16:10:18.410137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.662 [2024-07-15 16:10:18.410167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.662 qpair failed and we were unable to recover it. 00:28:49.662 [2024-07-15 16:10:18.410470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.662 [2024-07-15 16:10:18.410507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.662 qpair failed and we were unable to recover it. 
00:28:49.662 [2024-07-15 16:10:18.410748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.662 [2024-07-15 16:10:18.410779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.662 qpair failed and we were unable to recover it. 00:28:49.662 [2024-07-15 16:10:18.411054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.662 [2024-07-15 16:10:18.411086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.662 qpair failed and we were unable to recover it. 00:28:49.662 [2024-07-15 16:10:18.411242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.662 [2024-07-15 16:10:18.411274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.662 qpair failed and we were unable to recover it. 00:28:49.662 [2024-07-15 16:10:18.411488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.662 [2024-07-15 16:10:18.411519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.662 qpair failed and we were unable to recover it. 00:28:49.662 [2024-07-15 16:10:18.411742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.662 [2024-07-15 16:10:18.411773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.662 qpair failed and we were unable to recover it. 00:28:49.662 [2024-07-15 16:10:18.412038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.662 [2024-07-15 16:10:18.412075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.662 qpair failed and we were unable to recover it. 00:28:49.662 [2024-07-15 16:10:18.412292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.662 [2024-07-15 16:10:18.412324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.662 qpair failed and we were unable to recover it. 00:28:49.662 [2024-07-15 16:10:18.412530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.662 [2024-07-15 16:10:18.412562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.662 qpair failed and we were unable to recover it. 00:28:49.662 [2024-07-15 16:10:18.412754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.662 [2024-07-15 16:10:18.412766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.662 qpair failed and we were unable to recover it. 00:28:49.662 [2024-07-15 16:10:18.412966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.662 [2024-07-15 16:10:18.412996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.662 qpair failed and we were unable to recover it. 
00:28:49.662 [2024-07-15 16:10:18.413281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.662 [2024-07-15 16:10:18.413313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.662 qpair failed and we were unable to recover it. 00:28:49.662 [2024-07-15 16:10:18.413600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.662 [2024-07-15 16:10:18.413631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.662 qpair failed and we were unable to recover it. 00:28:49.662 [2024-07-15 16:10:18.413848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.662 [2024-07-15 16:10:18.413879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.662 qpair failed and we were unable to recover it. 00:28:49.662 [2024-07-15 16:10:18.414167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.662 [2024-07-15 16:10:18.414198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.662 qpair failed and we were unable to recover it. 00:28:49.662 [2024-07-15 16:10:18.414412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.662 [2024-07-15 16:10:18.414444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.662 qpair failed and we were unable to recover it. 00:28:49.662 [2024-07-15 16:10:18.414708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.662 [2024-07-15 16:10:18.414739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.662 qpair failed and we were unable to recover it. 00:28:49.662 [2024-07-15 16:10:18.414933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.662 [2024-07-15 16:10:18.414945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.662 qpair failed and we were unable to recover it. 00:28:49.662 [2024-07-15 16:10:18.415110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.662 [2024-07-15 16:10:18.415122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.662 qpair failed and we were unable to recover it. 00:28:49.662 [2024-07-15 16:10:18.415398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.662 [2024-07-15 16:10:18.415438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.662 qpair failed and we were unable to recover it. 00:28:49.663 [2024-07-15 16:10:18.415665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.663 [2024-07-15 16:10:18.415676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.663 qpair failed and we were unable to recover it. 
00:28:49.663 [2024-07-15 16:10:18.415920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.663 [2024-07-15 16:10:18.415950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.663 qpair failed and we were unable to recover it. 00:28:49.663 [2024-07-15 16:10:18.416241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.663 [2024-07-15 16:10:18.416274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.663 qpair failed and we were unable to recover it. 00:28:49.663 [2024-07-15 16:10:18.416542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.663 [2024-07-15 16:10:18.416573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.663 qpair failed and we were unable to recover it. 00:28:49.663 [2024-07-15 16:10:18.416842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.663 [2024-07-15 16:10:18.416872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.663 qpair failed and we were unable to recover it. 00:28:49.663 [2024-07-15 16:10:18.417139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.663 [2024-07-15 16:10:18.417169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.663 qpair failed and we were unable to recover it. 00:28:49.663 [2024-07-15 16:10:18.417423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.663 [2024-07-15 16:10:18.417455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.663 qpair failed and we were unable to recover it. 00:28:49.663 [2024-07-15 16:10:18.417728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.663 [2024-07-15 16:10:18.417766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.663 qpair failed and we were unable to recover it. 00:28:49.663 [2024-07-15 16:10:18.417876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.663 [2024-07-15 16:10:18.417888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.663 qpair failed and we were unable to recover it. 00:28:49.663 [2024-07-15 16:10:18.418104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.663 [2024-07-15 16:10:18.418135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.663 qpair failed and we were unable to recover it. 00:28:49.663 [2024-07-15 16:10:18.418335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.663 [2024-07-15 16:10:18.418368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.663 qpair failed and we were unable to recover it. 
00:28:49.663 [2024-07-15 16:10:18.418581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.663 [2024-07-15 16:10:18.418612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.663 qpair failed and we were unable to recover it. 00:28:49.663 [2024-07-15 16:10:18.418809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.663 [2024-07-15 16:10:18.418820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.663 qpair failed and we were unable to recover it. 00:28:49.663 [2024-07-15 16:10:18.419127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.663 [2024-07-15 16:10:18.419158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.663 qpair failed and we were unable to recover it. 00:28:49.663 [2024-07-15 16:10:18.419425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.663 [2024-07-15 16:10:18.419457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.663 qpair failed and we were unable to recover it. 00:28:49.663 [2024-07-15 16:10:18.419719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.663 [2024-07-15 16:10:18.419730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.663 qpair failed and we were unable to recover it. 00:28:49.663 [2024-07-15 16:10:18.419840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.663 [2024-07-15 16:10:18.419852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.663 qpair failed and we were unable to recover it. 00:28:49.663 [2024-07-15 16:10:18.420098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.663 [2024-07-15 16:10:18.420129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.663 qpair failed and we were unable to recover it. 00:28:49.663 [2024-07-15 16:10:18.420297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.663 [2024-07-15 16:10:18.420329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.663 qpair failed and we were unable to recover it. 00:28:49.663 [2024-07-15 16:10:18.420627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.663 [2024-07-15 16:10:18.420658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.663 qpair failed and we were unable to recover it. 00:28:49.663 [2024-07-15 16:10:18.420901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.663 [2024-07-15 16:10:18.420930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.663 qpair failed and we were unable to recover it. 
00:28:49.663 [2024-07-15 16:10:18.421206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.663 [2024-07-15 16:10:18.421246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.663 qpair failed and we were unable to recover it. 00:28:49.663 [2024-07-15 16:10:18.421457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.663 [2024-07-15 16:10:18.421488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.663 qpair failed and we were unable to recover it. 00:28:49.663 [2024-07-15 16:10:18.421746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.663 [2024-07-15 16:10:18.421758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.663 qpair failed and we were unable to recover it. 00:28:49.663 [2024-07-15 16:10:18.421980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.663 [2024-07-15 16:10:18.421993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.663 qpair failed and we were unable to recover it. 00:28:49.663 [2024-07-15 16:10:18.422163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.663 [2024-07-15 16:10:18.422176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.663 qpair failed and we were unable to recover it. 00:28:49.663 [2024-07-15 16:10:18.422367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.663 [2024-07-15 16:10:18.422383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.663 qpair failed and we were unable to recover it. 00:28:49.663 [2024-07-15 16:10:18.422539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.663 [2024-07-15 16:10:18.422551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.663 qpair failed and we were unable to recover it. 00:28:49.663 [2024-07-15 16:10:18.422756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.663 [2024-07-15 16:10:18.422787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.663 qpair failed and we were unable to recover it. 00:28:49.663 [2024-07-15 16:10:18.423005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.663 [2024-07-15 16:10:18.423036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.663 qpair failed and we were unable to recover it. 00:28:49.663 [2024-07-15 16:10:18.423262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.663 [2024-07-15 16:10:18.423294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.663 qpair failed and we were unable to recover it. 
00:28:49.665 [2024-07-15 16:10:18.438731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.665 [2024-07-15 16:10:18.438762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.665 qpair failed and we were unable to recover it. 00:28:49.665 [2024-07-15 16:10:18.439073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.665 [2024-07-15 16:10:18.439109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.665 qpair failed and we were unable to recover it. 00:28:49.665 [2024-07-15 16:10:18.439428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.665 [2024-07-15 16:10:18.439499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:49.665 qpair failed and we were unable to recover it. 00:28:49.665 [2024-07-15 16:10:18.439808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.665 [2024-07-15 16:10:18.439825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:49.665 qpair failed and we were unable to recover it. 00:28:49.665 [2024-07-15 16:10:18.440072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.665 [2024-07-15 16:10:18.440088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:49.665 qpair failed and we were unable to recover it. 00:28:49.665 [2024-07-15 16:10:18.440256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.665 [2024-07-15 16:10:18.440272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:49.665 qpair failed and we were unable to recover it. 00:28:49.665 [2024-07-15 16:10:18.440397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.665 [2024-07-15 16:10:18.440413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:49.665 qpair failed and we were unable to recover it. 00:28:49.665 [2024-07-15 16:10:18.440593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.665 [2024-07-15 16:10:18.440609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:49.665 qpair failed and we were unable to recover it. 00:28:49.665 [2024-07-15 16:10:18.440814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.665 [2024-07-15 16:10:18.440829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:49.665 qpair failed and we were unable to recover it. 00:28:49.665 [2024-07-15 16:10:18.441024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.665 [2024-07-15 16:10:18.441039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:49.665 qpair failed and we were unable to recover it. 
00:28:49.665 [2024-07-15 16:10:18.443835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.665 [2024-07-15 16:10:18.443850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:49.665 qpair failed and we were unable to recover it. 00:28:49.665 [2024-07-15 16:10:18.444033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.665 [2024-07-15 16:10:18.444048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:49.665 qpair failed and we were unable to recover it. 00:28:49.665 [2024-07-15 16:10:18.444165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.665 [2024-07-15 16:10:18.444196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:49.665 qpair failed and we were unable to recover it. 00:28:49.665 [2024-07-15 16:10:18.444538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.665 [2024-07-15 16:10:18.444574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.665 qpair failed and we were unable to recover it. 00:28:49.665 [2024-07-15 16:10:18.444872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.665 [2024-07-15 16:10:18.444902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.665 qpair failed and we were unable to recover it. 00:28:49.665 [2024-07-15 16:10:18.445190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.666 [2024-07-15 16:10:18.445220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.666 qpair failed and we were unable to recover it. 00:28:49.666 [2024-07-15 16:10:18.445447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.666 [2024-07-15 16:10:18.445478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.666 qpair failed and we were unable to recover it. 00:28:49.666 [2024-07-15 16:10:18.445707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.666 [2024-07-15 16:10:18.445737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.666 qpair failed and we were unable to recover it. 00:28:49.666 [2024-07-15 16:10:18.446023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.666 [2024-07-15 16:10:18.446036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.666 qpair failed and we were unable to recover it. 00:28:49.666 [2024-07-15 16:10:18.446197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.666 [2024-07-15 16:10:18.446210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.666 qpair failed and we were unable to recover it. 
00:28:49.668 [2024-07-15 16:10:18.471859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.668 [2024-07-15 16:10:18.471890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.668 qpair failed and we were unable to recover it. 00:28:49.668 [2024-07-15 16:10:18.472193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.668 [2024-07-15 16:10:18.472233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.668 qpair failed and we were unable to recover it. 00:28:49.668 [2024-07-15 16:10:18.472515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.668 [2024-07-15 16:10:18.472546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.668 qpair failed and we were unable to recover it. 00:28:49.668 [2024-07-15 16:10:18.472846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.668 [2024-07-15 16:10:18.472877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.668 qpair failed and we were unable to recover it. 00:28:49.668 [2024-07-15 16:10:18.473145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.668 [2024-07-15 16:10:18.473176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.668 qpair failed and we were unable to recover it. 00:28:49.668 [2024-07-15 16:10:18.473411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.668 [2024-07-15 16:10:18.473442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.668 qpair failed and we were unable to recover it. 00:28:49.668 [2024-07-15 16:10:18.473624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.668 [2024-07-15 16:10:18.473657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.668 qpair failed and we were unable to recover it. 00:28:49.668 [2024-07-15 16:10:18.473953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.668 [2024-07-15 16:10:18.473983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.668 qpair failed and we were unable to recover it. 00:28:49.668 [2024-07-15 16:10:18.474181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.668 [2024-07-15 16:10:18.474212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.668 qpair failed and we were unable to recover it. 00:28:49.668 [2024-07-15 16:10:18.474428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.668 [2024-07-15 16:10:18.474460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.668 qpair failed and we were unable to recover it. 
00:28:49.668 [2024-07-15 16:10:18.474724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.668 [2024-07-15 16:10:18.474763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.668 qpair failed and we were unable to recover it. 00:28:49.669 [2024-07-15 16:10:18.474938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.669 [2024-07-15 16:10:18.474951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.669 qpair failed and we were unable to recover it. 00:28:49.669 [2024-07-15 16:10:18.475115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.669 [2024-07-15 16:10:18.475146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.669 qpair failed and we were unable to recover it. 00:28:49.669 [2024-07-15 16:10:18.475337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.669 [2024-07-15 16:10:18.475370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.669 qpair failed and we were unable to recover it. 00:28:49.669 [2024-07-15 16:10:18.475638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.669 [2024-07-15 16:10:18.475650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.669 qpair failed and we were unable to recover it. 00:28:49.669 [2024-07-15 16:10:18.475826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.669 [2024-07-15 16:10:18.475857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.669 qpair failed and we were unable to recover it. 00:28:49.669 [2024-07-15 16:10:18.476172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.669 [2024-07-15 16:10:18.476204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.669 qpair failed and we were unable to recover it. 00:28:49.669 [2024-07-15 16:10:18.476561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.669 [2024-07-15 16:10:18.476592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.669 qpair failed and we were unable to recover it. 00:28:49.669 [2024-07-15 16:10:18.476880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.669 [2024-07-15 16:10:18.476911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.669 qpair failed and we were unable to recover it. 00:28:49.669 [2024-07-15 16:10:18.477178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.669 [2024-07-15 16:10:18.477215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.669 qpair failed and we were unable to recover it. 
00:28:49.669 [2024-07-15 16:10:18.477518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.669 [2024-07-15 16:10:18.477549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.669 qpair failed and we were unable to recover it. 00:28:49.669 [2024-07-15 16:10:18.477699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.669 [2024-07-15 16:10:18.477731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.669 qpair failed and we were unable to recover it. 00:28:49.669 [2024-07-15 16:10:18.477953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.669 [2024-07-15 16:10:18.477985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.669 qpair failed and we were unable to recover it. 00:28:49.669 [2024-07-15 16:10:18.478201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.669 [2024-07-15 16:10:18.478213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.669 qpair failed and we were unable to recover it. 00:28:49.669 [2024-07-15 16:10:18.478385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.669 [2024-07-15 16:10:18.478417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.669 qpair failed and we were unable to recover it. 00:28:49.669 [2024-07-15 16:10:18.478723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.669 [2024-07-15 16:10:18.478754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.669 qpair failed and we were unable to recover it. 00:28:49.669 [2024-07-15 16:10:18.479046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.669 [2024-07-15 16:10:18.479078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.669 qpair failed and we were unable to recover it. 00:28:49.669 [2024-07-15 16:10:18.479299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.669 [2024-07-15 16:10:18.479332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.669 qpair failed and we were unable to recover it. 00:28:49.669 [2024-07-15 16:10:18.479617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.669 [2024-07-15 16:10:18.479648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.669 qpair failed and we were unable to recover it. 00:28:49.669 [2024-07-15 16:10:18.479869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.669 [2024-07-15 16:10:18.479901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.669 qpair failed and we were unable to recover it. 
00:28:49.669 [2024-07-15 16:10:18.480060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.669 [2024-07-15 16:10:18.480091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.669 qpair failed and we were unable to recover it. 00:28:49.669 [2024-07-15 16:10:18.480379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.669 [2024-07-15 16:10:18.480411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.669 qpair failed and we were unable to recover it. 00:28:49.669 [2024-07-15 16:10:18.480574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.669 [2024-07-15 16:10:18.480605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.669 qpair failed and we were unable to recover it. 00:28:49.669 [2024-07-15 16:10:18.480742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.669 [2024-07-15 16:10:18.480755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.669 qpair failed and we were unable to recover it. 00:28:49.669 [2024-07-15 16:10:18.481030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.669 [2024-07-15 16:10:18.481042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.669 qpair failed and we were unable to recover it. 00:28:49.669 [2024-07-15 16:10:18.481237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.669 [2024-07-15 16:10:18.481249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.669 qpair failed and we were unable to recover it. 00:28:49.669 [2024-07-15 16:10:18.481360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.669 [2024-07-15 16:10:18.481371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.669 qpair failed and we were unable to recover it. 00:28:49.669 [2024-07-15 16:10:18.481465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.669 [2024-07-15 16:10:18.481476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.669 qpair failed and we were unable to recover it. 00:28:49.669 [2024-07-15 16:10:18.481708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.669 [2024-07-15 16:10:18.481723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.669 qpair failed and we were unable to recover it. 00:28:49.669 [2024-07-15 16:10:18.481859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.669 [2024-07-15 16:10:18.481871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.669 qpair failed and we were unable to recover it. 
00:28:49.669 [2024-07-15 16:10:18.482057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.669 [2024-07-15 16:10:18.482088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.669 qpair failed and we were unable to recover it. 00:28:49.669 [2024-07-15 16:10:18.482308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.669 [2024-07-15 16:10:18.482341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.669 qpair failed and we were unable to recover it. 00:28:49.669 [2024-07-15 16:10:18.482520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.669 [2024-07-15 16:10:18.482552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.669 qpair failed and we were unable to recover it. 00:28:49.669 [2024-07-15 16:10:18.482815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.669 [2024-07-15 16:10:18.482827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.669 qpair failed and we were unable to recover it. 00:28:49.669 [2024-07-15 16:10:18.483076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.669 [2024-07-15 16:10:18.483108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.669 qpair failed and we were unable to recover it. 00:28:49.669 [2024-07-15 16:10:18.483258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.669 [2024-07-15 16:10:18.483291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.669 qpair failed and we were unable to recover it. 00:28:49.669 [2024-07-15 16:10:18.483514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.669 [2024-07-15 16:10:18.483546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.669 qpair failed and we were unable to recover it. 00:28:49.669 [2024-07-15 16:10:18.483769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.669 [2024-07-15 16:10:18.483801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.669 qpair failed and we were unable to recover it. 00:28:49.669 [2024-07-15 16:10:18.484009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.669 [2024-07-15 16:10:18.484021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.669 qpair failed and we were unable to recover it. 00:28:49.669 [2024-07-15 16:10:18.484252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.669 [2024-07-15 16:10:18.484284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.669 qpair failed and we were unable to recover it. 
00:28:49.669 [2024-07-15 16:10:18.484502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.669 [2024-07-15 16:10:18.484533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.669 qpair failed and we were unable to recover it. 00:28:49.669 [2024-07-15 16:10:18.484767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.670 [2024-07-15 16:10:18.484779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.670 qpair failed and we were unable to recover it. 00:28:49.670 [2024-07-15 16:10:18.484993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.670 [2024-07-15 16:10:18.485024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.670 qpair failed and we were unable to recover it. 00:28:49.670 [2024-07-15 16:10:18.485300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.670 [2024-07-15 16:10:18.485332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.670 qpair failed and we were unable to recover it. 00:28:49.670 [2024-07-15 16:10:18.485623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.670 [2024-07-15 16:10:18.485654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.670 qpair failed and we were unable to recover it. 00:28:49.670 [2024-07-15 16:10:18.485920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.670 [2024-07-15 16:10:18.485951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.670 qpair failed and we were unable to recover it. 00:28:49.670 [2024-07-15 16:10:18.486306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.670 [2024-07-15 16:10:18.486320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.670 qpair failed and we were unable to recover it. 00:28:49.670 [2024-07-15 16:10:18.486565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.670 [2024-07-15 16:10:18.486577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.670 qpair failed and we were unable to recover it. 00:28:49.670 [2024-07-15 16:10:18.486730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.670 [2024-07-15 16:10:18.486740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.670 qpair failed and we were unable to recover it. 00:28:49.670 [2024-07-15 16:10:18.486923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.670 [2024-07-15 16:10:18.486957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.670 qpair failed and we were unable to recover it. 
00:28:49.670 [2024-07-15 16:10:18.487157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.670 [2024-07-15 16:10:18.487185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.670 qpair failed and we were unable to recover it. 00:28:49.670 [2024-07-15 16:10:18.487356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.670 [2024-07-15 16:10:18.487385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.670 qpair failed and we were unable to recover it. 00:28:49.670 [2024-07-15 16:10:18.487606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.670 [2024-07-15 16:10:18.487635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.670 qpair failed and we were unable to recover it. 00:28:49.670 [2024-07-15 16:10:18.487911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.670 [2024-07-15 16:10:18.487939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.670 qpair failed and we were unable to recover it. 00:28:49.670 [2024-07-15 16:10:18.488205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.670 [2024-07-15 16:10:18.488244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.670 qpair failed and we were unable to recover it. 00:28:49.670 [2024-07-15 16:10:18.488398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.670 [2024-07-15 16:10:18.488426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.670 qpair failed and we were unable to recover it. 00:28:49.670 [2024-07-15 16:10:18.488660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.670 [2024-07-15 16:10:18.488687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.670 qpair failed and we were unable to recover it. 00:28:49.670 [2024-07-15 16:10:18.488838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.670 [2024-07-15 16:10:18.488847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.670 qpair failed and we were unable to recover it. 00:28:49.670 [2024-07-15 16:10:18.489080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.670 [2024-07-15 16:10:18.489108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.670 qpair failed and we were unable to recover it. 00:28:49.670 [2024-07-15 16:10:18.489250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.670 [2024-07-15 16:10:18.489280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.670 qpair failed and we were unable to recover it. 
00:28:49.670 [2024-07-15 16:10:18.489588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.670 [2024-07-15 16:10:18.489617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.670 qpair failed and we were unable to recover it. 00:28:49.670 [2024-07-15 16:10:18.489818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.670 [2024-07-15 16:10:18.489846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.670 qpair failed and we were unable to recover it. 00:28:49.670 [2024-07-15 16:10:18.490118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.670 [2024-07-15 16:10:18.490128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.670 qpair failed and we were unable to recover it. 00:28:49.670 [2024-07-15 16:10:18.490379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.670 [2024-07-15 16:10:18.490392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.670 qpair failed and we were unable to recover it. 00:28:49.670 [2024-07-15 16:10:18.490571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.670 [2024-07-15 16:10:18.490581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.670 qpair failed and we were unable to recover it. 00:28:49.670 [2024-07-15 16:10:18.490692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.670 [2024-07-15 16:10:18.490720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.670 qpair failed and we were unable to recover it. 00:28:49.670 [2024-07-15 16:10:18.491032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.670 [2024-07-15 16:10:18.491061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.670 qpair failed and we were unable to recover it. 00:28:49.670 [2024-07-15 16:10:18.491349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.670 [2024-07-15 16:10:18.491380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.670 qpair failed and we were unable to recover it. 00:28:49.670 [2024-07-15 16:10:18.491683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.670 [2024-07-15 16:10:18.491714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.670 qpair failed and we were unable to recover it. 00:28:49.670 [2024-07-15 16:10:18.491983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.670 [2024-07-15 16:10:18.492014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.670 qpair failed and we were unable to recover it. 
00:28:49.670 [2024-07-15 16:10:18.492286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.670 [2024-07-15 16:10:18.492318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.670 qpair failed and we were unable to recover it. 00:28:49.670 [2024-07-15 16:10:18.492525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.670 [2024-07-15 16:10:18.492556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.670 qpair failed and we were unable to recover it. 00:28:49.670 [2024-07-15 16:10:18.492711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.670 [2024-07-15 16:10:18.492724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.670 qpair failed and we were unable to recover it. 00:28:49.670 [2024-07-15 16:10:18.492900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.670 [2024-07-15 16:10:18.492913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.670 qpair failed and we were unable to recover it. 00:28:49.670 [2024-07-15 16:10:18.493168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.670 [2024-07-15 16:10:18.493199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.670 qpair failed and we were unable to recover it. 00:28:49.670 [2024-07-15 16:10:18.493529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.670 [2024-07-15 16:10:18.493561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.670 qpair failed and we were unable to recover it. 00:28:49.670 [2024-07-15 16:10:18.493839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.670 [2024-07-15 16:10:18.493867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.670 qpair failed and we were unable to recover it. 00:28:49.670 [2024-07-15 16:10:18.494030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.670 [2024-07-15 16:10:18.494075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.670 qpair failed and we were unable to recover it. 00:28:49.670 [2024-07-15 16:10:18.494369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.670 [2024-07-15 16:10:18.494402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.670 qpair failed and we were unable to recover it. 00:28:49.670 [2024-07-15 16:10:18.494567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.670 [2024-07-15 16:10:18.494598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.670 qpair failed and we were unable to recover it. 
00:28:49.670 [2024-07-15 16:10:18.494804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.670 [2024-07-15 16:10:18.494835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.670 qpair failed and we were unable to recover it. 00:28:49.670 [2024-07-15 16:10:18.495068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.670 [2024-07-15 16:10:18.495080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.670 qpair failed and we were unable to recover it. 00:28:49.670 [2024-07-15 16:10:18.495203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.671 [2024-07-15 16:10:18.495246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.671 qpair failed and we were unable to recover it. 00:28:49.671 [2024-07-15 16:10:18.495454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.671 [2024-07-15 16:10:18.495485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.671 qpair failed and we were unable to recover it. 00:28:49.671 [2024-07-15 16:10:18.495643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.671 [2024-07-15 16:10:18.495674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.671 qpair failed and we were unable to recover it. 00:28:49.671 [2024-07-15 16:10:18.495901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.671 [2024-07-15 16:10:18.495932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.671 qpair failed and we were unable to recover it. 00:28:49.671 [2024-07-15 16:10:18.496133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.671 [2024-07-15 16:10:18.496164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.671 qpair failed and we were unable to recover it. 00:28:49.671 [2024-07-15 16:10:18.496400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.671 [2024-07-15 16:10:18.496432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.671 qpair failed and we were unable to recover it. 00:28:49.671 [2024-07-15 16:10:18.496648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.671 [2024-07-15 16:10:18.496679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.671 qpair failed and we were unable to recover it. 00:28:49.671 [2024-07-15 16:10:18.496981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.671 [2024-07-15 16:10:18.497017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.671 qpair failed and we were unable to recover it. 
00:28:49.671 [2024-07-15 16:10:18.497307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.671 [2024-07-15 16:10:18.497339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.671 qpair failed and we were unable to recover it. 00:28:49.671 [2024-07-15 16:10:18.497511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.671 [2024-07-15 16:10:18.497542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.671 qpair failed and we were unable to recover it. 00:28:49.671 [2024-07-15 16:10:18.497832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.671 [2024-07-15 16:10:18.497863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.671 qpair failed and we were unable to recover it. 00:28:49.671 [2024-07-15 16:10:18.498126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.671 [2024-07-15 16:10:18.498137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.671 qpair failed and we were unable to recover it. 00:28:49.671 [2024-07-15 16:10:18.498313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.671 [2024-07-15 16:10:18.498325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.671 qpair failed and we were unable to recover it. 00:28:49.671 [2024-07-15 16:10:18.498524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.671 [2024-07-15 16:10:18.498555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.671 qpair failed and we were unable to recover it. 00:28:49.671 [2024-07-15 16:10:18.498774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.671 [2024-07-15 16:10:18.498805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.671 qpair failed and we were unable to recover it. 00:28:49.671 [2024-07-15 16:10:18.499019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.671 [2024-07-15 16:10:18.499050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.671 qpair failed and we were unable to recover it. 00:28:49.671 [2024-07-15 16:10:18.499362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.671 [2024-07-15 16:10:18.499405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.671 qpair failed and we were unable to recover it. 00:28:49.671 [2024-07-15 16:10:18.499634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.671 [2024-07-15 16:10:18.499665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.671 qpair failed and we were unable to recover it. 
00:28:49.671 [2024-07-15 16:10:18.499959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.671 [2024-07-15 16:10:18.499991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.671 qpair failed and we were unable to recover it. 00:28:49.671 [2024-07-15 16:10:18.500236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.671 [2024-07-15 16:10:18.500269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.671 qpair failed and we were unable to recover it. 00:28:49.671 [2024-07-15 16:10:18.500498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.671 [2024-07-15 16:10:18.500530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.671 qpair failed and we were unable to recover it. 00:28:49.671 [2024-07-15 16:10:18.500759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.671 [2024-07-15 16:10:18.500798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.671 qpair failed and we were unable to recover it. 00:28:49.671 [2024-07-15 16:10:18.500908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.671 [2024-07-15 16:10:18.500921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.671 qpair failed and we were unable to recover it. 00:28:49.671 [2024-07-15 16:10:18.501077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.671 [2024-07-15 16:10:18.501089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.671 qpair failed and we were unable to recover it. 00:28:49.671 [2024-07-15 16:10:18.501234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.671 [2024-07-15 16:10:18.501247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.671 qpair failed and we were unable to recover it. 00:28:49.671 [2024-07-15 16:10:18.501508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.671 [2024-07-15 16:10:18.501540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.671 qpair failed and we were unable to recover it. 00:28:49.671 [2024-07-15 16:10:18.501800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.671 [2024-07-15 16:10:18.501831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.671 qpair failed and we were unable to recover it. 00:28:49.671 [2024-07-15 16:10:18.501998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.671 [2024-07-15 16:10:18.502029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.671 qpair failed and we were unable to recover it. 
00:28:49.671 [2024-07-15 16:10:18.502346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.671 [2024-07-15 16:10:18.502379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.671 qpair failed and we were unable to recover it. 00:28:49.671 [2024-07-15 16:10:18.502596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.671 [2024-07-15 16:10:18.502627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.671 qpair failed and we were unable to recover it. 00:28:49.671 [2024-07-15 16:10:18.502994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.671 [2024-07-15 16:10:18.503026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.671 qpair failed and we were unable to recover it. 00:28:49.671 [2024-07-15 16:10:18.503255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.671 [2024-07-15 16:10:18.503287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.671 qpair failed and we were unable to recover it. 00:28:49.671 [2024-07-15 16:10:18.503550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.671 [2024-07-15 16:10:18.503581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.671 qpair failed and we were unable to recover it. 00:28:49.671 [2024-07-15 16:10:18.503787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.671 [2024-07-15 16:10:18.503799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.671 qpair failed and we were unable to recover it. 00:28:49.671 [2024-07-15 16:10:18.504051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.671 [2024-07-15 16:10:18.504083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.671 qpair failed and we were unable to recover it. 00:28:49.671 [2024-07-15 16:10:18.504305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.671 [2024-07-15 16:10:18.504350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.671 qpair failed and we were unable to recover it. 00:28:49.671 [2024-07-15 16:10:18.504556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.671 [2024-07-15 16:10:18.504594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.671 qpair failed and we were unable to recover it. 00:28:49.671 [2024-07-15 16:10:18.505347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.671 [2024-07-15 16:10:18.505372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.671 qpair failed and we were unable to recover it. 
00:28:49.671 [2024-07-15 16:10:18.505663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.671 [2024-07-15 16:10:18.505697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.671 qpair failed and we were unable to recover it. 00:28:49.671 [2024-07-15 16:10:18.506015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.671 [2024-07-15 16:10:18.506047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.672 qpair failed and we were unable to recover it. 00:28:49.672 [2024-07-15 16:10:18.506237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.672 [2024-07-15 16:10:18.506250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.672 qpair failed and we were unable to recover it. 00:28:49.672 [2024-07-15 16:10:18.506370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.672 [2024-07-15 16:10:18.506381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.672 qpair failed and we were unable to recover it. 00:28:49.672 [2024-07-15 16:10:18.506558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.672 [2024-07-15 16:10:18.506572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.672 qpair failed and we were unable to recover it. 00:28:49.672 [2024-07-15 16:10:18.506764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.672 [2024-07-15 16:10:18.506777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.672 qpair failed and we were unable to recover it. 00:28:49.672 [2024-07-15 16:10:18.506906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.672 [2024-07-15 16:10:18.506918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.672 qpair failed and we were unable to recover it. 00:28:49.672 [2024-07-15 16:10:18.507110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.672 [2024-07-15 16:10:18.507123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.672 qpair failed and we were unable to recover it. 00:28:49.672 [2024-07-15 16:10:18.507305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.672 [2024-07-15 16:10:18.507318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.672 qpair failed and we were unable to recover it. 00:28:49.672 [2024-07-15 16:10:18.507494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.672 [2024-07-15 16:10:18.507510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.672 qpair failed and we were unable to recover it. 
00:28:49.672 [2024-07-15 16:10:18.507685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:49.672 [2024-07-15 16:10:18.507697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420
00:28:49.672 qpair failed and we were unable to recover it.
00:28:49.672 [... the same three-line failure (posix.c:1038:posix_sock_create: connect() failed, errno = 111; nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: sock connection error with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it.") repeats over 200 times between 16:10:18.507685 and 16:10:18.557783, i.e. within roughly 50 ms; the target address and port are identical throughout, while the failing tqpair handle varies across retries (0x7ffa54000b90, 0x7ffa4c000b90, 0x1398ed0) ...]
00:28:49.677 [2024-07-15 16:10:18.557983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.677 [2024-07-15 16:10:18.558014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:49.677 qpair failed and we were unable to recover it. 00:28:49.677 [2024-07-15 16:10:18.558173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.677 [2024-07-15 16:10:18.558204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:49.677 qpair failed and we were unable to recover it. 00:28:49.677 [2024-07-15 16:10:18.558423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.677 [2024-07-15 16:10:18.558455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:49.677 qpair failed and we were unable to recover it. 00:28:49.677 [2024-07-15 16:10:18.558660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.677 [2024-07-15 16:10:18.558697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:49.677 qpair failed and we were unable to recover it. 00:28:49.677 [2024-07-15 16:10:18.558834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.677 [2024-07-15 16:10:18.558865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:49.677 qpair failed and we were unable to recover it. 00:28:49.677 [2024-07-15 16:10:18.559129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.677 [2024-07-15 16:10:18.559144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:49.677 qpair failed and we were unable to recover it. 00:28:49.677 [2024-07-15 16:10:18.559331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.677 [2024-07-15 16:10:18.559347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:49.677 qpair failed and we were unable to recover it. 00:28:49.677 [2024-07-15 16:10:18.559523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.677 [2024-07-15 16:10:18.559539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:49.677 qpair failed and we were unable to recover it. 00:28:49.677 [2024-07-15 16:10:18.559653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.677 [2024-07-15 16:10:18.559684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:49.677 qpair failed and we were unable to recover it. 00:28:49.677 [2024-07-15 16:10:18.559808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.677 [2024-07-15 16:10:18.559839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:49.677 qpair failed and we were unable to recover it. 
00:28:49.677 [2024-07-15 16:10:18.560088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.677 [2024-07-15 16:10:18.560119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:49.677 qpair failed and we were unable to recover it. 00:28:49.677 [2024-07-15 16:10:18.560316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.677 [2024-07-15 16:10:18.560334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:49.677 qpair failed and we were unable to recover it. 00:28:49.677 [2024-07-15 16:10:18.560468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.677 [2024-07-15 16:10:18.560484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:49.677 qpair failed and we were unable to recover it. 00:28:49.677 [2024-07-15 16:10:18.560665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.677 [2024-07-15 16:10:18.560680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:49.677 qpair failed and we were unable to recover it. 00:28:49.678 [2024-07-15 16:10:18.560875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.678 [2024-07-15 16:10:18.560891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:49.678 qpair failed and we were unable to recover it. 00:28:49.678 [2024-07-15 16:10:18.561015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.678 [2024-07-15 16:10:18.561030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:49.678 qpair failed and we were unable to recover it. 00:28:49.678 [2024-07-15 16:10:18.561205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.678 [2024-07-15 16:10:18.561221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:49.678 qpair failed and we were unable to recover it. 00:28:49.678 [2024-07-15 16:10:18.561503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.678 [2024-07-15 16:10:18.561519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:49.678 qpair failed and we were unable to recover it. 00:28:49.678 [2024-07-15 16:10:18.561647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.678 [2024-07-15 16:10:18.561663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:49.678 qpair failed and we were unable to recover it. 00:28:49.678 [2024-07-15 16:10:18.561831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.678 [2024-07-15 16:10:18.561848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:49.678 qpair failed and we were unable to recover it. 
00:28:49.678 [2024-07-15 16:10:18.561962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.678 [2024-07-15 16:10:18.561978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:49.678 qpair failed and we were unable to recover it. 00:28:49.678 [2024-07-15 16:10:18.562149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.678 [2024-07-15 16:10:18.562180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:49.678 qpair failed and we were unable to recover it. 00:28:49.678 [2024-07-15 16:10:18.562343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.678 [2024-07-15 16:10:18.562376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:49.678 qpair failed and we were unable to recover it. 00:28:49.678 [2024-07-15 16:10:18.562514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.678 [2024-07-15 16:10:18.562545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:49.678 qpair failed and we were unable to recover it. 00:28:49.678 [2024-07-15 16:10:18.562763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.678 [2024-07-15 16:10:18.562778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:49.678 qpair failed and we were unable to recover it. 00:28:49.678 [2024-07-15 16:10:18.562949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.678 [2024-07-15 16:10:18.562966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:49.678 qpair failed and we were unable to recover it. 00:28:49.678 [2024-07-15 16:10:18.563078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.678 [2024-07-15 16:10:18.563094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:49.678 qpair failed and we were unable to recover it. 00:28:49.678 [2024-07-15 16:10:18.563230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.678 [2024-07-15 16:10:18.563247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:49.678 qpair failed and we were unable to recover it. 00:28:49.953 [2024-07-15 16:10:18.563389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.953 [2024-07-15 16:10:18.563403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:49.953 qpair failed and we were unable to recover it. 00:28:49.953 [2024-07-15 16:10:18.563599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.953 [2024-07-15 16:10:18.563615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:49.953 qpair failed and we were unable to recover it. 
00:28:49.953 [2024-07-15 16:10:18.563768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.953 [2024-07-15 16:10:18.563787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:49.953 qpair failed and we were unable to recover it. 00:28:49.953 [2024-07-15 16:10:18.563898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.953 [2024-07-15 16:10:18.563912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:49.953 qpair failed and we were unable to recover it. 00:28:49.953 [2024-07-15 16:10:18.564174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.953 [2024-07-15 16:10:18.564190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:49.953 qpair failed and we were unable to recover it. 00:28:49.953 [2024-07-15 16:10:18.564291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.953 [2024-07-15 16:10:18.564306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:49.953 qpair failed and we were unable to recover it. 00:28:49.953 [2024-07-15 16:10:18.564414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.953 [2024-07-15 16:10:18.564429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:49.953 qpair failed and we were unable to recover it. 00:28:49.953 [2024-07-15 16:10:18.564551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.953 [2024-07-15 16:10:18.564567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:49.953 qpair failed and we were unable to recover it. 00:28:49.953 [2024-07-15 16:10:18.564745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.953 [2024-07-15 16:10:18.564760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:49.953 qpair failed and we were unable to recover it. 00:28:49.953 [2024-07-15 16:10:18.564876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.953 [2024-07-15 16:10:18.564891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:49.953 qpair failed and we were unable to recover it. 00:28:49.953 [2024-07-15 16:10:18.565056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.953 [2024-07-15 16:10:18.565071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:49.953 qpair failed and we were unable to recover it. 00:28:49.953 [2024-07-15 16:10:18.565237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.954 [2024-07-15 16:10:18.565253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:49.954 qpair failed and we were unable to recover it. 
00:28:49.954 [2024-07-15 16:10:18.565365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.954 [2024-07-15 16:10:18.565381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:49.954 qpair failed and we were unable to recover it. 00:28:49.954 [2024-07-15 16:10:18.565507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.954 [2024-07-15 16:10:18.565523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:49.954 qpair failed and we were unable to recover it. 00:28:49.954 [2024-07-15 16:10:18.565650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.954 [2024-07-15 16:10:18.565666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:49.954 qpair failed and we were unable to recover it. 00:28:49.954 [2024-07-15 16:10:18.565844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.954 [2024-07-15 16:10:18.565860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:49.954 qpair failed and we were unable to recover it. 00:28:49.954 [2024-07-15 16:10:18.566036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.954 [2024-07-15 16:10:18.566052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:49.954 qpair failed and we were unable to recover it. 00:28:49.954 [2024-07-15 16:10:18.566238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.954 [2024-07-15 16:10:18.566254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:49.954 qpair failed and we were unable to recover it. 00:28:49.954 [2024-07-15 16:10:18.566438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.954 [2024-07-15 16:10:18.566453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:49.954 qpair failed and we were unable to recover it. 00:28:49.954 [2024-07-15 16:10:18.566553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.954 [2024-07-15 16:10:18.566569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:49.954 qpair failed and we were unable to recover it. 00:28:49.954 [2024-07-15 16:10:18.566677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.954 [2024-07-15 16:10:18.566692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:49.954 qpair failed and we were unable to recover it. 00:28:49.954 [2024-07-15 16:10:18.566890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.954 [2024-07-15 16:10:18.566908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:49.954 qpair failed and we were unable to recover it. 
00:28:49.954 [2024-07-15 16:10:18.567073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.954 [2024-07-15 16:10:18.567088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:49.954 qpair failed and we were unable to recover it. 00:28:49.954 [2024-07-15 16:10:18.567340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.954 [2024-07-15 16:10:18.567356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:49.954 qpair failed and we were unable to recover it. 00:28:49.954 [2024-07-15 16:10:18.567490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.954 [2024-07-15 16:10:18.567520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:49.954 qpair failed and we were unable to recover it. 00:28:49.954 [2024-07-15 16:10:18.567731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.954 [2024-07-15 16:10:18.567762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:49.954 qpair failed and we were unable to recover it. 00:28:49.954 [2024-07-15 16:10:18.568061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.954 [2024-07-15 16:10:18.568093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:49.954 qpair failed and we were unable to recover it. 00:28:49.954 [2024-07-15 16:10:18.568364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.954 [2024-07-15 16:10:18.568397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:49.954 qpair failed and we were unable to recover it. 00:28:49.954 [2024-07-15 16:10:18.568601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.954 [2024-07-15 16:10:18.568632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:49.954 qpair failed and we were unable to recover it. 00:28:49.954 [2024-07-15 16:10:18.568835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.954 [2024-07-15 16:10:18.568866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:49.954 qpair failed and we were unable to recover it. 00:28:49.954 [2024-07-15 16:10:18.569074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.954 [2024-07-15 16:10:18.569105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:49.954 qpair failed and we were unable to recover it. 00:28:49.954 [2024-07-15 16:10:18.569323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.954 [2024-07-15 16:10:18.569355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:49.954 qpair failed and we were unable to recover it. 
00:28:49.954 [2024-07-15 16:10:18.569509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.954 [2024-07-15 16:10:18.569540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:49.954 qpair failed and we were unable to recover it. 00:28:49.954 [2024-07-15 16:10:18.569694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.954 [2024-07-15 16:10:18.569724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:49.954 qpair failed and we were unable to recover it. 00:28:49.954 [2024-07-15 16:10:18.569937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.954 [2024-07-15 16:10:18.569969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:49.954 qpair failed and we were unable to recover it. 00:28:49.954 [2024-07-15 16:10:18.570180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.954 [2024-07-15 16:10:18.570212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:49.954 qpair failed and we were unable to recover it. 00:28:49.954 [2024-07-15 16:10:18.570347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.954 [2024-07-15 16:10:18.570380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:49.954 qpair failed and we were unable to recover it. 00:28:49.954 [2024-07-15 16:10:18.570600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.954 [2024-07-15 16:10:18.570631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:49.954 qpair failed and we were unable to recover it. 00:28:49.954 [2024-07-15 16:10:18.570949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.954 [2024-07-15 16:10:18.570980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:49.954 qpair failed and we were unable to recover it. 00:28:49.954 [2024-07-15 16:10:18.571239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.954 [2024-07-15 16:10:18.571272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:49.954 qpair failed and we were unable to recover it. 00:28:49.954 [2024-07-15 16:10:18.571559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.954 [2024-07-15 16:10:18.571590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:49.954 qpair failed and we were unable to recover it. 00:28:49.954 [2024-07-15 16:10:18.571889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.954 [2024-07-15 16:10:18.571933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:49.954 qpair failed and we were unable to recover it. 
00:28:49.954 [2024-07-15 16:10:18.572191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.954 [2024-07-15 16:10:18.572246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:49.954 qpair failed and we were unable to recover it. 00:28:49.954 [2024-07-15 16:10:18.572544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.954 [2024-07-15 16:10:18.572577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:49.954 qpair failed and we were unable to recover it. 00:28:49.954 [2024-07-15 16:10:18.572755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.954 [2024-07-15 16:10:18.572786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:49.954 qpair failed and we were unable to recover it. 00:28:49.954 [2024-07-15 16:10:18.572994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.954 [2024-07-15 16:10:18.573009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:49.954 qpair failed and we were unable to recover it. 00:28:49.954 [2024-07-15 16:10:18.573265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.954 [2024-07-15 16:10:18.573299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:49.954 qpair failed and we were unable to recover it. 00:28:49.954 [2024-07-15 16:10:18.573596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.954 [2024-07-15 16:10:18.573627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:49.954 qpair failed and we were unable to recover it. 00:28:49.954 [2024-07-15 16:10:18.573935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.954 [2024-07-15 16:10:18.573966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:49.954 qpair failed and we were unable to recover it. 00:28:49.954 [2024-07-15 16:10:18.574271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.955 [2024-07-15 16:10:18.574305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:49.955 qpair failed and we were unable to recover it. 00:28:49.955 [2024-07-15 16:10:18.574503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.955 [2024-07-15 16:10:18.574535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:49.955 qpair failed and we were unable to recover it. 00:28:49.955 [2024-07-15 16:10:18.574696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.955 [2024-07-15 16:10:18.574727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:49.955 qpair failed and we were unable to recover it. 
00:28:49.955 [2024-07-15 16:10:18.575022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.955 [2024-07-15 16:10:18.575038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:49.955 qpair failed and we were unable to recover it. 00:28:49.955 [2024-07-15 16:10:18.575313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.955 [2024-07-15 16:10:18.575345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:49.955 qpair failed and we were unable to recover it. 00:28:49.955 [2024-07-15 16:10:18.575653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.955 [2024-07-15 16:10:18.575684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:49.955 qpair failed and we were unable to recover it. 00:28:49.955 [2024-07-15 16:10:18.575945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.955 [2024-07-15 16:10:18.575976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:49.955 qpair failed and we were unable to recover it. 00:28:49.955 [2024-07-15 16:10:18.576246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.955 [2024-07-15 16:10:18.576279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:49.955 qpair failed and we were unable to recover it. 00:28:49.955 [2024-07-15 16:10:18.576574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.955 [2024-07-15 16:10:18.576605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:49.955 qpair failed and we were unable to recover it. 00:28:49.955 [2024-07-15 16:10:18.576901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.955 [2024-07-15 16:10:18.576932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:49.955 qpair failed and we were unable to recover it. 00:28:49.955 [2024-07-15 16:10:18.577192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.955 [2024-07-15 16:10:18.577223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:49.955 qpair failed and we were unable to recover it. 00:28:49.955 [2024-07-15 16:10:18.577485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.955 [2024-07-15 16:10:18.577502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:49.955 qpair failed and we were unable to recover it. 00:28:49.955 [2024-07-15 16:10:18.577700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.955 [2024-07-15 16:10:18.577716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:49.955 qpair failed and we were unable to recover it. 
00:28:49.955 [2024-07-15 16:10:18.577893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.955 [2024-07-15 16:10:18.577927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:49.955 qpair failed and we were unable to recover it. 00:28:49.955 [2024-07-15 16:10:18.578127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.955 [2024-07-15 16:10:18.578158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:49.955 qpair failed and we were unable to recover it. 00:28:49.955 [2024-07-15 16:10:18.578448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.955 [2024-07-15 16:10:18.578481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:49.955 qpair failed and we were unable to recover it. 00:28:49.955 [2024-07-15 16:10:18.578718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.955 [2024-07-15 16:10:18.578750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:49.955 qpair failed and we were unable to recover it. 00:28:49.955 [2024-07-15 16:10:18.578977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.955 [2024-07-15 16:10:18.578993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:49.955 qpair failed and we were unable to recover it. 00:28:49.955 [2024-07-15 16:10:18.579191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.955 [2024-07-15 16:10:18.579206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:49.955 qpair failed and we were unable to recover it. 00:28:49.955 [2024-07-15 16:10:18.579455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.955 [2024-07-15 16:10:18.579472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:49.955 qpair failed and we were unable to recover it. 00:28:49.955 [2024-07-15 16:10:18.579650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.955 [2024-07-15 16:10:18.579667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:49.955 qpair failed and we were unable to recover it. 00:28:49.955 [2024-07-15 16:10:18.579838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.955 [2024-07-15 16:10:18.579875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:49.955 qpair failed and we were unable to recover it. 00:28:49.955 [2024-07-15 16:10:18.580059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.955 [2024-07-15 16:10:18.580089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:49.955 qpair failed and we were unable to recover it. 
00:28:49.955 [2024-07-15 16:10:18.580305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.955 [2024-07-15 16:10:18.580321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:49.955 qpair failed and we were unable to recover it. 00:28:49.955 [2024-07-15 16:10:18.580556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.955 [2024-07-15 16:10:18.580572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:49.955 qpair failed and we were unable to recover it. 00:28:49.955 [2024-07-15 16:10:18.580696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.955 [2024-07-15 16:10:18.580711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:49.955 qpair failed and we were unable to recover it. 00:28:49.955 [2024-07-15 16:10:18.580898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.955 [2024-07-15 16:10:18.580915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:49.955 qpair failed and we were unable to recover it. 00:28:49.955 [2024-07-15 16:10:18.581176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.955 [2024-07-15 16:10:18.581207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:49.955 qpair failed and we were unable to recover it. 00:28:49.955 [2024-07-15 16:10:18.581542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.955 [2024-07-15 16:10:18.581575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:49.955 qpair failed and we were unable to recover it. 00:28:49.955 [2024-07-15 16:10:18.581865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.955 [2024-07-15 16:10:18.581896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:49.955 qpair failed and we were unable to recover it. 00:28:49.955 [2024-07-15 16:10:18.582132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.955 [2024-07-15 16:10:18.582163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:49.955 qpair failed and we were unable to recover it. 00:28:49.955 [2024-07-15 16:10:18.582361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.955 [2024-07-15 16:10:18.582378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:49.955 qpair failed and we were unable to recover it. 00:28:49.955 [2024-07-15 16:10:18.582556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.955 [2024-07-15 16:10:18.582586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:49.955 qpair failed and we were unable to recover it. 
00:28:49.955 [2024-07-15 16:10:18.582826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.955 [2024-07-15 16:10:18.582859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:49.955 qpair failed and we were unable to recover it. 00:28:49.955 [2024-07-15 16:10:18.583091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.955 [2024-07-15 16:10:18.583122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:49.955 qpair failed and we were unable to recover it. 00:28:49.955 [2024-07-15 16:10:18.583391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.955 [2024-07-15 16:10:18.583423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:49.955 qpair failed and we were unable to recover it. 00:28:49.956 [2024-07-15 16:10:18.583694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.956 [2024-07-15 16:10:18.583725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:49.956 qpair failed and we were unable to recover it. 00:28:49.956 [2024-07-15 16:10:18.583968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.956 [2024-07-15 16:10:18.584007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:49.956 qpair failed and we were unable to recover it. 00:28:49.956 [2024-07-15 16:10:18.584193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.956 [2024-07-15 16:10:18.584209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:49.956 qpair failed and we were unable to recover it. 00:28:49.956 [2024-07-15 16:10:18.584458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.956 [2024-07-15 16:10:18.584498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:49.956 qpair failed and we were unable to recover it. 00:28:49.956 [2024-07-15 16:10:18.584639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.956 [2024-07-15 16:10:18.584669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.956 qpair failed and we were unable to recover it. 00:28:49.956 [2024-07-15 16:10:18.584987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.956 [2024-07-15 16:10:18.585000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.956 qpair failed and we were unable to recover it. 00:28:49.956 [2024-07-15 16:10:18.585255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.956 [2024-07-15 16:10:18.585268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.956 qpair failed and we were unable to recover it. 
00:28:49.956 [2024-07-15 16:10:18.585514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.956 [2024-07-15 16:10:18.585547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.956 qpair failed and we were unable to recover it. 00:28:49.956 [2024-07-15 16:10:18.585770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.956 [2024-07-15 16:10:18.585802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.956 qpair failed and we were unable to recover it. 00:28:49.956 [2024-07-15 16:10:18.586075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.956 [2024-07-15 16:10:18.586106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.956 qpair failed and we were unable to recover it. 00:28:49.956 [2024-07-15 16:10:18.586393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.956 [2024-07-15 16:10:18.586427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.956 qpair failed and we were unable to recover it. 00:28:49.956 [2024-07-15 16:10:18.586612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.956 [2024-07-15 16:10:18.586644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.956 qpair failed and we were unable to recover it. 00:28:49.956 [2024-07-15 16:10:18.586888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.956 [2024-07-15 16:10:18.586928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.956 qpair failed and we were unable to recover it. 00:28:49.956 [2024-07-15 16:10:18.587163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.956 [2024-07-15 16:10:18.587195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.956 qpair failed and we were unable to recover it. 00:28:49.956 [2024-07-15 16:10:18.587550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.956 [2024-07-15 16:10:18.587623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420 00:28:49.956 qpair failed and we were unable to recover it. 00:28:49.956 [2024-07-15 16:10:18.587884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.956 [2024-07-15 16:10:18.587902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420 00:28:49.956 qpair failed and we were unable to recover it. 00:28:49.956 [2024-07-15 16:10:18.588198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.956 [2024-07-15 16:10:18.588245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420 00:28:49.956 qpair failed and we were unable to recover it. 
00:28:49.956 [2024-07-15 16:10:18.588547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:49.956 [2024-07-15 16:10:18.588579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420
00:28:49.956 qpair failed and we were unable to recover it.
00:28:49.956 [... the same posix_sock_create / nvme_tcp_qpair_connect_sock error pair for tqpair=0x7ffa5c000b90 (addr=10.0.0.2, port=4420, errno = 111) repeats for every reconnect attempt between 16:10:18.588547 and 16:10:18.645591, each attempt ending with "qpair failed and we were unable to recover it." ...]
00:28:49.962 [2024-07-15 16:10:18.645559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:49.962 [2024-07-15 16:10:18.645591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420
00:28:49.962 qpair failed and we were unable to recover it.
00:28:49.962 [2024-07-15 16:10:18.645885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.962 [2024-07-15 16:10:18.645917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420 00:28:49.962 qpair failed and we were unable to recover it. 00:28:49.962 [2024-07-15 16:10:18.646245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.962 [2024-07-15 16:10:18.646278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420 00:28:49.962 qpair failed and we were unable to recover it. 00:28:49.962 [2024-07-15 16:10:18.646499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.962 [2024-07-15 16:10:18.646516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420 00:28:49.962 qpair failed and we were unable to recover it. 00:28:49.962 [2024-07-15 16:10:18.646758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.962 [2024-07-15 16:10:18.646774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420 00:28:49.962 qpair failed and we were unable to recover it. 00:28:49.962 [2024-07-15 16:10:18.646958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.962 [2024-07-15 16:10:18.646975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420 00:28:49.962 qpair failed and we were unable to recover it. 00:28:49.962 [2024-07-15 16:10:18.647193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.962 [2024-07-15 16:10:18.647236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420 00:28:49.962 qpair failed and we were unable to recover it. 00:28:49.962 [2024-07-15 16:10:18.647513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.962 [2024-07-15 16:10:18.647545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420 00:28:49.962 qpair failed and we were unable to recover it. 00:28:49.962 [2024-07-15 16:10:18.647761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.962 [2024-07-15 16:10:18.647792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420 00:28:49.962 qpair failed and we were unable to recover it. 00:28:49.962 [2024-07-15 16:10:18.648076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.962 [2024-07-15 16:10:18.648108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420 00:28:49.962 qpair failed and we were unable to recover it. 00:28:49.962 [2024-07-15 16:10:18.648402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.962 [2024-07-15 16:10:18.648418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420 00:28:49.962 qpair failed and we were unable to recover it. 
00:28:49.962 [2024-07-15 16:10:18.648636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.962 [2024-07-15 16:10:18.648668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420 00:28:49.962 qpair failed and we were unable to recover it. 00:28:49.962 [2024-07-15 16:10:18.648941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.962 [2024-07-15 16:10:18.648972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420 00:28:49.962 qpair failed and we were unable to recover it. 00:28:49.962 [2024-07-15 16:10:18.649294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.962 [2024-07-15 16:10:18.649327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420 00:28:49.962 qpair failed and we were unable to recover it. 00:28:49.962 [2024-07-15 16:10:18.649604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.962 [2024-07-15 16:10:18.649636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420 00:28:49.962 qpair failed and we were unable to recover it. 00:28:49.962 [2024-07-15 16:10:18.649887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.962 [2024-07-15 16:10:18.649919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420 00:28:49.962 qpair failed and we were unable to recover it. 00:28:49.962 [2024-07-15 16:10:18.650241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.962 [2024-07-15 16:10:18.650273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420 00:28:49.962 qpair failed and we were unable to recover it. 00:28:49.962 [2024-07-15 16:10:18.650497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.962 [2024-07-15 16:10:18.650529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420 00:28:49.962 qpair failed and we were unable to recover it. 00:28:49.962 [2024-07-15 16:10:18.650825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.962 [2024-07-15 16:10:18.650857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420 00:28:49.962 qpair failed and we were unable to recover it. 00:28:49.962 [2024-07-15 16:10:18.651087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.962 [2024-07-15 16:10:18.651118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420 00:28:49.962 qpair failed and we were unable to recover it. 00:28:49.962 [2024-07-15 16:10:18.651396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.962 [2024-07-15 16:10:18.651413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420 00:28:49.962 qpair failed and we were unable to recover it. 
00:28:49.962 [2024-07-15 16:10:18.651599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.962 [2024-07-15 16:10:18.651631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420 00:28:49.962 qpair failed and we were unable to recover it. 00:28:49.962 [2024-07-15 16:10:18.651937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.962 [2024-07-15 16:10:18.651969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420 00:28:49.962 qpair failed and we were unable to recover it. 00:28:49.962 [2024-07-15 16:10:18.652183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.962 [2024-07-15 16:10:18.652199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420 00:28:49.962 qpair failed and we were unable to recover it. 00:28:49.962 [2024-07-15 16:10:18.652467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.963 [2024-07-15 16:10:18.652484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420 00:28:49.963 qpair failed and we were unable to recover it. 00:28:49.963 [2024-07-15 16:10:18.652661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.963 [2024-07-15 16:10:18.652678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420 00:28:49.963 qpair failed and we were unable to recover it. 00:28:49.963 [2024-07-15 16:10:18.652803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.963 [2024-07-15 16:10:18.652841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420 00:28:49.963 qpair failed and we were unable to recover it. 00:28:49.963 [2024-07-15 16:10:18.653071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.963 [2024-07-15 16:10:18.653103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420 00:28:49.963 qpair failed and we were unable to recover it. 00:28:49.963 [2024-07-15 16:10:18.653311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.963 [2024-07-15 16:10:18.653344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420 00:28:49.963 qpair failed and we were unable to recover it. 00:28:49.963 [2024-07-15 16:10:18.653621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.963 [2024-07-15 16:10:18.653652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420 00:28:49.963 qpair failed and we were unable to recover it. 00:28:49.963 [2024-07-15 16:10:18.653787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.963 [2024-07-15 16:10:18.653819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420 00:28:49.963 qpair failed and we were unable to recover it. 
00:28:49.963 [2024-07-15 16:10:18.654002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.963 [2024-07-15 16:10:18.654032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420 00:28:49.963 qpair failed and we were unable to recover it. 00:28:49.963 [2024-07-15 16:10:18.654277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.963 [2024-07-15 16:10:18.654293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420 00:28:49.963 qpair failed and we were unable to recover it. 00:28:49.963 [2024-07-15 16:10:18.654536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.963 [2024-07-15 16:10:18.654568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420 00:28:49.963 qpair failed and we were unable to recover it. 00:28:49.963 [2024-07-15 16:10:18.654851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.963 [2024-07-15 16:10:18.654883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420 00:28:49.963 qpair failed and we were unable to recover it. 00:28:49.963 [2024-07-15 16:10:18.655132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.963 [2024-07-15 16:10:18.655164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420 00:28:49.963 qpair failed and we were unable to recover it. 00:28:49.963 [2024-07-15 16:10:18.655384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.963 [2024-07-15 16:10:18.655401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420 00:28:49.963 qpair failed and we were unable to recover it. 00:28:49.963 [2024-07-15 16:10:18.655553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.963 [2024-07-15 16:10:18.655584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420 00:28:49.963 qpair failed and we were unable to recover it. 00:28:49.963 [2024-07-15 16:10:18.655859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.963 [2024-07-15 16:10:18.655891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420 00:28:49.963 qpair failed and we were unable to recover it. 00:28:49.963 [2024-07-15 16:10:18.656062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.963 [2024-07-15 16:10:18.656093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420 00:28:49.963 qpair failed and we were unable to recover it. 00:28:49.963 [2024-07-15 16:10:18.656309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.963 [2024-07-15 16:10:18.656325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420 00:28:49.963 qpair failed and we were unable to recover it. 
00:28:49.963 [2024-07-15 16:10:18.656524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.963 [2024-07-15 16:10:18.656557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420 00:28:49.963 qpair failed and we were unable to recover it. 00:28:49.963 [2024-07-15 16:10:18.656768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.963 [2024-07-15 16:10:18.656800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420 00:28:49.963 qpair failed and we were unable to recover it. 00:28:49.963 [2024-07-15 16:10:18.657099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.963 [2024-07-15 16:10:18.657131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420 00:28:49.963 qpair failed and we were unable to recover it. 00:28:49.963 [2024-07-15 16:10:18.657404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.963 [2024-07-15 16:10:18.657422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420 00:28:49.963 qpair failed and we were unable to recover it. 00:28:49.963 [2024-07-15 16:10:18.657617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.963 [2024-07-15 16:10:18.657649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420 00:28:49.963 qpair failed and we were unable to recover it. 00:28:49.963 [2024-07-15 16:10:18.657859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.963 [2024-07-15 16:10:18.657890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420 00:28:49.963 qpair failed and we were unable to recover it. 00:28:49.963 [2024-07-15 16:10:18.658042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.963 [2024-07-15 16:10:18.658074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420 00:28:49.963 qpair failed and we were unable to recover it. 00:28:49.963 [2024-07-15 16:10:18.658322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.963 [2024-07-15 16:10:18.658339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420 00:28:49.963 qpair failed and we were unable to recover it. 00:28:49.963 [2024-07-15 16:10:18.658582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.963 [2024-07-15 16:10:18.658599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420 00:28:49.963 qpair failed and we were unable to recover it. 00:28:49.963 [2024-07-15 16:10:18.658774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.963 [2024-07-15 16:10:18.658789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420 00:28:49.963 qpair failed and we were unable to recover it. 
00:28:49.963 [2024-07-15 16:10:18.659007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.963 [2024-07-15 16:10:18.659039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420 00:28:49.963 qpair failed and we were unable to recover it. 00:28:49.963 [2024-07-15 16:10:18.659291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.963 [2024-07-15 16:10:18.659308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420 00:28:49.963 qpair failed and we were unable to recover it. 00:28:49.963 [2024-07-15 16:10:18.659492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.963 [2024-07-15 16:10:18.659508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420 00:28:49.963 qpair failed and we were unable to recover it. 00:28:49.963 [2024-07-15 16:10:18.659797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.963 [2024-07-15 16:10:18.659828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420 00:28:49.963 qpair failed and we were unable to recover it. 00:28:49.964 [2024-07-15 16:10:18.660080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.964 [2024-07-15 16:10:18.660113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420 00:28:49.964 qpair failed and we were unable to recover it. 00:28:49.964 [2024-07-15 16:10:18.660355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.964 [2024-07-15 16:10:18.660371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420 00:28:49.964 qpair failed and we were unable to recover it. 00:28:49.964 [2024-07-15 16:10:18.660583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.964 [2024-07-15 16:10:18.660599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420 00:28:49.964 qpair failed and we were unable to recover it. 00:28:49.964 [2024-07-15 16:10:18.660872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.964 [2024-07-15 16:10:18.660904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420 00:28:49.964 qpair failed and we were unable to recover it. 00:28:49.964 [2024-07-15 16:10:18.661052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.964 [2024-07-15 16:10:18.661067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420 00:28:49.964 qpair failed and we were unable to recover it. 00:28:49.964 [2024-07-15 16:10:18.661279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.964 [2024-07-15 16:10:18.661296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420 00:28:49.964 qpair failed and we were unable to recover it. 
00:28:49.964 [2024-07-15 16:10:18.661540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.964 [2024-07-15 16:10:18.661572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420 00:28:49.964 qpair failed and we were unable to recover it. 00:28:49.964 [2024-07-15 16:10:18.661862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.964 [2024-07-15 16:10:18.661895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420 00:28:49.964 qpair failed and we were unable to recover it. 00:28:49.964 [2024-07-15 16:10:18.662104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.964 [2024-07-15 16:10:18.662136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420 00:28:49.964 qpair failed and we were unable to recover it. 00:28:49.964 [2024-07-15 16:10:18.662359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.964 [2024-07-15 16:10:18.662391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420 00:28:49.964 qpair failed and we were unable to recover it. 00:28:49.964 [2024-07-15 16:10:18.662674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.964 [2024-07-15 16:10:18.662691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420 00:28:49.964 qpair failed and we were unable to recover it. 00:28:49.964 [2024-07-15 16:10:18.662952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.964 [2024-07-15 16:10:18.663001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420 00:28:49.964 qpair failed and we were unable to recover it. 00:28:49.964 [2024-07-15 16:10:18.663315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.964 [2024-07-15 16:10:18.663348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420 00:28:49.964 qpair failed and we were unable to recover it. 00:28:49.964 [2024-07-15 16:10:18.663649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.964 [2024-07-15 16:10:18.663680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420 00:28:49.964 qpair failed and we were unable to recover it. 00:28:49.964 [2024-07-15 16:10:18.663854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.964 [2024-07-15 16:10:18.663887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420 00:28:49.964 qpair failed and we were unable to recover it. 00:28:49.964 [2024-07-15 16:10:18.664112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.964 [2024-07-15 16:10:18.664143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420 00:28:49.964 qpair failed and we were unable to recover it. 
00:28:49.964 [2024-07-15 16:10:18.664346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.964 [2024-07-15 16:10:18.664378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420 00:28:49.964 qpair failed and we were unable to recover it. 00:28:49.964 [2024-07-15 16:10:18.664549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.964 [2024-07-15 16:10:18.664565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420 00:28:49.964 qpair failed and we were unable to recover it. 00:28:49.964 [2024-07-15 16:10:18.664810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.964 [2024-07-15 16:10:18.664843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420 00:28:49.964 qpair failed and we were unable to recover it. 00:28:49.964 [2024-07-15 16:10:18.665123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.964 [2024-07-15 16:10:18.665155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420 00:28:49.964 qpair failed and we were unable to recover it. 00:28:49.964 [2024-07-15 16:10:18.665321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.964 [2024-07-15 16:10:18.665338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420 00:28:49.964 qpair failed and we were unable to recover it. 00:28:49.964 [2024-07-15 16:10:18.665618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.964 [2024-07-15 16:10:18.665651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420 00:28:49.964 qpair failed and we were unable to recover it. 00:28:49.964 [2024-07-15 16:10:18.665879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.964 [2024-07-15 16:10:18.665911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420 00:28:49.964 qpair failed and we were unable to recover it. 00:28:49.964 [2024-07-15 16:10:18.666123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.964 [2024-07-15 16:10:18.666155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420 00:28:49.964 qpair failed and we were unable to recover it. 00:28:49.964 [2024-07-15 16:10:18.666398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.964 [2024-07-15 16:10:18.666415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420 00:28:49.964 qpair failed and we were unable to recover it. 00:28:49.964 [2024-07-15 16:10:18.666539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.964 [2024-07-15 16:10:18.666555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420 00:28:49.964 qpair failed and we were unable to recover it. 
00:28:49.964 [2024-07-15 16:10:18.666779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.964 [2024-07-15 16:10:18.666811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420 00:28:49.964 qpair failed and we were unable to recover it. 00:28:49.964 [2024-07-15 16:10:18.667087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.964 [2024-07-15 16:10:18.667119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420 00:28:49.964 qpair failed and we were unable to recover it. 00:28:49.964 [2024-07-15 16:10:18.667439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.964 [2024-07-15 16:10:18.667456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420 00:28:49.964 qpair failed and we were unable to recover it. 00:28:49.964 [2024-07-15 16:10:18.667657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.964 [2024-07-15 16:10:18.667688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420 00:28:49.964 qpair failed and we were unable to recover it. 00:28:49.964 [2024-07-15 16:10:18.667911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.964 [2024-07-15 16:10:18.667943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420 00:28:49.964 qpair failed and we were unable to recover it. 00:28:49.964 [2024-07-15 16:10:18.668246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.964 [2024-07-15 16:10:18.668279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420 00:28:49.964 qpair failed and we were unable to recover it. 00:28:49.964 [2024-07-15 16:10:18.668451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.964 [2024-07-15 16:10:18.668483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420 00:28:49.964 qpair failed and we were unable to recover it. 00:28:49.964 [2024-07-15 16:10:18.668767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.964 [2024-07-15 16:10:18.668799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420 00:28:49.964 qpair failed and we were unable to recover it. 00:28:49.964 [2024-07-15 16:10:18.669077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.964 [2024-07-15 16:10:18.669108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420 00:28:49.964 qpair failed and we were unable to recover it. 00:28:49.964 [2024-07-15 16:10:18.669249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.964 [2024-07-15 16:10:18.669282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420 00:28:49.964 qpair failed and we were unable to recover it. 
00:28:49.964 [2024-07-15 16:10:18.669611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.964 [2024-07-15 16:10:18.669627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420 00:28:49.964 qpair failed and we were unable to recover it. 00:28:49.964 [2024-07-15 16:10:18.669817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.964 [2024-07-15 16:10:18.669834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420 00:28:49.964 qpair failed and we were unable to recover it. 00:28:49.964 [2024-07-15 16:10:18.670112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.964 [2024-07-15 16:10:18.670145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420 00:28:49.964 qpair failed and we were unable to recover it. 00:28:49.965 [2024-07-15 16:10:18.670401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.965 [2024-07-15 16:10:18.670418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420 00:28:49.965 qpair failed and we were unable to recover it. 00:28:49.965 [2024-07-15 16:10:18.670648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.965 [2024-07-15 16:10:18.670680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420 00:28:49.965 qpair failed and we were unable to recover it. 00:28:49.965 [2024-07-15 16:10:18.670911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.965 [2024-07-15 16:10:18.670943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420 00:28:49.965 qpair failed and we were unable to recover it. 00:28:49.965 [2024-07-15 16:10:18.671299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.965 [2024-07-15 16:10:18.671332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420 00:28:49.965 qpair failed and we were unable to recover it. 00:28:49.965 [2024-07-15 16:10:18.671557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.965 [2024-07-15 16:10:18.671590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420 00:28:49.965 qpair failed and we were unable to recover it. 00:28:49.965 [2024-07-15 16:10:18.671839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.965 [2024-07-15 16:10:18.671872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420 00:28:49.965 qpair failed and we were unable to recover it. 00:28:49.965 [2024-07-15 16:10:18.672091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.965 [2024-07-15 16:10:18.672123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420 00:28:49.965 qpair failed and we were unable to recover it. 
00:28:49.965 [2024-07-15 16:10:18.672343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.965 [2024-07-15 16:10:18.672377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420 00:28:49.965 qpair failed and we were unable to recover it. 00:28:49.965 [2024-07-15 16:10:18.672680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.965 [2024-07-15 16:10:18.672697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420 00:28:49.965 qpair failed and we were unable to recover it. 00:28:49.965 [2024-07-15 16:10:18.672880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.965 [2024-07-15 16:10:18.672897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420 00:28:49.965 qpair failed and we were unable to recover it. 00:28:49.965 [2024-07-15 16:10:18.673212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.965 [2024-07-15 16:10:18.673256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420 00:28:49.965 qpair failed and we were unable to recover it. 00:28:49.965 [2024-07-15 16:10:18.673484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.965 [2024-07-15 16:10:18.673515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420 00:28:49.965 qpair failed and we were unable to recover it. 00:28:49.965 [2024-07-15 16:10:18.673731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.965 [2024-07-15 16:10:18.673768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420 00:28:49.965 qpair failed and we were unable to recover it. 00:28:49.965 [2024-07-15 16:10:18.673977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.965 [2024-07-15 16:10:18.674009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420 00:28:49.965 qpair failed and we were unable to recover it. 00:28:49.965 [2024-07-15 16:10:18.674310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.965 [2024-07-15 16:10:18.674343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420 00:28:49.965 qpair failed and we were unable to recover it. 00:28:49.965 [2024-07-15 16:10:18.674549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.965 [2024-07-15 16:10:18.674565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420 00:28:49.965 qpair failed and we were unable to recover it. 00:28:49.965 [2024-07-15 16:10:18.674833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.965 [2024-07-15 16:10:18.674865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420 00:28:49.965 qpair failed and we were unable to recover it. 
00:28:49.965 [2024-07-15 16:10:18.675169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.965 [2024-07-15 16:10:18.675201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420 00:28:49.965 qpair failed and we were unable to recover it. 00:28:49.965 [2024-07-15 16:10:18.675425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.965 [2024-07-15 16:10:18.675464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420 00:28:49.965 qpair failed and we were unable to recover it. 00:28:49.965 [2024-07-15 16:10:18.675687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.965 [2024-07-15 16:10:18.675704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420 00:28:49.965 qpair failed and we were unable to recover it. 00:28:49.965 [2024-07-15 16:10:18.675882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.965 [2024-07-15 16:10:18.675913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420 00:28:49.965 qpair failed and we were unable to recover it. 00:28:49.965 [2024-07-15 16:10:18.676135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.965 [2024-07-15 16:10:18.676167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420 00:28:49.965 qpair failed and we were unable to recover it. 00:28:49.965 [2024-07-15 16:10:18.676460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.965 [2024-07-15 16:10:18.676492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420 00:28:49.965 qpair failed and we were unable to recover it. 00:28:49.965 [2024-07-15 16:10:18.676797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.965 [2024-07-15 16:10:18.676830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420 00:28:49.965 qpair failed and we were unable to recover it. 00:28:49.965 [2024-07-15 16:10:18.677122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.965 [2024-07-15 16:10:18.677154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420 00:28:49.965 qpair failed and we were unable to recover it. 00:28:49.965 [2024-07-15 16:10:18.677385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.965 [2024-07-15 16:10:18.677418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420 00:28:49.965 qpair failed and we were unable to recover it. 00:28:49.965 [2024-07-15 16:10:18.677655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.965 [2024-07-15 16:10:18.677688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420 00:28:49.965 qpair failed and we were unable to recover it. 
00:28:49.965 [2024-07-15 16:10:18.677976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.965 [2024-07-15 16:10:18.678008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420 00:28:49.965 qpair failed and we were unable to recover it. 00:28:49.965 [2024-07-15 16:10:18.678257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.965 [2024-07-15 16:10:18.678291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420 00:28:49.965 qpair failed and we were unable to recover it. 00:28:49.965 [2024-07-15 16:10:18.678547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.965 [2024-07-15 16:10:18.678579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420 00:28:49.965 qpair failed and we were unable to recover it. 00:28:49.965 [2024-07-15 16:10:18.678907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.965 [2024-07-15 16:10:18.678939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420 00:28:49.965 qpair failed and we were unable to recover it. 00:28:49.965 [2024-07-15 16:10:18.679207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.965 [2024-07-15 16:10:18.679265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420 00:28:49.965 qpair failed and we were unable to recover it. 00:28:49.965 [2024-07-15 16:10:18.679566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.965 [2024-07-15 16:10:18.679582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420 00:28:49.965 qpair failed and we were unable to recover it. 00:28:49.965 [2024-07-15 16:10:18.679780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.966 [2024-07-15 16:10:18.679797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420 00:28:49.966 qpair failed and we were unable to recover it. 00:28:49.966 [2024-07-15 16:10:18.679927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.966 [2024-07-15 16:10:18.679943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420 00:28:49.966 qpair failed and we were unable to recover it. 00:28:49.966 [2024-07-15 16:10:18.680089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.966 [2024-07-15 16:10:18.680120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420 00:28:49.966 qpair failed and we were unable to recover it. 00:28:49.966 [2024-07-15 16:10:18.680334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.966 [2024-07-15 16:10:18.680367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420 00:28:49.966 qpair failed and we were unable to recover it. 
00:28:49.966 [2024-07-15 16:10:18.680649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:49.966 [2024-07-15 16:10:18.680681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420
00:28:49.966 qpair failed and we were unable to recover it.
00:28:49.967 [2024-07-15 16:10:18.691779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:49.967 [2024-07-15 16:10:18.691859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420
00:28:49.967 qpair failed and we were unable to recover it.
00:28:49.969 [2024-07-15 16:10:18.718475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:49.969 [2024-07-15 16:10:18.718553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420
00:28:49.969 qpair failed and we were unable to recover it.
00:28:49.969 [2024-07-15 16:10:18.718873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:49.969 [2024-07-15 16:10:18.718911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420
00:28:49.969 qpair failed and we were unable to recover it.
00:28:49.971 [2024-07-15 16:10:18.739527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:49.971 [2024-07-15 16:10:18.739561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420
00:28:49.971 qpair failed and we were unable to recover it.
00:28:49.971 [2024-07-15 16:10:18.739823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.971 [2024-07-15 16:10:18.739837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.971 qpair failed and we were unable to recover it. 00:28:49.971 [2024-07-15 16:10:18.740099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.971 [2024-07-15 16:10:18.740112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.971 qpair failed and we were unable to recover it. 00:28:49.971 [2024-07-15 16:10:18.740418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.971 [2024-07-15 16:10:18.740451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.971 qpair failed and we were unable to recover it. 00:28:49.971 [2024-07-15 16:10:18.740763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.971 [2024-07-15 16:10:18.740795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.971 qpair failed and we were unable to recover it. 00:28:49.971 [2024-07-15 16:10:18.741013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.971 [2024-07-15 16:10:18.741045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.971 qpair failed and we were unable to recover it. 00:28:49.971 [2024-07-15 16:10:18.741357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.971 [2024-07-15 16:10:18.741390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.971 qpair failed and we were unable to recover it. 00:28:49.971 [2024-07-15 16:10:18.741697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.971 [2024-07-15 16:10:18.741730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.971 qpair failed and we were unable to recover it. 00:28:49.971 [2024-07-15 16:10:18.742022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.971 [2024-07-15 16:10:18.742055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.971 qpair failed and we were unable to recover it. 00:28:49.971 [2024-07-15 16:10:18.742402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.971 [2024-07-15 16:10:18.742414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.971 qpair failed and we were unable to recover it. 00:28:49.971 [2024-07-15 16:10:18.742566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.971 [2024-07-15 16:10:18.742580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.971 qpair failed and we were unable to recover it. 
00:28:49.971 [2024-07-15 16:10:18.742776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.971 [2024-07-15 16:10:18.742790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.971 qpair failed and we were unable to recover it. 00:28:49.971 [2024-07-15 16:10:18.743010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.971 [2024-07-15 16:10:18.743023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.971 qpair failed and we were unable to recover it. 00:28:49.971 [2024-07-15 16:10:18.743208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.971 [2024-07-15 16:10:18.743249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.971 qpair failed and we were unable to recover it. 00:28:49.971 [2024-07-15 16:10:18.743468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.971 [2024-07-15 16:10:18.743481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.971 qpair failed and we were unable to recover it. 00:28:49.971 [2024-07-15 16:10:18.743749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.971 [2024-07-15 16:10:18.743781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.971 qpair failed and we were unable to recover it. 00:28:49.971 [2024-07-15 16:10:18.744060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.971 [2024-07-15 16:10:18.744098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.971 qpair failed and we were unable to recover it. 00:28:49.971 [2024-07-15 16:10:18.744236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.971 [2024-07-15 16:10:18.744251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.971 qpair failed and we were unable to recover it. 00:28:49.972 [2024-07-15 16:10:18.744496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.972 [2024-07-15 16:10:18.744528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.972 qpair failed and we were unable to recover it. 00:28:49.972 [2024-07-15 16:10:18.744759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.972 [2024-07-15 16:10:18.744792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.972 qpair failed and we were unable to recover it. 00:28:49.972 [2024-07-15 16:10:18.745099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.972 [2024-07-15 16:10:18.745131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.972 qpair failed and we were unable to recover it. 
00:28:49.972 [2024-07-15 16:10:18.745432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.972 [2024-07-15 16:10:18.745470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.972 qpair failed and we were unable to recover it. 00:28:49.972 [2024-07-15 16:10:18.745769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.972 [2024-07-15 16:10:18.745801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.972 qpair failed and we were unable to recover it. 00:28:49.972 [2024-07-15 16:10:18.746027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.972 [2024-07-15 16:10:18.746060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.972 qpair failed and we were unable to recover it. 00:28:49.972 [2024-07-15 16:10:18.746403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.972 [2024-07-15 16:10:18.746437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.972 qpair failed and we were unable to recover it. 00:28:49.972 [2024-07-15 16:10:18.746744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.972 [2024-07-15 16:10:18.746777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.972 qpair failed and we were unable to recover it. 00:28:49.972 [2024-07-15 16:10:18.747071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.972 [2024-07-15 16:10:18.747102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.972 qpair failed and we were unable to recover it. 00:28:49.972 [2024-07-15 16:10:18.747335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.972 [2024-07-15 16:10:18.747369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.972 qpair failed and we were unable to recover it. 00:28:49.972 [2024-07-15 16:10:18.747667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.972 [2024-07-15 16:10:18.747698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.972 qpair failed and we were unable to recover it. 00:28:49.972 [2024-07-15 16:10:18.747944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.972 [2024-07-15 16:10:18.747977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.972 qpair failed and we were unable to recover it. 00:28:49.972 [2024-07-15 16:10:18.748299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.972 [2024-07-15 16:10:18.748333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.972 qpair failed and we were unable to recover it. 
00:28:49.972 [2024-07-15 16:10:18.748657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.972 [2024-07-15 16:10:18.748689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.972 qpair failed and we were unable to recover it. 00:28:49.972 [2024-07-15 16:10:18.748870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.972 [2024-07-15 16:10:18.748902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.972 qpair failed and we were unable to recover it. 00:28:49.972 [2024-07-15 16:10:18.749125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.972 [2024-07-15 16:10:18.749157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.972 qpair failed and we were unable to recover it. 00:28:49.972 [2024-07-15 16:10:18.749384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.972 [2024-07-15 16:10:18.749418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.972 qpair failed and we were unable to recover it. 00:28:49.972 [2024-07-15 16:10:18.749672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.972 [2024-07-15 16:10:18.749685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.972 qpair failed and we were unable to recover it. 00:28:49.972 [2024-07-15 16:10:18.749940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.972 [2024-07-15 16:10:18.749953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.972 qpair failed and we were unable to recover it. 00:28:49.972 [2024-07-15 16:10:18.750070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.972 [2024-07-15 16:10:18.750084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.972 qpair failed and we were unable to recover it. 00:28:49.972 [2024-07-15 16:10:18.750342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.972 [2024-07-15 16:10:18.750375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.972 qpair failed and we were unable to recover it. 00:28:49.972 [2024-07-15 16:10:18.750609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.972 [2024-07-15 16:10:18.750642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.972 qpair failed and we were unable to recover it. 00:28:49.972 [2024-07-15 16:10:18.750857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.972 [2024-07-15 16:10:18.750889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.972 qpair failed and we were unable to recover it. 
00:28:49.972 [2024-07-15 16:10:18.751198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.972 [2024-07-15 16:10:18.751239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.972 qpair failed and we were unable to recover it. 00:28:49.972 [2024-07-15 16:10:18.751467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.972 [2024-07-15 16:10:18.751500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.972 qpair failed and we were unable to recover it. 00:28:49.972 [2024-07-15 16:10:18.751739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.972 [2024-07-15 16:10:18.751771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.972 qpair failed and we were unable to recover it. 00:28:49.972 [2024-07-15 16:10:18.752070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.972 [2024-07-15 16:10:18.752103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.972 qpair failed and we were unable to recover it. 00:28:49.972 [2024-07-15 16:10:18.752404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.972 [2024-07-15 16:10:18.752438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.972 qpair failed and we were unable to recover it. 00:28:49.972 [2024-07-15 16:10:18.752733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.972 [2024-07-15 16:10:18.752765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.972 qpair failed and we were unable to recover it. 00:28:49.972 [2024-07-15 16:10:18.753071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.972 [2024-07-15 16:10:18.753103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:49.972 qpair failed and we were unable to recover it. 00:28:49.972 [2024-07-15 16:10:18.753510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.972 [2024-07-15 16:10:18.753588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:49.972 qpair failed and we were unable to recover it. 00:28:49.972 [2024-07-15 16:10:18.753826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.972 [2024-07-15 16:10:18.753862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:49.972 qpair failed and we were unable to recover it. 00:28:49.972 [2024-07-15 16:10:18.754170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.972 [2024-07-15 16:10:18.754203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:49.972 qpair failed and we were unable to recover it. 
00:28:49.972 [2024-07-15 16:10:18.754450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.972 [2024-07-15 16:10:18.754484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:49.972 qpair failed and we were unable to recover it. 00:28:49.972 [2024-07-15 16:10:18.754702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.973 [2024-07-15 16:10:18.754720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:49.973 qpair failed and we were unable to recover it. 00:28:49.973 [2024-07-15 16:10:18.754971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.973 [2024-07-15 16:10:18.755003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:49.973 qpair failed and we were unable to recover it. 00:28:49.973 [2024-07-15 16:10:18.755283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.973 [2024-07-15 16:10:18.755317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:49.973 qpair failed and we were unable to recover it. 00:28:49.973 [2024-07-15 16:10:18.755532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.973 [2024-07-15 16:10:18.755549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:49.973 qpair failed and we were unable to recover it. 00:28:49.973 [2024-07-15 16:10:18.755749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.973 [2024-07-15 16:10:18.755781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:49.973 qpair failed and we were unable to recover it. 00:28:49.973 [2024-07-15 16:10:18.755944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.973 [2024-07-15 16:10:18.755976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:49.973 qpair failed and we were unable to recover it. 00:28:49.973 [2024-07-15 16:10:18.756311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.973 [2024-07-15 16:10:18.756345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:49.973 qpair failed and we were unable to recover it. 00:28:49.973 [2024-07-15 16:10:18.756636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.973 [2024-07-15 16:10:18.756669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:49.973 qpair failed and we were unable to recover it. 00:28:49.973 [2024-07-15 16:10:18.756972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.973 [2024-07-15 16:10:18.757005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:49.973 qpair failed and we were unable to recover it. 
00:28:49.973 [2024-07-15 16:10:18.757299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.973 [2024-07-15 16:10:18.757342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:49.973 qpair failed and we were unable to recover it. 00:28:49.973 [2024-07-15 16:10:18.757658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.973 [2024-07-15 16:10:18.757690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:49.973 qpair failed and we were unable to recover it. 00:28:49.973 [2024-07-15 16:10:18.757914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.973 [2024-07-15 16:10:18.757946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:49.973 qpair failed and we were unable to recover it. 00:28:49.973 [2024-07-15 16:10:18.758249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.973 [2024-07-15 16:10:18.758283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:49.973 qpair failed and we were unable to recover it. 00:28:49.973 [2024-07-15 16:10:18.758574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.973 [2024-07-15 16:10:18.758607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:49.973 qpair failed and we were unable to recover it. 00:28:49.973 [2024-07-15 16:10:18.758838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.973 [2024-07-15 16:10:18.758871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:49.973 qpair failed and we were unable to recover it. 00:28:49.973 [2024-07-15 16:10:18.759102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.973 [2024-07-15 16:10:18.759135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:49.973 qpair failed and we were unable to recover it. 00:28:49.973 [2024-07-15 16:10:18.759364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.973 [2024-07-15 16:10:18.759397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:49.973 qpair failed and we were unable to recover it. 00:28:49.973 [2024-07-15 16:10:18.759674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.973 [2024-07-15 16:10:18.759690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:49.973 qpair failed and we were unable to recover it. 00:28:49.973 [2024-07-15 16:10:18.759815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.973 [2024-07-15 16:10:18.759831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:49.973 qpair failed and we were unable to recover it. 
00:28:49.973 [2024-07-15 16:10:18.760025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.973 [2024-07-15 16:10:18.760057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:49.973 qpair failed and we were unable to recover it. 00:28:49.973 [2024-07-15 16:10:18.760339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.973 [2024-07-15 16:10:18.760373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:49.973 qpair failed and we were unable to recover it. 00:28:49.973 [2024-07-15 16:10:18.760590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.973 [2024-07-15 16:10:18.760623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:49.973 qpair failed and we were unable to recover it. 00:28:49.973 [2024-07-15 16:10:18.760854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.973 [2024-07-15 16:10:18.760887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:49.973 qpair failed and we were unable to recover it. 00:28:49.973 [2024-07-15 16:10:18.761196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.973 [2024-07-15 16:10:18.761239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:49.973 qpair failed and we were unable to recover it. 00:28:49.973 [2024-07-15 16:10:18.761464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.973 [2024-07-15 16:10:18.761497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:49.973 qpair failed and we were unable to recover it. 00:28:49.973 [2024-07-15 16:10:18.761844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.973 [2024-07-15 16:10:18.761877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:49.973 qpair failed and we were unable to recover it. 00:28:49.973 [2024-07-15 16:10:18.762108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.973 [2024-07-15 16:10:18.762141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:49.973 qpair failed and we were unable to recover it. 00:28:49.973 [2024-07-15 16:10:18.762449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.973 [2024-07-15 16:10:18.762482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:49.973 qpair failed and we were unable to recover it. 00:28:49.973 [2024-07-15 16:10:18.762773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.973 [2024-07-15 16:10:18.762805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:49.973 qpair failed and we were unable to recover it. 
00:28:49.973 [2024-07-15 16:10:18.763061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.973 [2024-07-15 16:10:18.763093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:49.973 qpair failed and we were unable to recover it. 00:28:49.973 [2024-07-15 16:10:18.763333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.973 [2024-07-15 16:10:18.763366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:49.973 qpair failed and we were unable to recover it. 00:28:49.973 [2024-07-15 16:10:18.763584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.973 [2024-07-15 16:10:18.763617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:49.973 qpair failed and we were unable to recover it. 00:28:49.973 [2024-07-15 16:10:18.763843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.973 [2024-07-15 16:10:18.763874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:49.973 qpair failed and we were unable to recover it. 00:28:49.973 [2024-07-15 16:10:18.764154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.973 [2024-07-15 16:10:18.764186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:49.973 qpair failed and we were unable to recover it. 00:28:49.973 [2024-07-15 16:10:18.764500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.973 [2024-07-15 16:10:18.764533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:49.973 qpair failed and we were unable to recover it. 00:28:49.973 [2024-07-15 16:10:18.764807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.973 [2024-07-15 16:10:18.764838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:49.973 qpair failed and we were unable to recover it. 00:28:49.973 [2024-07-15 16:10:18.765123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.973 [2024-07-15 16:10:18.765157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:49.973 qpair failed and we were unable to recover it. 00:28:49.973 [2024-07-15 16:10:18.765467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.973 [2024-07-15 16:10:18.765485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:49.973 qpair failed and we were unable to recover it. 00:28:49.973 [2024-07-15 16:10:18.765782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.973 [2024-07-15 16:10:18.765814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:49.973 qpair failed and we were unable to recover it. 
00:28:49.973 [2024-07-15 16:10:18.766118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.973 [2024-07-15 16:10:18.766151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:49.973 qpair failed and we were unable to recover it. 00:28:49.973 [2024-07-15 16:10:18.766437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.973 [2024-07-15 16:10:18.766455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:49.973 qpair failed and we were unable to recover it. 00:28:49.974 [2024-07-15 16:10:18.766590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.974 [2024-07-15 16:10:18.766607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:49.974 qpair failed and we were unable to recover it. 00:28:49.974 [2024-07-15 16:10:18.766880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.974 [2024-07-15 16:10:18.766912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:49.974 qpair failed and we were unable to recover it. 00:28:49.974 [2024-07-15 16:10:18.767201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.974 [2024-07-15 16:10:18.767244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:49.974 qpair failed and we were unable to recover it. 00:28:49.974 [2024-07-15 16:10:18.767495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.974 [2024-07-15 16:10:18.767527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:49.974 qpair failed and we were unable to recover it. 00:28:49.974 [2024-07-15 16:10:18.767822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.974 [2024-07-15 16:10:18.767854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:49.974 qpair failed and we were unable to recover it. 00:28:49.974 [2024-07-15 16:10:18.768075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.974 [2024-07-15 16:10:18.768107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:49.974 qpair failed and we were unable to recover it. 00:28:49.974 [2024-07-15 16:10:18.768388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.974 [2024-07-15 16:10:18.768423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:49.974 qpair failed and we were unable to recover it. 00:28:49.974 [2024-07-15 16:10:18.768702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.974 [2024-07-15 16:10:18.768735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:49.974 qpair failed and we were unable to recover it. 
00:28:49.974 [2024-07-15 16:10:18.769046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.974 [2024-07-15 16:10:18.769083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:49.974 qpair failed and we were unable to recover it. 00:28:49.974 [2024-07-15 16:10:18.769367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.974 [2024-07-15 16:10:18.769402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:49.974 qpair failed and we were unable to recover it. 00:28:49.974 [2024-07-15 16:10:18.769576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.974 [2024-07-15 16:10:18.769608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:49.974 qpair failed and we were unable to recover it. 00:28:49.974 [2024-07-15 16:10:18.769907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.974 [2024-07-15 16:10:18.769939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:49.974 qpair failed and we were unable to recover it. 00:28:49.974 [2024-07-15 16:10:18.770245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.974 [2024-07-15 16:10:18.770278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:49.974 qpair failed and we were unable to recover it. 00:28:49.974 [2024-07-15 16:10:18.770572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.974 [2024-07-15 16:10:18.770604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:49.974 qpair failed and we were unable to recover it. 00:28:49.974 [2024-07-15 16:10:18.770932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.974 [2024-07-15 16:10:18.770964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:49.974 qpair failed and we were unable to recover it. 00:28:49.974 [2024-07-15 16:10:18.771215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.974 [2024-07-15 16:10:18.771258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:49.974 qpair failed and we were unable to recover it. 00:28:49.974 [2024-07-15 16:10:18.771505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.974 [2024-07-15 16:10:18.771537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:49.974 qpair failed and we were unable to recover it. 00:28:49.974 [2024-07-15 16:10:18.771838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.974 [2024-07-15 16:10:18.771870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:49.974 qpair failed and we were unable to recover it. 
00:28:49.974 [2024-07-15 16:10:18.772174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.974 [2024-07-15 16:10:18.772207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:49.974 qpair failed and we were unable to recover it. 00:28:49.974 [2024-07-15 16:10:18.772474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.974 [2024-07-15 16:10:18.772507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:49.974 qpair failed and we were unable to recover it. 00:28:49.974 [2024-07-15 16:10:18.772839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.974 [2024-07-15 16:10:18.772871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:49.974 qpair failed and we were unable to recover it. 00:28:49.974 [2024-07-15 16:10:18.773175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.974 [2024-07-15 16:10:18.773209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:49.974 qpair failed and we were unable to recover it. 00:28:49.974 [2024-07-15 16:10:18.773497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.974 [2024-07-15 16:10:18.773531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:49.974 qpair failed and we were unable to recover it. 00:28:49.974 [2024-07-15 16:10:18.773836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.974 [2024-07-15 16:10:18.773869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:49.974 qpair failed and we were unable to recover it. 00:28:49.974 [2024-07-15 16:10:18.774167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.974 [2024-07-15 16:10:18.774200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:49.974 qpair failed and we were unable to recover it. 00:28:49.974 [2024-07-15 16:10:18.774500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.974 [2024-07-15 16:10:18.774533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:49.974 qpair failed and we were unable to recover it. 00:28:49.974 [2024-07-15 16:10:18.774681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.974 [2024-07-15 16:10:18.774714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:49.974 qpair failed and we were unable to recover it. 00:28:49.974 [2024-07-15 16:10:18.774923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.974 [2024-07-15 16:10:18.774956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:49.974 qpair failed and we were unable to recover it. 
00:28:49.974 [2024-07-15 16:10:18.775214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.974 [2024-07-15 16:10:18.775257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:49.974 qpair failed and we were unable to recover it. 00:28:49.974 [2024-07-15 16:10:18.775558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.974 [2024-07-15 16:10:18.775590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:49.974 qpair failed and we were unable to recover it. 00:28:49.974 [2024-07-15 16:10:18.775878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.974 [2024-07-15 16:10:18.775910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:49.974 qpair failed and we were unable to recover it. 00:28:49.974 [2024-07-15 16:10:18.776164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.974 [2024-07-15 16:10:18.776197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:49.974 qpair failed and we were unable to recover it. 00:28:49.974 [2024-07-15 16:10:18.776425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.974 [2024-07-15 16:10:18.776443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:49.974 qpair failed and we were unable to recover it. 00:28:49.974 [2024-07-15 16:10:18.776629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.974 [2024-07-15 16:10:18.776662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:49.974 qpair failed and we were unable to recover it. 00:28:49.974 [2024-07-15 16:10:18.776941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.974 [2024-07-15 16:10:18.776973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:49.974 qpair failed and we were unable to recover it. 00:28:49.974 [2024-07-15 16:10:18.777266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.974 [2024-07-15 16:10:18.777302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:49.974 qpair failed and we were unable to recover it. 00:28:49.974 [2024-07-15 16:10:18.777608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.974 [2024-07-15 16:10:18.777641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:49.974 qpair failed and we were unable to recover it. 00:28:49.974 [2024-07-15 16:10:18.777941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.974 [2024-07-15 16:10:18.777974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:49.974 qpair failed and we were unable to recover it. 
00:28:49.974 [2024-07-15 16:10:18.778274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.974 [2024-07-15 16:10:18.778308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:49.974 qpair failed and we were unable to recover it. 00:28:49.974 [2024-07-15 16:10:18.778608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.974 [2024-07-15 16:10:18.778640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:49.974 qpair failed and we were unable to recover it. 00:28:49.974 [2024-07-15 16:10:18.778810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.975 [2024-07-15 16:10:18.778842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:49.975 qpair failed and we were unable to recover it. 00:28:49.975 [2024-07-15 16:10:18.779092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.975 [2024-07-15 16:10:18.779124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:49.975 qpair failed and we were unable to recover it. 00:28:49.975 [2024-07-15 16:10:18.779413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.975 [2024-07-15 16:10:18.779448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:49.975 qpair failed and we were unable to recover it. 00:28:49.975 [2024-07-15 16:10:18.779751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.975 [2024-07-15 16:10:18.779768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:49.975 qpair failed and we were unable to recover it. 00:28:49.975 [2024-07-15 16:10:18.780052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.975 [2024-07-15 16:10:18.780069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:49.975 qpair failed and we were unable to recover it. 00:28:49.975 [2024-07-15 16:10:18.780285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.975 [2024-07-15 16:10:18.780303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:49.975 qpair failed and we were unable to recover it. 00:28:49.975 [2024-07-15 16:10:18.780500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.975 [2024-07-15 16:10:18.780517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:49.975 qpair failed and we were unable to recover it. 00:28:49.975 [2024-07-15 16:10:18.780801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.975 [2024-07-15 16:10:18.780819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:49.975 qpair failed and we were unable to recover it. 
00:28:49.980 [2024-07-15 16:10:18.836944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.980 [2024-07-15 16:10:18.836976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:49.980 qpair failed and we were unable to recover it. 00:28:49.980 [2024-07-15 16:10:18.837186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.980 [2024-07-15 16:10:18.837218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:49.980 qpair failed and we were unable to recover it. 00:28:49.980 [2024-07-15 16:10:18.837510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.980 [2024-07-15 16:10:18.837542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:49.980 qpair failed and we were unable to recover it. 00:28:49.980 [2024-07-15 16:10:18.837829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.980 [2024-07-15 16:10:18.837862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:49.980 qpair failed and we were unable to recover it. 00:28:49.980 [2024-07-15 16:10:18.838171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.980 [2024-07-15 16:10:18.838204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:49.980 qpair failed and we were unable to recover it. 00:28:49.980 [2024-07-15 16:10:18.838450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.980 [2024-07-15 16:10:18.838482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:49.980 qpair failed and we were unable to recover it. 00:28:49.980 [2024-07-15 16:10:18.838736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.980 [2024-07-15 16:10:18.838770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:49.980 qpair failed and we were unable to recover it. 00:28:49.980 [2024-07-15 16:10:18.839031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.980 [2024-07-15 16:10:18.839064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:49.980 qpair failed and we were unable to recover it. 00:28:49.980 [2024-07-15 16:10:18.839210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.980 [2024-07-15 16:10:18.839267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:49.980 qpair failed and we were unable to recover it. 00:28:49.980 [2024-07-15 16:10:18.839483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.980 [2024-07-15 16:10:18.839515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:49.980 qpair failed and we were unable to recover it. 
00:28:49.980 [2024-07-15 16:10:18.839735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.980 [2024-07-15 16:10:18.839767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:49.980 qpair failed and we were unable to recover it. 00:28:49.980 [2024-07-15 16:10:18.840060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.980 [2024-07-15 16:10:18.840093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:49.980 qpair failed and we were unable to recover it. 00:28:49.980 [2024-07-15 16:10:18.840324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.980 [2024-07-15 16:10:18.840359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:49.980 qpair failed and we were unable to recover it. 00:28:49.980 [2024-07-15 16:10:18.840638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.980 [2024-07-15 16:10:18.840655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:49.980 qpair failed and we were unable to recover it. 00:28:49.980 [2024-07-15 16:10:18.840871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.980 [2024-07-15 16:10:18.840888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:49.980 qpair failed and we were unable to recover it. 00:28:49.980 [2024-07-15 16:10:18.841101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.980 [2024-07-15 16:10:18.841117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:49.980 qpair failed and we were unable to recover it. 00:28:49.980 [2024-07-15 16:10:18.841401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.980 [2024-07-15 16:10:18.841435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:49.980 qpair failed and we were unable to recover it. 00:28:49.980 [2024-07-15 16:10:18.841685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.980 [2024-07-15 16:10:18.841717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:49.980 qpair failed and we were unable to recover it. 00:28:49.980 [2024-07-15 16:10:18.841953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.980 [2024-07-15 16:10:18.841986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:49.980 qpair failed and we were unable to recover it. 00:28:49.980 [2024-07-15 16:10:18.842290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.980 [2024-07-15 16:10:18.842324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:49.980 qpair failed and we were unable to recover it. 
00:28:49.980 [2024-07-15 16:10:18.842549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.980 [2024-07-15 16:10:18.842580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:49.980 qpair failed and we were unable to recover it. 00:28:49.980 [2024-07-15 16:10:18.842861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.980 [2024-07-15 16:10:18.842895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:49.980 qpair failed and we were unable to recover it. 00:28:49.980 [2024-07-15 16:10:18.843203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.980 [2024-07-15 16:10:18.843259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:49.980 qpair failed and we were unable to recover it. 00:28:49.980 [2024-07-15 16:10:18.843491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.981 [2024-07-15 16:10:18.843524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:49.981 qpair failed and we were unable to recover it. 00:28:49.981 [2024-07-15 16:10:18.843803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.981 [2024-07-15 16:10:18.843836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:49.981 qpair failed and we were unable to recover it. 00:28:49.981 [2024-07-15 16:10:18.844088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.981 [2024-07-15 16:10:18.844122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:49.981 qpair failed and we were unable to recover it. 00:28:49.981 [2024-07-15 16:10:18.844356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.981 [2024-07-15 16:10:18.844390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:49.981 qpair failed and we were unable to recover it. 00:28:49.981 [2024-07-15 16:10:18.844636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.981 [2024-07-15 16:10:18.844656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:49.981 qpair failed and we were unable to recover it. 00:28:49.981 [2024-07-15 16:10:18.844854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.981 [2024-07-15 16:10:18.844872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:49.981 qpair failed and we were unable to recover it. 00:28:49.981 [2024-07-15 16:10:18.845140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.981 [2024-07-15 16:10:18.845177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:49.981 qpair failed and we were unable to recover it. 
00:28:49.981 [2024-07-15 16:10:18.845337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.981 [2024-07-15 16:10:18.845370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:49.981 qpair failed and we were unable to recover it. 00:28:49.981 [2024-07-15 16:10:18.845605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.981 [2024-07-15 16:10:18.845637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:49.981 qpair failed and we were unable to recover it. 00:28:49.981 [2024-07-15 16:10:18.845826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.981 [2024-07-15 16:10:18.845843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:49.981 qpair failed and we were unable to recover it. 00:28:49.981 [2024-07-15 16:10:18.846129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.981 [2024-07-15 16:10:18.846147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:49.981 qpair failed and we were unable to recover it. 00:28:49.981 [2024-07-15 16:10:18.846350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.981 [2024-07-15 16:10:18.846367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:49.981 qpair failed and we were unable to recover it. 00:28:49.981 [2024-07-15 16:10:18.846509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.981 [2024-07-15 16:10:18.846541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:49.981 qpair failed and we were unable to recover it. 00:28:49.981 [2024-07-15 16:10:18.846841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.981 [2024-07-15 16:10:18.846873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:49.981 qpair failed and we were unable to recover it. 00:28:49.981 [2024-07-15 16:10:18.847105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.981 [2024-07-15 16:10:18.847137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:49.981 qpair failed and we were unable to recover it. 00:28:49.981 [2024-07-15 16:10:18.847394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.981 [2024-07-15 16:10:18.847426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:49.981 qpair failed and we were unable to recover it. 00:28:49.981 [2024-07-15 16:10:18.847704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.981 [2024-07-15 16:10:18.847736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:49.981 qpair failed and we were unable to recover it. 
00:28:49.981 [2024-07-15 16:10:18.847996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.981 [2024-07-15 16:10:18.848028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:49.981 qpair failed and we were unable to recover it. 00:28:49.981 [2024-07-15 16:10:18.848282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.981 [2024-07-15 16:10:18.848317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:49.981 qpair failed and we were unable to recover it. 00:28:49.981 [2024-07-15 16:10:18.848544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.981 [2024-07-15 16:10:18.848584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:49.981 qpair failed and we were unable to recover it. 00:28:49.981 [2024-07-15 16:10:18.848791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.981 [2024-07-15 16:10:18.848808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:49.981 qpair failed and we were unable to recover it. 00:28:49.981 [2024-07-15 16:10:18.849107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.981 [2024-07-15 16:10:18.849140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:49.981 qpair failed and we were unable to recover it. 00:28:49.981 [2024-07-15 16:10:18.849367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.981 [2024-07-15 16:10:18.849400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:49.981 qpair failed and we were unable to recover it. 00:28:49.981 [2024-07-15 16:10:18.849709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.981 [2024-07-15 16:10:18.849741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:49.981 qpair failed and we were unable to recover it. 00:28:49.981 [2024-07-15 16:10:18.850030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.981 [2024-07-15 16:10:18.850062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:49.981 qpair failed and we were unable to recover it. 00:28:49.981 [2024-07-15 16:10:18.850276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.981 [2024-07-15 16:10:18.850309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:49.981 qpair failed and we were unable to recover it. 00:28:49.981 [2024-07-15 16:10:18.850518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.981 [2024-07-15 16:10:18.850551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:49.981 qpair failed and we were unable to recover it. 
00:28:49.981 [2024-07-15 16:10:18.850852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.981 [2024-07-15 16:10:18.850884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:49.981 qpair failed and we were unable to recover it. 00:28:49.981 [2024-07-15 16:10:18.851097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.981 [2024-07-15 16:10:18.851130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:49.981 qpair failed and we were unable to recover it. 00:28:49.981 [2024-07-15 16:10:18.851396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.981 [2024-07-15 16:10:18.851429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:49.981 qpair failed and we were unable to recover it. 00:28:49.981 [2024-07-15 16:10:18.851635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.981 [2024-07-15 16:10:18.851667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:49.981 qpair failed and we were unable to recover it. 00:28:49.981 [2024-07-15 16:10:18.851984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.981 [2024-07-15 16:10:18.852017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:49.981 qpair failed and we were unable to recover it. 00:28:49.981 [2024-07-15 16:10:18.852320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.981 [2024-07-15 16:10:18.852354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:49.981 qpair failed and we were unable to recover it. 00:28:49.981 [2024-07-15 16:10:18.852588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.981 [2024-07-15 16:10:18.852622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:49.981 qpair failed and we were unable to recover it. 00:28:49.981 [2024-07-15 16:10:18.852843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.981 [2024-07-15 16:10:18.852876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:49.981 qpair failed and we were unable to recover it. 00:28:49.981 [2024-07-15 16:10:18.853188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.981 [2024-07-15 16:10:18.853221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:49.981 qpair failed and we were unable to recover it. 00:28:49.981 [2024-07-15 16:10:18.853504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.981 [2024-07-15 16:10:18.853537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:49.981 qpair failed and we were unable to recover it. 
00:28:49.981 [2024-07-15 16:10:18.853683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.981 [2024-07-15 16:10:18.853715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:49.981 qpair failed and we were unable to recover it. 00:28:49.982 [2024-07-15 16:10:18.853914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.982 [2024-07-15 16:10:18.853931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:49.982 qpair failed and we were unable to recover it. 00:28:49.982 [2024-07-15 16:10:18.854197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.982 [2024-07-15 16:10:18.854213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:49.982 qpair failed and we were unable to recover it. 00:28:49.982 [2024-07-15 16:10:18.854437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.982 [2024-07-15 16:10:18.854470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:49.982 qpair failed and we were unable to recover it. 00:28:49.982 [2024-07-15 16:10:18.854709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.982 [2024-07-15 16:10:18.854741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:49.982 qpair failed and we were unable to recover it. 00:28:49.982 [2024-07-15 16:10:18.855003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.982 [2024-07-15 16:10:18.855035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:49.982 qpair failed and we were unable to recover it. 00:28:49.982 [2024-07-15 16:10:18.855368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.982 [2024-07-15 16:10:18.855401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:49.982 qpair failed and we were unable to recover it. 00:28:49.982 [2024-07-15 16:10:18.855658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.982 [2024-07-15 16:10:18.855700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:49.982 qpair failed and we were unable to recover it. 00:28:49.982 [2024-07-15 16:10:18.856020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.982 [2024-07-15 16:10:18.856053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:49.982 qpair failed and we were unable to recover it. 00:28:49.982 [2024-07-15 16:10:18.856272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.982 [2024-07-15 16:10:18.856307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:49.982 qpair failed and we were unable to recover it. 
00:28:49.982 [2024-07-15 16:10:18.856614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.982 [2024-07-15 16:10:18.856647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:49.982 qpair failed and we were unable to recover it. 00:28:49.982 [2024-07-15 16:10:18.856954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.982 [2024-07-15 16:10:18.856985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:49.982 qpair failed and we were unable to recover it. 00:28:49.982 [2024-07-15 16:10:18.857298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.982 [2024-07-15 16:10:18.857332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:49.982 qpair failed and we were unable to recover it. 00:28:49.982 [2024-07-15 16:10:18.857614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.982 [2024-07-15 16:10:18.857632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:49.982 qpair failed and we were unable to recover it. 00:28:49.982 [2024-07-15 16:10:18.857832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.982 [2024-07-15 16:10:18.857848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:49.982 qpair failed and we were unable to recover it. 00:28:49.982 [2024-07-15 16:10:18.858048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.982 [2024-07-15 16:10:18.858080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:49.982 qpair failed and we were unable to recover it. 00:28:49.982 [2024-07-15 16:10:18.858385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.982 [2024-07-15 16:10:18.858419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:49.982 qpair failed and we were unable to recover it. 00:28:49.982 [2024-07-15 16:10:18.858673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.982 [2024-07-15 16:10:18.858705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:49.982 qpair failed and we were unable to recover it. 00:28:49.982 [2024-07-15 16:10:18.858984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.982 [2024-07-15 16:10:18.859000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:49.982 qpair failed and we were unable to recover it. 00:28:49.982 [2024-07-15 16:10:18.859198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.982 [2024-07-15 16:10:18.859215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:49.982 qpair failed and we were unable to recover it. 
00:28:49.982 [2024-07-15 16:10:18.859541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.982 [2024-07-15 16:10:18.859575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:49.982 qpair failed and we were unable to recover it. 00:28:49.982 [2024-07-15 16:10:18.859887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.982 [2024-07-15 16:10:18.859920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:49.982 qpair failed and we were unable to recover it. 00:28:49.982 [2024-07-15 16:10:18.860248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.982 [2024-07-15 16:10:18.860282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:49.982 qpair failed and we were unable to recover it. 00:28:49.982 [2024-07-15 16:10:18.860466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.982 [2024-07-15 16:10:18.860483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:49.982 qpair failed and we were unable to recover it. 00:28:49.982 [2024-07-15 16:10:18.860682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.982 [2024-07-15 16:10:18.860715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:49.982 qpair failed and we were unable to recover it. 00:28:49.982 [2024-07-15 16:10:18.860961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.982 [2024-07-15 16:10:18.860994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:49.982 qpair failed and we were unable to recover it. 00:28:49.982 [2024-07-15 16:10:18.861200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.982 [2024-07-15 16:10:18.861245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:49.982 qpair failed and we were unable to recover it. 00:28:49.982 [2024-07-15 16:10:18.861476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.982 [2024-07-15 16:10:18.861509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:49.982 qpair failed and we were unable to recover it. 00:28:49.982 [2024-07-15 16:10:18.861751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.982 [2024-07-15 16:10:18.861783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:49.982 qpair failed and we were unable to recover it. 00:28:49.982 [2024-07-15 16:10:18.862059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.982 [2024-07-15 16:10:18.862076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:49.982 qpair failed and we were unable to recover it. 
00:28:49.982 [2024-07-15 16:10:18.862280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.982 [2024-07-15 16:10:18.862299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:49.982 qpair failed and we were unable to recover it. 00:28:49.982 [2024-07-15 16:10:18.862583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.982 [2024-07-15 16:10:18.862615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:49.982 qpair failed and we were unable to recover it. 00:28:49.982 [2024-07-15 16:10:18.862920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.982 [2024-07-15 16:10:18.862952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:49.982 qpair failed and we were unable to recover it. 00:28:49.982 [2024-07-15 16:10:18.863199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.982 [2024-07-15 16:10:18.863243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:49.982 qpair failed and we were unable to recover it. 00:28:49.982 [2024-07-15 16:10:18.863463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.982 [2024-07-15 16:10:18.863496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:49.982 qpair failed and we were unable to recover it. 00:28:49.982 [2024-07-15 16:10:18.863778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.982 [2024-07-15 16:10:18.863811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:49.982 qpair failed and we were unable to recover it. 00:28:49.982 [2024-07-15 16:10:18.864122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.982 [2024-07-15 16:10:18.864140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:49.982 qpair failed and we were unable to recover it. 00:28:49.982 [2024-07-15 16:10:18.864329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.982 [2024-07-15 16:10:18.864347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:49.982 qpair failed and we were unable to recover it. 00:28:49.982 [2024-07-15 16:10:18.864618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.982 [2024-07-15 16:10:18.864634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:49.982 qpair failed and we were unable to recover it. 00:28:49.982 [2024-07-15 16:10:18.864930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.982 [2024-07-15 16:10:18.864962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:49.982 qpair failed and we were unable to recover it. 
00:28:49.982 [2024-07-15 16:10:18.865278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.982 [2024-07-15 16:10:18.865311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:49.982 qpair failed and we were unable to recover it. 00:28:49.982 [2024-07-15 16:10:18.865540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.982 [2024-07-15 16:10:18.865573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:49.982 qpair failed and we were unable to recover it. 00:28:49.983 [2024-07-15 16:10:18.865892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.983 [2024-07-15 16:10:18.865924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:49.983 qpair failed and we were unable to recover it. 00:28:49.983 [2024-07-15 16:10:18.866209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.983 [2024-07-15 16:10:18.866266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:49.983 qpair failed and we were unable to recover it. 00:28:49.983 [2024-07-15 16:10:18.866503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.983 [2024-07-15 16:10:18.866536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:49.983 qpair failed and we were unable to recover it. 00:28:49.983 [2024-07-15 16:10:18.866753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.983 [2024-07-15 16:10:18.866786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:49.983 qpair failed and we were unable to recover it. 00:28:49.983 [2024-07-15 16:10:18.867084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.983 [2024-07-15 16:10:18.867101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:49.983 qpair failed and we were unable to recover it. 00:28:49.983 [2024-07-15 16:10:18.867286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.983 [2024-07-15 16:10:18.867308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:49.983 qpair failed and we were unable to recover it. 00:28:49.983 [2024-07-15 16:10:18.867523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.983 [2024-07-15 16:10:18.867539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:49.983 qpair failed and we were unable to recover it. 00:28:49.983 [2024-07-15 16:10:18.867737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.983 [2024-07-15 16:10:18.867754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:49.983 qpair failed and we were unable to recover it. 
00:28:49.983 [2024-07-15 16:10:18.867935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.983 [2024-07-15 16:10:18.867953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:49.983 qpair failed and we were unable to recover it. 00:28:49.983 [2024-07-15 16:10:18.868148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.983 [2024-07-15 16:10:18.868164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:49.983 qpair failed and we were unable to recover it. 00:28:49.983 [2024-07-15 16:10:18.868374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.983 [2024-07-15 16:10:18.868407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:49.983 qpair failed and we were unable to recover it. 00:28:49.983 [2024-07-15 16:10:18.868576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.983 [2024-07-15 16:10:18.868593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:49.983 qpair failed and we were unable to recover it. 00:28:49.983 [2024-07-15 16:10:18.868861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.983 [2024-07-15 16:10:18.868878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:49.983 qpair failed and we were unable to recover it. 00:28:49.983 [2024-07-15 16:10:18.869153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.983 [2024-07-15 16:10:18.869171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:49.983 qpair failed and we were unable to recover it. 00:28:49.983 [2024-07-15 16:10:18.869438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.983 [2024-07-15 16:10:18.869455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:49.983 qpair failed and we were unable to recover it. 00:28:49.983 [2024-07-15 16:10:18.869561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.983 [2024-07-15 16:10:18.869578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:49.983 qpair failed and we were unable to recover it. 00:28:49.983 [2024-07-15 16:10:18.869751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.983 [2024-07-15 16:10:18.869767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:49.983 qpair failed and we were unable to recover it. 00:28:49.983 [2024-07-15 16:10:18.869941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.983 [2024-07-15 16:10:18.869957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:49.983 qpair failed and we were unable to recover it. 
00:28:49.983 [2024-07-15 16:10:18.870241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.983 [2024-07-15 16:10:18.870275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:49.983 qpair failed and we were unable to recover it. 00:28:49.983 [2024-07-15 16:10:18.870455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.983 [2024-07-15 16:10:18.870488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:49.983 qpair failed and we were unable to recover it. 00:28:49.983 [2024-07-15 16:10:18.870768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.983 [2024-07-15 16:10:18.870801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:49.983 qpair failed and we were unable to recover it. 00:28:49.983 [2024-07-15 16:10:18.871075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.983 [2024-07-15 16:10:18.871092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:49.983 qpair failed and we were unable to recover it. 00:28:49.983 [2024-07-15 16:10:18.871271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.983 [2024-07-15 16:10:18.871290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:49.983 qpair failed and we were unable to recover it. 00:28:49.983 [2024-07-15 16:10:18.871478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.983 [2024-07-15 16:10:18.871494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:49.983 qpair failed and we were unable to recover it. 00:28:49.983 [2024-07-15 16:10:18.871748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.983 [2024-07-15 16:10:18.871765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.259 qpair failed and we were unable to recover it. 00:28:50.259 [2024-07-15 16:10:18.872021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.259 [2024-07-15 16:10:18.872038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.259 qpair failed and we were unable to recover it. 00:28:50.259 [2024-07-15 16:10:18.872247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.259 [2024-07-15 16:10:18.872265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.259 qpair failed and we were unable to recover it. 00:28:50.259 [2024-07-15 16:10:18.872555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.259 [2024-07-15 16:10:18.872572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.259 qpair failed and we were unable to recover it. 
00:28:50.259 [2024-07-15 16:10:18.872715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.259 [2024-07-15 16:10:18.872732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.259 qpair failed and we were unable to recover it. 00:28:50.259 [2024-07-15 16:10:18.873005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.259 [2024-07-15 16:10:18.873022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.259 qpair failed and we were unable to recover it. 00:28:50.259 [2024-07-15 16:10:18.873293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.259 [2024-07-15 16:10:18.873310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.259 qpair failed and we were unable to recover it. 00:28:50.259 [2024-07-15 16:10:18.873554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.259 [2024-07-15 16:10:18.873572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.259 qpair failed and we were unable to recover it. 00:28:50.259 [2024-07-15 16:10:18.873822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.259 [2024-07-15 16:10:18.873839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.259 qpair failed and we were unable to recover it. 00:28:50.259 [2024-07-15 16:10:18.874099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.259 [2024-07-15 16:10:18.874116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.259 qpair failed and we were unable to recover it. 00:28:50.259 [2024-07-15 16:10:18.874253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.259 [2024-07-15 16:10:18.874271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.259 qpair failed and we were unable to recover it. 00:28:50.259 [2024-07-15 16:10:18.874447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.259 [2024-07-15 16:10:18.874464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.259 qpair failed and we were unable to recover it. 00:28:50.259 [2024-07-15 16:10:18.874590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.259 [2024-07-15 16:10:18.874607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.259 qpair failed and we were unable to recover it. 00:28:50.259 [2024-07-15 16:10:18.874795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.259 [2024-07-15 16:10:18.874812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.259 qpair failed and we were unable to recover it. 
00:28:50.259 [2024-07-15 16:10:18.875005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.259 [2024-07-15 16:10:18.875022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.259 qpair failed and we were unable to recover it. 00:28:50.259 [2024-07-15 16:10:18.875291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.259 [2024-07-15 16:10:18.875309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.259 qpair failed and we were unable to recover it. 00:28:50.259 [2024-07-15 16:10:18.875487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.259 [2024-07-15 16:10:18.875504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.259 qpair failed and we were unable to recover it. 00:28:50.259 [2024-07-15 16:10:18.875717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.259 [2024-07-15 16:10:18.875735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.259 qpair failed and we were unable to recover it. 00:28:50.259 [2024-07-15 16:10:18.875932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.259 [2024-07-15 16:10:18.875970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.259 qpair failed and we were unable to recover it. 00:28:50.259 [2024-07-15 16:10:18.876199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.259 [2024-07-15 16:10:18.876254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.259 qpair failed and we were unable to recover it. 00:28:50.259 [2024-07-15 16:10:18.876539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.259 [2024-07-15 16:10:18.876572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.259 qpair failed and we were unable to recover it. 00:28:50.259 [2024-07-15 16:10:18.876880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.259 [2024-07-15 16:10:18.876917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.259 qpair failed and we were unable to recover it. 00:28:50.259 [2024-07-15 16:10:18.877200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.259 [2024-07-15 16:10:18.877243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.259 qpair failed and we were unable to recover it. 00:28:50.259 [2024-07-15 16:10:18.877567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.259 [2024-07-15 16:10:18.877599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.259 qpair failed and we were unable to recover it. 
00:28:50.259 [2024-07-15 16:10:18.877926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.259 [2024-07-15 16:10:18.877958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.259 qpair failed and we were unable to recover it. 00:28:50.259 [2024-07-15 16:10:18.878281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.259 [2024-07-15 16:10:18.878314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.259 qpair failed and we were unable to recover it. 00:28:50.260 [2024-07-15 16:10:18.878597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.260 [2024-07-15 16:10:18.878630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.260 qpair failed and we were unable to recover it. 00:28:50.260 [2024-07-15 16:10:18.878909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.260 [2024-07-15 16:10:18.878942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.260 qpair failed and we were unable to recover it. 00:28:50.260 [2024-07-15 16:10:18.879181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.260 [2024-07-15 16:10:18.879213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.260 qpair failed and we were unable to recover it. 00:28:50.260 [2024-07-15 16:10:18.879395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.260 [2024-07-15 16:10:18.879428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.260 qpair failed and we were unable to recover it. 00:28:50.260 [2024-07-15 16:10:18.879710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.260 [2024-07-15 16:10:18.879741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.260 qpair failed and we were unable to recover it. 00:28:50.260 [2024-07-15 16:10:18.880053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.260 [2024-07-15 16:10:18.880085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.260 qpair failed and we were unable to recover it. 00:28:50.260 [2024-07-15 16:10:18.880371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.260 [2024-07-15 16:10:18.880405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.260 qpair failed and we were unable to recover it. 00:28:50.260 [2024-07-15 16:10:18.880720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.260 [2024-07-15 16:10:18.880752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.260 qpair failed and we were unable to recover it. 
00:28:50.260 [2024-07-15 16:10:18.881039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.260 [2024-07-15 16:10:18.881072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.260 qpair failed and we were unable to recover it. 00:28:50.260 [2024-07-15 16:10:18.881239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.260 [2024-07-15 16:10:18.881273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.260 qpair failed and we were unable to recover it. 00:28:50.260 [2024-07-15 16:10:18.881487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.260 [2024-07-15 16:10:18.881520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.260 qpair failed and we were unable to recover it. 00:28:50.260 [2024-07-15 16:10:18.881746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.260 [2024-07-15 16:10:18.881779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.260 qpair failed and we were unable to recover it. 00:28:50.260 [2024-07-15 16:10:18.881995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.260 [2024-07-15 16:10:18.882027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.260 qpair failed and we were unable to recover it. 00:28:50.260 [2024-07-15 16:10:18.882309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.260 [2024-07-15 16:10:18.882343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.260 qpair failed and we were unable to recover it. 00:28:50.260 [2024-07-15 16:10:18.882622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.260 [2024-07-15 16:10:18.882654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.260 qpair failed and we were unable to recover it. 00:28:50.260 [2024-07-15 16:10:18.882816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.260 [2024-07-15 16:10:18.882848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.260 qpair failed and we were unable to recover it. 00:28:50.260 [2024-07-15 16:10:18.883099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.260 [2024-07-15 16:10:18.883132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.260 qpair failed and we were unable to recover it. 00:28:50.260 [2024-07-15 16:10:18.883412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.260 [2024-07-15 16:10:18.883446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.260 qpair failed and we were unable to recover it. 
00:28:50.260 [2024-07-15 16:10:18.883755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.260 [2024-07-15 16:10:18.883787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.260 qpair failed and we were unable to recover it. 00:28:50.260 [2024-07-15 16:10:18.884071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.260 [2024-07-15 16:10:18.884103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.260 qpair failed and we were unable to recover it. 00:28:50.260 [2024-07-15 16:10:18.884408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.260 [2024-07-15 16:10:18.884442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.260 qpair failed and we were unable to recover it. 00:28:50.260 [2024-07-15 16:10:18.884736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.260 [2024-07-15 16:10:18.884767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.260 qpair failed and we were unable to recover it. 00:28:50.260 [2024-07-15 16:10:18.885059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.260 [2024-07-15 16:10:18.885138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.260 qpair failed and we were unable to recover it. 00:28:50.260 [2024-07-15 16:10:18.885342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.260 [2024-07-15 16:10:18.885381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.260 qpair failed and we were unable to recover it. 00:28:50.260 [2024-07-15 16:10:18.885668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.260 [2024-07-15 16:10:18.885702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.260 qpair failed and we were unable to recover it. 00:28:50.260 [2024-07-15 16:10:18.886034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.260 [2024-07-15 16:10:18.886066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.260 qpair failed and we were unable to recover it. 00:28:50.260 [2024-07-15 16:10:18.886359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.260 [2024-07-15 16:10:18.886392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.260 qpair failed and we were unable to recover it. 00:28:50.260 [2024-07-15 16:10:18.886629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.260 [2024-07-15 16:10:18.886661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.260 qpair failed and we were unable to recover it. 
00:28:50.260 [2024-07-15 16:10:18.886915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.260 [2024-07-15 16:10:18.886947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.260 qpair failed and we were unable to recover it. 00:28:50.260 [2024-07-15 16:10:18.887220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.260 [2024-07-15 16:10:18.887263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.260 qpair failed and we were unable to recover it. 00:28:50.260 [2024-07-15 16:10:18.887567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.260 [2024-07-15 16:10:18.887599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.260 qpair failed and we were unable to recover it. 00:28:50.260 [2024-07-15 16:10:18.887866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.260 [2024-07-15 16:10:18.887897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.260 qpair failed and we were unable to recover it. 00:28:50.260 [2024-07-15 16:10:18.888175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.260 [2024-07-15 16:10:18.888207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.260 qpair failed and we were unable to recover it. 00:28:50.260 [2024-07-15 16:10:18.888395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.260 [2024-07-15 16:10:18.888429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.260 qpair failed and we were unable to recover it. 00:28:50.260 [2024-07-15 16:10:18.888588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.260 [2024-07-15 16:10:18.888621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.260 qpair failed and we were unable to recover it. 00:28:50.260 [2024-07-15 16:10:18.888790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.260 [2024-07-15 16:10:18.888822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.260 qpair failed and we were unable to recover it. 00:28:50.260 [2024-07-15 16:10:18.889003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.260 [2024-07-15 16:10:18.889035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.260 qpair failed and we were unable to recover it. 00:28:50.260 [2024-07-15 16:10:18.889392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.260 [2024-07-15 16:10:18.889426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.260 qpair failed and we were unable to recover it. 
00:28:50.260 [2024-07-15 16:10:18.889605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.260 [2024-07-15 16:10:18.889637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.260 qpair failed and we were unable to recover it. 00:28:50.260 [2024-07-15 16:10:18.889943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.260 [2024-07-15 16:10:18.889975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.260 qpair failed and we were unable to recover it. 00:28:50.260 [2024-07-15 16:10:18.890143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.260 [2024-07-15 16:10:18.890176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.260 qpair failed and we were unable to recover it. 00:28:50.261 [2024-07-15 16:10:18.890486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.261 [2024-07-15 16:10:18.890519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.261 qpair failed and we were unable to recover it. 00:28:50.261 [2024-07-15 16:10:18.890846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.261 [2024-07-15 16:10:18.890878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.261 qpair failed and we were unable to recover it. 00:28:50.261 [2024-07-15 16:10:18.891179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.261 [2024-07-15 16:10:18.891212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.261 qpair failed and we were unable to recover it. 00:28:50.261 [2024-07-15 16:10:18.891530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.261 [2024-07-15 16:10:18.891562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.261 qpair failed and we were unable to recover it. 00:28:50.261 [2024-07-15 16:10:18.891827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.261 [2024-07-15 16:10:18.891860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.261 qpair failed and we were unable to recover it. 00:28:50.261 [2024-07-15 16:10:18.892186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.261 [2024-07-15 16:10:18.892218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.261 qpair failed and we were unable to recover it. 00:28:50.261 [2024-07-15 16:10:18.892510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.261 [2024-07-15 16:10:18.892549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.261 qpair failed and we were unable to recover it. 
00:28:50.261 [2024-07-15 16:10:18.892794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.261 [2024-07-15 16:10:18.892828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.261 qpair failed and we were unable to recover it. 00:28:50.261 [2024-07-15 16:10:18.892999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.261 [2024-07-15 16:10:18.893037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.261 qpair failed and we were unable to recover it. 00:28:50.261 [2024-07-15 16:10:18.893249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.261 [2024-07-15 16:10:18.893283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.261 qpair failed and we were unable to recover it. 00:28:50.261 [2024-07-15 16:10:18.893492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.261 [2024-07-15 16:10:18.893524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.261 qpair failed and we were unable to recover it. 00:28:50.261 [2024-07-15 16:10:18.893825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.261 [2024-07-15 16:10:18.893867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.261 qpair failed and we were unable to recover it. 00:28:50.261 [2024-07-15 16:10:18.894166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.261 [2024-07-15 16:10:18.894198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.261 qpair failed and we were unable to recover it. 00:28:50.261 [2024-07-15 16:10:18.894439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.261 [2024-07-15 16:10:18.894472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.261 qpair failed and we were unable to recover it. 00:28:50.261 [2024-07-15 16:10:18.894766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.261 [2024-07-15 16:10:18.894798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.261 qpair failed and we were unable to recover it. 00:28:50.261 [2024-07-15 16:10:18.895104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.261 [2024-07-15 16:10:18.895135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.261 qpair failed and we were unable to recover it. 00:28:50.261 [2024-07-15 16:10:18.895364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.261 [2024-07-15 16:10:18.895398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.261 qpair failed and we were unable to recover it. 
00:28:50.261 [2024-07-15 16:10:18.895623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.261 [2024-07-15 16:10:18.895655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.261 qpair failed and we were unable to recover it. 00:28:50.261 [2024-07-15 16:10:18.895880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.261 [2024-07-15 16:10:18.895912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.261 qpair failed and we were unable to recover it. 00:28:50.261 [2024-07-15 16:10:18.896069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.261 [2024-07-15 16:10:18.896102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.261 qpair failed and we were unable to recover it. 00:28:50.261 [2024-07-15 16:10:18.896321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.261 [2024-07-15 16:10:18.896354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.261 qpair failed and we were unable to recover it. 00:28:50.261 [2024-07-15 16:10:18.896653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.261 [2024-07-15 16:10:18.896669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.261 qpair failed and we were unable to recover it. 00:28:50.261 [2024-07-15 16:10:18.896848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.261 [2024-07-15 16:10:18.896865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.261 qpair failed and we were unable to recover it. 00:28:50.261 [2024-07-15 16:10:18.897050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.261 [2024-07-15 16:10:18.897082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.261 qpair failed and we were unable to recover it. 00:28:50.261 [2024-07-15 16:10:18.897391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.261 [2024-07-15 16:10:18.897426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.261 qpair failed and we were unable to recover it. 00:28:50.261 [2024-07-15 16:10:18.897734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.261 [2024-07-15 16:10:18.897767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.261 qpair failed and we were unable to recover it. 00:28:50.261 [2024-07-15 16:10:18.898082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.261 [2024-07-15 16:10:18.898115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.261 qpair failed and we were unable to recover it. 
00:28:50.261 [2024-07-15 16:10:18.898335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.261 [2024-07-15 16:10:18.898369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.261 qpair failed and we were unable to recover it. 00:28:50.261 [2024-07-15 16:10:18.898652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.261 [2024-07-15 16:10:18.898684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.261 qpair failed and we were unable to recover it. 00:28:50.261 [2024-07-15 16:10:18.898909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.261 [2024-07-15 16:10:18.898942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.261 qpair failed and we were unable to recover it. 00:28:50.261 [2024-07-15 16:10:18.899151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.261 [2024-07-15 16:10:18.899168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.261 qpair failed and we were unable to recover it. 00:28:50.261 [2024-07-15 16:10:18.899383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.261 [2024-07-15 16:10:18.899400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.261 qpair failed and we were unable to recover it. 00:28:50.261 [2024-07-15 16:10:18.899690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.261 [2024-07-15 16:10:18.899706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.261 qpair failed and we were unable to recover it. 00:28:50.261 [2024-07-15 16:10:18.899967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.261 [2024-07-15 16:10:18.899984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.261 qpair failed and we were unable to recover it. 00:28:50.261 [2024-07-15 16:10:18.900109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.261 [2024-07-15 16:10:18.900127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.261 qpair failed and we were unable to recover it. 00:28:50.261 [2024-07-15 16:10:18.900274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.261 [2024-07-15 16:10:18.900313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.261 qpair failed and we were unable to recover it. 00:28:50.261 [2024-07-15 16:10:18.900622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.261 [2024-07-15 16:10:18.900655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.261 qpair failed and we were unable to recover it. 
00:28:50.261 [2024-07-15 16:10:18.900881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.261 [2024-07-15 16:10:18.900912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.261 qpair failed and we were unable to recover it. 00:28:50.261 [2024-07-15 16:10:18.901145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.262 [2024-07-15 16:10:18.901177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.262 qpair failed and we were unable to recover it. 00:28:50.262 [2024-07-15 16:10:18.901419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.262 [2024-07-15 16:10:18.901452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.262 qpair failed and we were unable to recover it. 00:28:50.262 [2024-07-15 16:10:18.901672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.262 [2024-07-15 16:10:18.901689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.262 qpair failed and we were unable to recover it. 00:28:50.262 [2024-07-15 16:10:18.901944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.262 [2024-07-15 16:10:18.901961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.262 qpair failed and we were unable to recover it. 00:28:50.262 [2024-07-15 16:10:18.902213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.262 [2024-07-15 16:10:18.902236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.262 qpair failed and we were unable to recover it. 00:28:50.262 [2024-07-15 16:10:18.902455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.262 [2024-07-15 16:10:18.902471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.262 qpair failed and we were unable to recover it. 00:28:50.262 [2024-07-15 16:10:18.902762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.262 [2024-07-15 16:10:18.902794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.262 qpair failed and we were unable to recover it. 00:28:50.262 [2024-07-15 16:10:18.903023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.262 [2024-07-15 16:10:18.903054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.262 qpair failed and we were unable to recover it. 00:28:50.262 [2024-07-15 16:10:18.903290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.262 [2024-07-15 16:10:18.903324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.262 qpair failed and we were unable to recover it. 
00:28:50.262 [2024-07-15 16:10:18.903633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.262 [2024-07-15 16:10:18.903665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.262 qpair failed and we were unable to recover it. 00:28:50.262 [2024-07-15 16:10:18.903845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.262 [2024-07-15 16:10:18.903877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.262 qpair failed and we were unable to recover it. 00:28:50.262 [2024-07-15 16:10:18.904062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.262 [2024-07-15 16:10:18.904094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.262 qpair failed and we were unable to recover it. 00:28:50.262 [2024-07-15 16:10:18.904322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.262 [2024-07-15 16:10:18.904356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.262 qpair failed and we were unable to recover it. 00:28:50.262 [2024-07-15 16:10:18.904649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.262 [2024-07-15 16:10:18.904681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.262 qpair failed and we were unable to recover it. 00:28:50.262 [2024-07-15 16:10:18.904911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.262 [2024-07-15 16:10:18.904943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.262 qpair failed and we were unable to recover it. 00:28:50.262 [2024-07-15 16:10:18.905179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.262 [2024-07-15 16:10:18.905212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.262 qpair failed and we were unable to recover it. 00:28:50.262 [2024-07-15 16:10:18.905522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.262 [2024-07-15 16:10:18.905555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.262 qpair failed and we were unable to recover it. 00:28:50.262 [2024-07-15 16:10:18.905838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.262 [2024-07-15 16:10:18.905874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.262 qpair failed and we were unable to recover it. 00:28:50.262 [2024-07-15 16:10:18.906201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.262 [2024-07-15 16:10:18.906243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.262 qpair failed and we were unable to recover it. 
00:28:50.262 [2024-07-15 16:10:18.906402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.262 [2024-07-15 16:10:18.906435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.262 qpair failed and we were unable to recover it. 00:28:50.262 [2024-07-15 16:10:18.906650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.262 [2024-07-15 16:10:18.906666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.262 qpair failed and we were unable to recover it. 00:28:50.262 [2024-07-15 16:10:18.906924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.262 [2024-07-15 16:10:18.906941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.262 qpair failed and we were unable to recover it. 00:28:50.262 [2024-07-15 16:10:18.907203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.262 [2024-07-15 16:10:18.907246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.262 qpair failed and we were unable to recover it. 00:28:50.262 [2024-07-15 16:10:18.907477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.262 [2024-07-15 16:10:18.907509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.262 qpair failed and we were unable to recover it. 00:28:50.262 [2024-07-15 16:10:18.907843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.262 [2024-07-15 16:10:18.907885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.262 qpair failed and we were unable to recover it. 00:28:50.262 [2024-07-15 16:10:18.908150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.262 [2024-07-15 16:10:18.908167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.262 qpair failed and we were unable to recover it. 00:28:50.262 [2024-07-15 16:10:18.908377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.262 [2024-07-15 16:10:18.908393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.262 qpair failed and we were unable to recover it. 00:28:50.262 [2024-07-15 16:10:18.908548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.262 [2024-07-15 16:10:18.908565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.262 qpair failed and we were unable to recover it. 00:28:50.262 [2024-07-15 16:10:18.908763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.262 [2024-07-15 16:10:18.908780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.262 qpair failed and we were unable to recover it. 
00:28:50.262 [2024-07-15 16:10:18.908921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.262 [2024-07-15 16:10:18.908938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.262 qpair failed and we were unable to recover it. 00:28:50.262 [2024-07-15 16:10:18.909212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.262 [2024-07-15 16:10:18.909234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.262 qpair failed and we were unable to recover it. 00:28:50.262 [2024-07-15 16:10:18.909425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.262 [2024-07-15 16:10:18.909441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.262 qpair failed and we were unable to recover it. 00:28:50.262 [2024-07-15 16:10:18.909665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.262 [2024-07-15 16:10:18.909682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.263 qpair failed and we were unable to recover it. 00:28:50.263 [2024-07-15 16:10:18.909874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.263 [2024-07-15 16:10:18.909891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.263 qpair failed and we were unable to recover it. 00:28:50.263 [2024-07-15 16:10:18.910176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.263 [2024-07-15 16:10:18.910208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.263 qpair failed and we were unable to recover it. 00:28:50.263 [2024-07-15 16:10:18.910471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.263 [2024-07-15 16:10:18.910504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.263 qpair failed and we were unable to recover it. 00:28:50.263 [2024-07-15 16:10:18.910785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.263 [2024-07-15 16:10:18.910801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.263 qpair failed and we were unable to recover it. 00:28:50.263 [2024-07-15 16:10:18.911001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.263 [2024-07-15 16:10:18.911018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.263 qpair failed and we were unable to recover it. 00:28:50.263 [2024-07-15 16:10:18.911214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.263 [2024-07-15 16:10:18.911236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.263 qpair failed and we were unable to recover it. 
00:28:50.263 [2024-07-15 16:10:18.911451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.263 [2024-07-15 16:10:18.911468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.263 qpair failed and we were unable to recover it. 00:28:50.263 [2024-07-15 16:10:18.911667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.263 [2024-07-15 16:10:18.911699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.263 qpair failed and we were unable to recover it. 00:28:50.263 [2024-07-15 16:10:18.911918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.263 [2024-07-15 16:10:18.911951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.263 qpair failed and we were unable to recover it. 00:28:50.263 [2024-07-15 16:10:18.912246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.263 [2024-07-15 16:10:18.912279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.263 qpair failed and we were unable to recover it. 00:28:50.263 [2024-07-15 16:10:18.912494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.263 [2024-07-15 16:10:18.912526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.263 qpair failed and we were unable to recover it. 00:28:50.263 [2024-07-15 16:10:18.912822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.263 [2024-07-15 16:10:18.912838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.263 qpair failed and we were unable to recover it. 00:28:50.263 [2024-07-15 16:10:18.913107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.263 [2024-07-15 16:10:18.913139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.263 qpair failed and we were unable to recover it. 00:28:50.263 [2024-07-15 16:10:18.913401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.263 [2024-07-15 16:10:18.913434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.263 qpair failed and we were unable to recover it. 00:28:50.263 [2024-07-15 16:10:18.913767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.263 [2024-07-15 16:10:18.913799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.263 qpair failed and we were unable to recover it. 00:28:50.263 [2024-07-15 16:10:18.914075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.263 [2024-07-15 16:10:18.914091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.263 qpair failed and we were unable to recover it. 
00:28:50.263 [2024-07-15 16:10:18.914293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.263 [2024-07-15 16:10:18.914310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.263 qpair failed and we were unable to recover it. 00:28:50.263 [2024-07-15 16:10:18.914523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.263 [2024-07-15 16:10:18.914540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.263 qpair failed and we were unable to recover it. 00:28:50.263 [2024-07-15 16:10:18.914815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.263 [2024-07-15 16:10:18.914832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.263 qpair failed and we were unable to recover it. 00:28:50.263 [2024-07-15 16:10:18.915122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.263 [2024-07-15 16:10:18.915155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.263 qpair failed and we were unable to recover it. 00:28:50.263 [2024-07-15 16:10:18.915366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.263 [2024-07-15 16:10:18.915399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.263 qpair failed and we were unable to recover it. 00:28:50.263 [2024-07-15 16:10:18.915646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.263 [2024-07-15 16:10:18.915678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.263 qpair failed and we were unable to recover it. 00:28:50.263 [2024-07-15 16:10:18.915910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.263 [2024-07-15 16:10:18.915927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.263 qpair failed and we were unable to recover it. 00:28:50.263 [2024-07-15 16:10:18.916176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.263 [2024-07-15 16:10:18.916211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.263 qpair failed and we were unable to recover it. 00:28:50.263 [2024-07-15 16:10:18.916524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.263 [2024-07-15 16:10:18.916557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.263 qpair failed and we were unable to recover it. 00:28:50.263 [2024-07-15 16:10:18.916724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.263 [2024-07-15 16:10:18.916757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.263 qpair failed and we were unable to recover it. 
00:28:50.263 [2024-07-15 16:10:18.916963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.263 [2024-07-15 16:10:18.916980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.263 qpair failed and we were unable to recover it. 00:28:50.263 [2024-07-15 16:10:18.917187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.263 [2024-07-15 16:10:18.917203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.263 qpair failed and we were unable to recover it. 00:28:50.263 [2024-07-15 16:10:18.917393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.263 [2024-07-15 16:10:18.917410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.263 qpair failed and we were unable to recover it. 00:28:50.263 [2024-07-15 16:10:18.917537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.263 [2024-07-15 16:10:18.917569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.263 qpair failed and we were unable to recover it. 00:28:50.263 [2024-07-15 16:10:18.917777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.263 [2024-07-15 16:10:18.917810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.263 qpair failed and we were unable to recover it. 00:28:50.263 [2024-07-15 16:10:18.918118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.263 [2024-07-15 16:10:18.918150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.263 qpair failed and we were unable to recover it. 00:28:50.263 [2024-07-15 16:10:18.918395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.263 [2024-07-15 16:10:18.918429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.263 qpair failed and we were unable to recover it. 00:28:50.263 [2024-07-15 16:10:18.918663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.264 [2024-07-15 16:10:18.918696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.264 qpair failed and we were unable to recover it. 00:28:50.264 [2024-07-15 16:10:18.918949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.264 [2024-07-15 16:10:18.918982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.264 qpair failed and we were unable to recover it. 00:28:50.264 [2024-07-15 16:10:18.919245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.264 [2024-07-15 16:10:18.919278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.264 qpair failed and we were unable to recover it. 
00:28:50.264 [2024-07-15 16:10:18.919443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.264 [2024-07-15 16:10:18.919476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.264 qpair failed and we were unable to recover it. 00:28:50.264 [2024-07-15 16:10:18.919803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.264 [2024-07-15 16:10:18.919819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.264 qpair failed and we were unable to recover it. 00:28:50.264 [2024-07-15 16:10:18.920100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.264 [2024-07-15 16:10:18.920116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.264 qpair failed and we were unable to recover it. 00:28:50.264 [2024-07-15 16:10:18.920335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.264 [2024-07-15 16:10:18.920360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.264 qpair failed and we were unable to recover it. 00:28:50.264 [2024-07-15 16:10:18.920639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.264 [2024-07-15 16:10:18.920655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.264 qpair failed and we were unable to recover it. 00:28:50.264 [2024-07-15 16:10:18.920797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.264 [2024-07-15 16:10:18.920813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.264 qpair failed and we were unable to recover it. 00:28:50.264 [2024-07-15 16:10:18.921021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.264 [2024-07-15 16:10:18.921054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.264 qpair failed and we were unable to recover it. 00:28:50.264 [2024-07-15 16:10:18.921334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.264 [2024-07-15 16:10:18.921367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.264 qpair failed and we were unable to recover it. 00:28:50.264 [2024-07-15 16:10:18.921513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.264 [2024-07-15 16:10:18.921546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.264 qpair failed and we were unable to recover it. 00:28:50.264 [2024-07-15 16:10:18.921780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.264 [2024-07-15 16:10:18.921812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.264 qpair failed and we were unable to recover it. 
00:28:50.264 [2024-07-15 16:10:18.922031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.264 [2024-07-15 16:10:18.922064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.264 qpair failed and we were unable to recover it. 00:28:50.264 [2024-07-15 16:10:18.922351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.264 [2024-07-15 16:10:18.922384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.264 qpair failed and we were unable to recover it. 00:28:50.264 [2024-07-15 16:10:18.922656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.264 [2024-07-15 16:10:18.922688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.264 qpair failed and we were unable to recover it. 00:28:50.264 [2024-07-15 16:10:18.923004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.264 [2024-07-15 16:10:18.923036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.264 qpair failed and we were unable to recover it. 00:28:50.264 [2024-07-15 16:10:18.923261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.264 [2024-07-15 16:10:18.923295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.264 qpair failed and we were unable to recover it. 00:28:50.264 [2024-07-15 16:10:18.923552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.264 [2024-07-15 16:10:18.923584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.264 qpair failed and we were unable to recover it. 00:28:50.264 [2024-07-15 16:10:18.923754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.264 [2024-07-15 16:10:18.923786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.264 qpair failed and we were unable to recover it. 00:28:50.264 [2024-07-15 16:10:18.924006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.264 [2024-07-15 16:10:18.924039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.264 qpair failed and we were unable to recover it. 00:28:50.264 [2024-07-15 16:10:18.924251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.264 [2024-07-15 16:10:18.924268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.264 qpair failed and we were unable to recover it. 00:28:50.264 [2024-07-15 16:10:18.924453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.264 [2024-07-15 16:10:18.924471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.264 qpair failed and we were unable to recover it. 
00:28:50.264 [2024-07-15 16:10:18.924659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.264 [2024-07-15 16:10:18.924675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.264 qpair failed and we were unable to recover it. 00:28:50.264 [2024-07-15 16:10:18.924853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.264 [2024-07-15 16:10:18.924869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.264 qpair failed and we were unable to recover it. 00:28:50.264 [2024-07-15 16:10:18.925057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.264 [2024-07-15 16:10:18.925089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.264 qpair failed and we were unable to recover it. 00:28:50.264 [2024-07-15 16:10:18.925307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.264 [2024-07-15 16:10:18.925345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.264 qpair failed and we were unable to recover it. 00:28:50.264 [2024-07-15 16:10:18.925507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.264 [2024-07-15 16:10:18.925539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.264 qpair failed and we were unable to recover it. 00:28:50.264 [2024-07-15 16:10:18.925817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.264 [2024-07-15 16:10:18.925849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.264 qpair failed and we were unable to recover it. 00:28:50.264 [2024-07-15 16:10:18.926173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.264 [2024-07-15 16:10:18.926189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.264 qpair failed and we were unable to recover it. 00:28:50.264 [2024-07-15 16:10:18.926412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.264 [2024-07-15 16:10:18.926430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.264 qpair failed and we were unable to recover it. 00:28:50.264 [2024-07-15 16:10:18.926681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.265 [2024-07-15 16:10:18.926697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.265 qpair failed and we were unable to recover it. 00:28:50.265 [2024-07-15 16:10:18.926885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.265 [2024-07-15 16:10:18.926918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.265 qpair failed and we were unable to recover it. 
00:28:50.265 [2024-07-15 16:10:18.927221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.265 [2024-07-15 16:10:18.927264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.265 qpair failed and we were unable to recover it. 00:28:50.265 [2024-07-15 16:10:18.927463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.265 [2024-07-15 16:10:18.927495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.265 qpair failed and we were unable to recover it. 00:28:50.265 [2024-07-15 16:10:18.927793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.265 [2024-07-15 16:10:18.927826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.265 qpair failed and we were unable to recover it. 00:28:50.265 [2024-07-15 16:10:18.928038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.265 [2024-07-15 16:10:18.928070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.265 qpair failed and we were unable to recover it. 00:28:50.265 [2024-07-15 16:10:18.928314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.265 [2024-07-15 16:10:18.928348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.265 qpair failed and we were unable to recover it. 00:28:50.265 [2024-07-15 16:10:18.928506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.265 [2024-07-15 16:10:18.928538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.265 qpair failed and we were unable to recover it. 00:28:50.265 [2024-07-15 16:10:18.928775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.265 [2024-07-15 16:10:18.928807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.265 qpair failed and we were unable to recover it. 00:28:50.265 [2024-07-15 16:10:18.929030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.265 [2024-07-15 16:10:18.929048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.265 qpair failed and we were unable to recover it. 00:28:50.265 [2024-07-15 16:10:18.929261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.265 [2024-07-15 16:10:18.929295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.265 qpair failed and we were unable to recover it. 00:28:50.265 [2024-07-15 16:10:18.929518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.265 [2024-07-15 16:10:18.929550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.265 qpair failed and we were unable to recover it. 
00:28:50.265 [2024-07-15 16:10:18.929691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.265 [2024-07-15 16:10:18.929723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.265 qpair failed and we were unable to recover it. 00:28:50.265 [2024-07-15 16:10:18.929959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.265 [2024-07-15 16:10:18.929975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.265 qpair failed and we were unable to recover it. 00:28:50.265 [2024-07-15 16:10:18.930234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.265 [2024-07-15 16:10:18.930267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.265 qpair failed and we were unable to recover it. 00:28:50.265 [2024-07-15 16:10:18.930591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.265 [2024-07-15 16:10:18.930623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.265 qpair failed and we were unable to recover it. 00:28:50.265 [2024-07-15 16:10:18.930925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.265 [2024-07-15 16:10:18.930957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.265 qpair failed and we were unable to recover it. 00:28:50.265 [2024-07-15 16:10:18.931255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.265 [2024-07-15 16:10:18.931287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.265 qpair failed and we were unable to recover it. 00:28:50.265 [2024-07-15 16:10:18.931436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.265 [2024-07-15 16:10:18.931468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.265 qpair failed and we were unable to recover it. 00:28:50.265 [2024-07-15 16:10:18.931675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.265 [2024-07-15 16:10:18.931707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.265 qpair failed and we were unable to recover it. 00:28:50.265 [2024-07-15 16:10:18.932054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.265 [2024-07-15 16:10:18.932085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.265 qpair failed and we were unable to recover it. 00:28:50.265 [2024-07-15 16:10:18.932302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.265 [2024-07-15 16:10:18.932337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.265 qpair failed and we were unable to recover it. 
00:28:50.265 [2024-07-15 16:10:18.932550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.265 [2024-07-15 16:10:18.932587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.265 qpair failed and we were unable to recover it. 00:28:50.265 [2024-07-15 16:10:18.932868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.265 [2024-07-15 16:10:18.932900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.265 qpair failed and we were unable to recover it. 00:28:50.265 [2024-07-15 16:10:18.933175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.265 [2024-07-15 16:10:18.933191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.265 qpair failed and we were unable to recover it. 00:28:50.265 [2024-07-15 16:10:18.933440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.265 [2024-07-15 16:10:18.933458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.265 qpair failed and we were unable to recover it. 00:28:50.265 [2024-07-15 16:10:18.933736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.265 [2024-07-15 16:10:18.933753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.265 qpair failed and we were unable to recover it. 00:28:50.265 [2024-07-15 16:10:18.934007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.265 [2024-07-15 16:10:18.934024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.265 qpair failed and we were unable to recover it. 00:28:50.265 [2024-07-15 16:10:18.934296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.265 [2024-07-15 16:10:18.934313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.265 qpair failed and we were unable to recover it. 00:28:50.265 [2024-07-15 16:10:18.934514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.265 [2024-07-15 16:10:18.934531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.265 qpair failed and we were unable to recover it. 00:28:50.265 [2024-07-15 16:10:18.934726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.265 [2024-07-15 16:10:18.934743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.265 qpair failed and we were unable to recover it. 00:28:50.265 [2024-07-15 16:10:18.934937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.265 [2024-07-15 16:10:18.934953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.265 qpair failed and we were unable to recover it. 
00:28:50.265 [2024-07-15 16:10:18.935204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.265 [2024-07-15 16:10:18.935244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.265 qpair failed and we were unable to recover it. 00:28:50.265 [2024-07-15 16:10:18.935546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.265 [2024-07-15 16:10:18.935579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.265 qpair failed and we were unable to recover it. 00:28:50.265 [2024-07-15 16:10:18.935808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.265 [2024-07-15 16:10:18.935840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.265 qpair failed and we were unable to recover it. 00:28:50.265 [2024-07-15 16:10:18.936013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.265 [2024-07-15 16:10:18.936045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.265 qpair failed and we were unable to recover it. 00:28:50.265 [2024-07-15 16:10:18.936370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.265 [2024-07-15 16:10:18.936405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.265 qpair failed and we were unable to recover it. 00:28:50.265 [2024-07-15 16:10:18.936713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.266 [2024-07-15 16:10:18.936756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.266 qpair failed and we were unable to recover it. 00:28:50.266 [2024-07-15 16:10:18.936966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.266 [2024-07-15 16:10:18.936984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.266 qpair failed and we were unable to recover it. 00:28:50.266 [2024-07-15 16:10:18.937164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.266 [2024-07-15 16:10:18.937181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.266 qpair failed and we were unable to recover it. 00:28:50.266 [2024-07-15 16:10:18.937428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.266 [2024-07-15 16:10:18.937445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.266 qpair failed and we were unable to recover it. 00:28:50.266 [2024-07-15 16:10:18.937703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.266 [2024-07-15 16:10:18.937720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.266 qpair failed and we were unable to recover it. 
00:28:50.266 [2024-07-15 16:10:18.937896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.266 [2024-07-15 16:10:18.937929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.266 qpair failed and we were unable to recover it. 00:28:50.266 [2024-07-15 16:10:18.938207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.266 [2024-07-15 16:10:18.938250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.266 qpair failed and we were unable to recover it. 00:28:50.266 [2024-07-15 16:10:18.938484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.266 [2024-07-15 16:10:18.938516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.266 qpair failed and we were unable to recover it. 00:28:50.266 [2024-07-15 16:10:18.938821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.266 [2024-07-15 16:10:18.938853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.266 qpair failed and we were unable to recover it. 00:28:50.266 [2024-07-15 16:10:18.939148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.266 [2024-07-15 16:10:18.939181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.266 qpair failed and we were unable to recover it. 00:28:50.266 [2024-07-15 16:10:18.939507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.266 [2024-07-15 16:10:18.939540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.266 qpair failed and we were unable to recover it. 00:28:50.266 [2024-07-15 16:10:18.939701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.266 [2024-07-15 16:10:18.939733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.266 qpair failed and we were unable to recover it. 00:28:50.266 [2024-07-15 16:10:18.940023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.266 [2024-07-15 16:10:18.940058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.266 qpair failed and we were unable to recover it. 00:28:50.266 [2024-07-15 16:10:18.940271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.266 [2024-07-15 16:10:18.940288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.266 qpair failed and we were unable to recover it. 00:28:50.266 [2024-07-15 16:10:18.940475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.266 [2024-07-15 16:10:18.940492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.266 qpair failed and we were unable to recover it. 
00:28:50.266 [2024-07-15 16:10:18.940763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.266 [2024-07-15 16:10:18.940795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.266 qpair failed and we were unable to recover it. 00:28:50.266 [2024-07-15 16:10:18.941074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.266 [2024-07-15 16:10:18.941106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.266 qpair failed and we were unable to recover it. 00:28:50.266 [2024-07-15 16:10:18.941342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.266 [2024-07-15 16:10:18.941359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.266 qpair failed and we were unable to recover it. 00:28:50.266 [2024-07-15 16:10:18.941612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.266 [2024-07-15 16:10:18.941650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.266 qpair failed and we were unable to recover it. 00:28:50.266 [2024-07-15 16:10:18.941916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.266 [2024-07-15 16:10:18.941948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.266 qpair failed and we were unable to recover it. 00:28:50.266 [2024-07-15 16:10:18.942126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.266 [2024-07-15 16:10:18.942171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.266 qpair failed and we were unable to recover it. 00:28:50.266 [2024-07-15 16:10:18.942355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.266 [2024-07-15 16:10:18.942372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.266 qpair failed and we were unable to recover it. 00:28:50.266 [2024-07-15 16:10:18.942639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.266 [2024-07-15 16:10:18.942671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.266 qpair failed and we were unable to recover it. 00:28:50.266 [2024-07-15 16:10:18.942905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.266 [2024-07-15 16:10:18.942937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.266 qpair failed and we were unable to recover it. 00:28:50.266 [2024-07-15 16:10:18.943172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.266 [2024-07-15 16:10:18.943188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.266 qpair failed and we were unable to recover it. 
00:28:50.266 [2024-07-15 16:10:18.943319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.266 [2024-07-15 16:10:18.943336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.266 qpair failed and we were unable to recover it. 00:28:50.266 [2024-07-15 16:10:18.943517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.266 [2024-07-15 16:10:18.943534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.266 qpair failed and we were unable to recover it. 00:28:50.266 [2024-07-15 16:10:18.943728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.266 [2024-07-15 16:10:18.943743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.266 qpair failed and we were unable to recover it. 00:28:50.266 [2024-07-15 16:10:18.944036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.266 [2024-07-15 16:10:18.944053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.266 qpair failed and we were unable to recover it. 00:28:50.266 [2024-07-15 16:10:18.944329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.266 [2024-07-15 16:10:18.944345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.266 qpair failed and we were unable to recover it. 00:28:50.266 [2024-07-15 16:10:18.944576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.266 [2024-07-15 16:10:18.944608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.266 qpair failed and we were unable to recover it. 00:28:50.266 [2024-07-15 16:10:18.944844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.266 [2024-07-15 16:10:18.944860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.266 qpair failed and we were unable to recover it. 00:28:50.266 [2024-07-15 16:10:18.945109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.266 [2024-07-15 16:10:18.945143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.266 qpair failed and we were unable to recover it. 00:28:50.266 [2024-07-15 16:10:18.945321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.266 [2024-07-15 16:10:18.945355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.266 qpair failed and we were unable to recover it. 00:28:50.266 [2024-07-15 16:10:18.945523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.266 [2024-07-15 16:10:18.945555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.266 qpair failed and we were unable to recover it. 
00:28:50.266 [2024-07-15 16:10:18.945857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.267 [2024-07-15 16:10:18.945889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.267 qpair failed and we were unable to recover it. 00:28:50.267 [2024-07-15 16:10:18.946246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.267 [2024-07-15 16:10:18.946278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.267 qpair failed and we were unable to recover it. 00:28:50.267 [2024-07-15 16:10:18.946490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.267 [2024-07-15 16:10:18.946522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.267 qpair failed and we were unable to recover it. 00:28:50.267 [2024-07-15 16:10:18.946671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.267 [2024-07-15 16:10:18.946703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.267 qpair failed and we were unable to recover it. 00:28:50.267 [2024-07-15 16:10:18.946916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.267 [2024-07-15 16:10:18.946949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.267 qpair failed and we were unable to recover it. 00:28:50.267 [2024-07-15 16:10:18.947202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.267 [2024-07-15 16:10:18.947242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.267 qpair failed and we were unable to recover it. 00:28:50.267 [2024-07-15 16:10:18.947384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.267 [2024-07-15 16:10:18.947416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.267 qpair failed and we were unable to recover it. 00:28:50.267 [2024-07-15 16:10:18.947570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.267 [2024-07-15 16:10:18.947603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.267 qpair failed and we were unable to recover it. 00:28:50.267 [2024-07-15 16:10:18.947885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.267 [2024-07-15 16:10:18.947915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.267 qpair failed and we were unable to recover it. 00:28:50.267 [2024-07-15 16:10:18.948089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.267 [2024-07-15 16:10:18.948105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.267 qpair failed and we were unable to recover it. 
00:28:50.267 [2024-07-15 16:10:18.948353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.267 [2024-07-15 16:10:18.948388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.267 qpair failed and we were unable to recover it. 00:28:50.267 [2024-07-15 16:10:18.948673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.267 [2024-07-15 16:10:18.948705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.267 qpair failed and we were unable to recover it. 00:28:50.267 [2024-07-15 16:10:18.948952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.267 [2024-07-15 16:10:18.948999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.267 qpair failed and we were unable to recover it. 00:28:50.267 [2024-07-15 16:10:18.949189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.267 [2024-07-15 16:10:18.949206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.267 qpair failed and we were unable to recover it. 00:28:50.267 [2024-07-15 16:10:18.949396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.267 [2024-07-15 16:10:18.949430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.267 qpair failed and we were unable to recover it. 00:28:50.267 [2024-07-15 16:10:18.949635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.267 [2024-07-15 16:10:18.949667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.267 qpair failed and we were unable to recover it. 00:28:50.267 [2024-07-15 16:10:18.949946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.267 [2024-07-15 16:10:18.949977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.267 qpair failed and we were unable to recover it. 00:28:50.267 [2024-07-15 16:10:18.950309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.267 [2024-07-15 16:10:18.950344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.267 qpair failed and we were unable to recover it. 00:28:50.267 [2024-07-15 16:10:18.950574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.267 [2024-07-15 16:10:18.950612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.267 qpair failed and we were unable to recover it. 00:28:50.267 [2024-07-15 16:10:18.950907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.267 [2024-07-15 16:10:18.950939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.267 qpair failed and we were unable to recover it. 
00:28:50.267 [2024-07-15 16:10:18.951241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.267 [2024-07-15 16:10:18.951275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.267 qpair failed and we were unable to recover it. 00:28:50.267 [2024-07-15 16:10:18.951563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.267 [2024-07-15 16:10:18.951595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.267 qpair failed and we were unable to recover it. 00:28:50.267 [2024-07-15 16:10:18.951880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.267 [2024-07-15 16:10:18.951897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.267 qpair failed and we were unable to recover it. 00:28:50.267 [2024-07-15 16:10:18.952182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.267 [2024-07-15 16:10:18.952214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.267 qpair failed and we were unable to recover it. 00:28:50.267 [2024-07-15 16:10:18.952551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.267 [2024-07-15 16:10:18.952583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.267 qpair failed and we were unable to recover it. 00:28:50.267 [2024-07-15 16:10:18.952746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.267 [2024-07-15 16:10:18.952779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.267 qpair failed and we were unable to recover it. 00:28:50.267 [2024-07-15 16:10:18.953039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.267 [2024-07-15 16:10:18.953071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.267 qpair failed and we were unable to recover it. 00:28:50.267 [2024-07-15 16:10:18.953295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.267 [2024-07-15 16:10:18.953329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.267 qpair failed and we were unable to recover it. 00:28:50.267 [2024-07-15 16:10:18.953563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.267 [2024-07-15 16:10:18.953595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.267 qpair failed and we were unable to recover it. 00:28:50.267 [2024-07-15 16:10:18.953889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.267 [2024-07-15 16:10:18.953921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.267 qpair failed and we were unable to recover it. 
00:28:50.267 [2024-07-15 16:10:18.954143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.267 [2024-07-15 16:10:18.954159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.267 qpair failed and we were unable to recover it. 00:28:50.267 [2024-07-15 16:10:18.954364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.267 [2024-07-15 16:10:18.954399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.267 qpair failed and we were unable to recover it. 00:28:50.267 [2024-07-15 16:10:18.954729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.267 [2024-07-15 16:10:18.954762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.267 qpair failed and we were unable to recover it. 00:28:50.267 [2024-07-15 16:10:18.955023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.267 [2024-07-15 16:10:18.955040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.267 qpair failed and we were unable to recover it. 00:28:50.267 [2024-07-15 16:10:18.955243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.267 [2024-07-15 16:10:18.955276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.267 qpair failed and we were unable to recover it. 00:28:50.267 [2024-07-15 16:10:18.955492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.267 [2024-07-15 16:10:18.955524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.267 qpair failed and we were unable to recover it. 00:28:50.267 [2024-07-15 16:10:18.955826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.267 [2024-07-15 16:10:18.955858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.267 qpair failed and we were unable to recover it. 00:28:50.267 [2024-07-15 16:10:18.956171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.267 [2024-07-15 16:10:18.956214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.267 qpair failed and we were unable to recover it. 00:28:50.267 [2024-07-15 16:10:18.956452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.267 [2024-07-15 16:10:18.956484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.267 qpair failed and we were unable to recover it. 00:28:50.268 [2024-07-15 16:10:18.956707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.268 [2024-07-15 16:10:18.956740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.268 qpair failed and we were unable to recover it. 
00:28:50.268 [2024-07-15 16:10:18.957033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.268 [2024-07-15 16:10:18.957071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.268 qpair failed and we were unable to recover it. 00:28:50.268 [2024-07-15 16:10:18.957370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.268 [2024-07-15 16:10:18.957403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.268 qpair failed and we were unable to recover it. 00:28:50.268 [2024-07-15 16:10:18.957640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.268 [2024-07-15 16:10:18.957673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.268 qpair failed and we were unable to recover it. 00:28:50.268 [2024-07-15 16:10:18.957816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.268 [2024-07-15 16:10:18.957849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.268 qpair failed and we were unable to recover it. 00:28:50.268 [2024-07-15 16:10:18.958078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.268 [2024-07-15 16:10:18.958110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.268 qpair failed and we were unable to recover it. 00:28:50.268 [2024-07-15 16:10:18.958414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.268 [2024-07-15 16:10:18.958454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.268 qpair failed and we were unable to recover it. 00:28:50.268 [2024-07-15 16:10:18.958708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.268 [2024-07-15 16:10:18.958740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.268 qpair failed and we were unable to recover it. 00:28:50.268 [2024-07-15 16:10:18.959088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.268 [2024-07-15 16:10:18.959120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.268 qpair failed and we were unable to recover it. 00:28:50.268 [2024-07-15 16:10:18.959351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.268 [2024-07-15 16:10:18.959385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.268 qpair failed and we were unable to recover it. 00:28:50.268 [2024-07-15 16:10:18.959690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.268 [2024-07-15 16:10:18.959722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.268 qpair failed and we were unable to recover it. 
00:28:50.268 [2024-07-15 16:10:18.960014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.268 [2024-07-15 16:10:18.960046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.268 qpair failed and we were unable to recover it. 00:28:50.268 [2024-07-15 16:10:18.960218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.268 [2024-07-15 16:10:18.960273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.268 qpair failed and we were unable to recover it. 00:28:50.268 [2024-07-15 16:10:18.960436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.268 [2024-07-15 16:10:18.960468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.268 qpair failed and we were unable to recover it. 00:28:50.268 [2024-07-15 16:10:18.960769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.268 [2024-07-15 16:10:18.960802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.268 qpair failed and we were unable to recover it. 00:28:50.268 [2024-07-15 16:10:18.961045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.268 [2024-07-15 16:10:18.961078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.268 qpair failed and we were unable to recover it. 00:28:50.268 [2024-07-15 16:10:18.961262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.268 [2024-07-15 16:10:18.961297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.268 qpair failed and we were unable to recover it. 00:28:50.268 [2024-07-15 16:10:18.961594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.268 [2024-07-15 16:10:18.961627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.268 qpair failed and we were unable to recover it. 00:28:50.268 [2024-07-15 16:10:18.961798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.268 [2024-07-15 16:10:18.961831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.268 qpair failed and we were unable to recover it. 00:28:50.268 [2024-07-15 16:10:18.962041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.268 [2024-07-15 16:10:18.962073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.268 qpair failed and we were unable to recover it. 00:28:50.268 [2024-07-15 16:10:18.962294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.268 [2024-07-15 16:10:18.962312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.268 qpair failed and we were unable to recover it. 
00:28:50.268 [2024-07-15 16:10:18.962508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.268 [2024-07-15 16:10:18.962540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.268 qpair failed and we were unable to recover it. 00:28:50.268 [2024-07-15 16:10:18.962845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.268 [2024-07-15 16:10:18.962877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.268 qpair failed and we were unable to recover it. 00:28:50.268 [2024-07-15 16:10:18.963192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.268 [2024-07-15 16:10:18.963235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.268 qpair failed and we were unable to recover it. 00:28:50.268 [2024-07-15 16:10:18.963392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.268 [2024-07-15 16:10:18.963424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.268 qpair failed and we were unable to recover it. 00:28:50.268 [2024-07-15 16:10:18.963705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.268 [2024-07-15 16:10:18.963738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.268 qpair failed and we were unable to recover it. 00:28:50.268 [2024-07-15 16:10:18.964040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.268 [2024-07-15 16:10:18.964072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.268 qpair failed and we were unable to recover it. 00:28:50.268 [2024-07-15 16:10:18.964368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.268 [2024-07-15 16:10:18.964401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.268 qpair failed and we were unable to recover it. 00:28:50.268 [2024-07-15 16:10:18.964639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.268 [2024-07-15 16:10:18.964671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.268 qpair failed and we were unable to recover it. 00:28:50.268 [2024-07-15 16:10:18.964960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.269 [2024-07-15 16:10:18.965000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.269 qpair failed and we were unable to recover it. 00:28:50.269 [2024-07-15 16:10:18.965204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.269 [2024-07-15 16:10:18.965220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.269 qpair failed and we were unable to recover it. 
00:28:50.269 [2024-07-15 16:10:18.965489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.269 [2024-07-15 16:10:18.965506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.269 qpair failed and we were unable to recover it. 00:28:50.269 [2024-07-15 16:10:18.965751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.269 [2024-07-15 16:10:18.965768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.269 qpair failed and we were unable to recover it. 00:28:50.269 [2024-07-15 16:10:18.965960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.269 [2024-07-15 16:10:18.965979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.269 qpair failed and we were unable to recover it. 00:28:50.269 [2024-07-15 16:10:18.966101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.269 [2024-07-15 16:10:18.966116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.269 qpair failed and we were unable to recover it. 00:28:50.269 [2024-07-15 16:10:18.966328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.269 [2024-07-15 16:10:18.966362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.269 qpair failed and we were unable to recover it. 00:28:50.269 [2024-07-15 16:10:18.966652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.269 [2024-07-15 16:10:18.966685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.269 qpair failed and we were unable to recover it. 00:28:50.269 [2024-07-15 16:10:18.966988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.269 [2024-07-15 16:10:18.967021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.269 qpair failed and we were unable to recover it. 00:28:50.269 [2024-07-15 16:10:18.967299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.269 [2024-07-15 16:10:18.967332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.269 qpair failed and we were unable to recover it. 00:28:50.269 [2024-07-15 16:10:18.967598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.269 [2024-07-15 16:10:18.967630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.269 qpair failed and we were unable to recover it. 00:28:50.269 [2024-07-15 16:10:18.967947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.269 [2024-07-15 16:10:18.967978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.269 qpair failed and we were unable to recover it. 
00:28:50.269 [2024-07-15 16:10:18.968267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.269 [2024-07-15 16:10:18.968284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.269 qpair failed and we were unable to recover it. 00:28:50.269 [2024-07-15 16:10:18.968552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.269 [2024-07-15 16:10:18.968584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.269 qpair failed and we were unable to recover it. 00:28:50.269 [2024-07-15 16:10:18.968826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.269 [2024-07-15 16:10:18.968859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.269 qpair failed and we were unable to recover it. 00:28:50.269 [2024-07-15 16:10:18.969123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.269 [2024-07-15 16:10:18.969155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.269 qpair failed and we were unable to recover it. 00:28:50.269 [2024-07-15 16:10:18.969377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.269 [2024-07-15 16:10:18.969412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.269 qpair failed and we were unable to recover it. 00:28:50.269 [2024-07-15 16:10:18.969621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.269 [2024-07-15 16:10:18.969654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.269 qpair failed and we were unable to recover it. 00:28:50.269 [2024-07-15 16:10:18.970042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.269 [2024-07-15 16:10:18.970118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.269 qpair failed and we were unable to recover it. 00:28:50.269 [2024-07-15 16:10:18.970438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.269 [2024-07-15 16:10:18.970515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:50.269 qpair failed and we were unable to recover it. 00:28:50.269 [2024-07-15 16:10:18.970770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.269 [2024-07-15 16:10:18.970807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:50.269 qpair failed and we were unable to recover it. 00:28:50.269 [2024-07-15 16:10:18.971053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.269 [2024-07-15 16:10:18.971085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:50.269 qpair failed and we were unable to recover it. 
00:28:50.269 [2024-07-15 16:10:18.971411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.269 [2024-07-15 16:10:18.971446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420
00:28:50.269 qpair failed and we were unable to recover it.
00:28:50.269 [2024-07-15 16:10:18.971660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.269 [2024-07-15 16:10:18.971692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420
00:28:50.269 qpair failed and we were unable to recover it.
00:28:50.269 [2024-07-15 16:10:18.971949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.269 [2024-07-15 16:10:18.971981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420
00:28:50.269 qpair failed and we were unable to recover it.
00:28:50.269 [2024-07-15 16:10:18.972213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.269 [2024-07-15 16:10:18.972256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420
00:28:50.269 qpair failed and we were unable to recover it.
00:28:50.269 [2024-07-15 16:10:18.972511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.269 [2024-07-15 16:10:18.972543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420
00:28:50.269 qpair failed and we were unable to recover it.
00:28:50.269 [2024-07-15 16:10:18.972754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.269 [2024-07-15 16:10:18.972785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420
00:28:50.269 qpair failed and we were unable to recover it.
00:28:50.269 [2024-07-15 16:10:18.973020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.269 [2024-07-15 16:10:18.973051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420
00:28:50.269 qpair failed and we were unable to recover it.
00:28:50.269 [2024-07-15 16:10:18.973357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.269 [2024-07-15 16:10:18.973390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420
00:28:50.269 qpair failed and we were unable to recover it.
00:28:50.269 [2024-07-15 16:10:18.973615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.269 [2024-07-15 16:10:18.973648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420
00:28:50.269 qpair failed and we were unable to recover it.
00:28:50.269 [2024-07-15 16:10:18.973931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.269 [2024-07-15 16:10:18.973971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420
00:28:50.269 qpair failed and we were unable to recover it.
00:28:50.269 [2024-07-15 16:10:18.974222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.269 [2024-07-15 16:10:18.974264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420
00:28:50.269 qpair failed and we were unable to recover it.
00:28:50.269 [2024-07-15 16:10:18.974445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.269 [2024-07-15 16:10:18.974478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420
00:28:50.269 qpair failed and we were unable to recover it.
00:28:50.269 [2024-07-15 16:10:18.974779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.269 [2024-07-15 16:10:18.974811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420
00:28:50.269 qpair failed and we were unable to recover it.
00:28:50.269 [2024-07-15 16:10:18.975020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.269 [2024-07-15 16:10:18.975053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420
00:28:50.269 qpair failed and we were unable to recover it.
00:28:50.269 [2024-07-15 16:10:18.975212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.270 [2024-07-15 16:10:18.975255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420
00:28:50.270 qpair failed and we were unable to recover it.
00:28:50.270 [2024-07-15 16:10:18.975560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.270 [2024-07-15 16:10:18.975593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420
00:28:50.270 qpair failed and we were unable to recover it.
00:28:50.270 [2024-07-15 16:10:18.975901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.270 [2024-07-15 16:10:18.975934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420
00:28:50.270 qpair failed and we were unable to recover it.
00:28:50.270 [2024-07-15 16:10:18.976245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.270 [2024-07-15 16:10:18.976279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420
00:28:50.270 qpair failed and we were unable to recover it.
00:28:50.270 [2024-07-15 16:10:18.976564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.270 [2024-07-15 16:10:18.976597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420
00:28:50.270 qpair failed and we were unable to recover it.
00:28:50.270 [2024-07-15 16:10:18.976750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.270 [2024-07-15 16:10:18.976781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420
00:28:50.270 qpair failed and we were unable to recover it.
00:28:50.270 [2024-07-15 16:10:18.977021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.270 [2024-07-15 16:10:18.977053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420
00:28:50.270 qpair failed and we were unable to recover it.
00:28:50.270 [2024-07-15 16:10:18.977335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.270 [2024-07-15 16:10:18.977369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420
00:28:50.270 qpair failed and we were unable to recover it.
00:28:50.270 [2024-07-15 16:10:18.977694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.270 [2024-07-15 16:10:18.977727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420
00:28:50.270 qpair failed and we were unable to recover it.
00:28:50.270 [2024-07-15 16:10:18.977971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.270 [2024-07-15 16:10:18.978004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420
00:28:50.270 qpair failed and we were unable to recover it.
00:28:50.270 [2024-07-15 16:10:18.978211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.270 [2024-07-15 16:10:18.978230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420
00:28:50.270 qpair failed and we were unable to recover it.
00:28:50.270 [2024-07-15 16:10:18.978474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.270 [2024-07-15 16:10:18.978507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420
00:28:50.270 qpair failed and we were unable to recover it.
00:28:50.270 [2024-07-15 16:10:18.978743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.270 [2024-07-15 16:10:18.978775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420
00:28:50.270 qpair failed and we were unable to recover it.
00:28:50.270 [2024-07-15 16:10:18.979071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.270 [2024-07-15 16:10:18.979104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420
00:28:50.270 qpair failed and we were unable to recover it.
00:28:50.270 [2024-07-15 16:10:18.979335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.270 [2024-07-15 16:10:18.979368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420
00:28:50.270 qpair failed and we were unable to recover it.
00:28:50.270 [2024-07-15 16:10:18.979591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.270 [2024-07-15 16:10:18.979623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420
00:28:50.270 qpair failed and we were unable to recover it.
00:28:50.270 [2024-07-15 16:10:18.979852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.270 [2024-07-15 16:10:18.979885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420
00:28:50.270 qpair failed and we were unable to recover it.
00:28:50.270 [2024-07-15 16:10:18.980189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.270 [2024-07-15 16:10:18.980220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420
00:28:50.270 qpair failed and we were unable to recover it.
00:28:50.270 [2024-07-15 16:10:18.980464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.270 [2024-07-15 16:10:18.980496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420
00:28:50.270 qpair failed and we were unable to recover it.
00:28:50.270 [2024-07-15 16:10:18.980679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.270 [2024-07-15 16:10:18.980712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420
00:28:50.270 qpair failed and we were unable to recover it.
00:28:50.270 [2024-07-15 16:10:18.980879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.270 [2024-07-15 16:10:18.980910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420
00:28:50.270 qpair failed and we were unable to recover it.
00:28:50.270 [2024-07-15 16:10:18.981213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.270 [2024-07-15 16:10:18.981260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420
00:28:50.270 qpair failed and we were unable to recover it.
00:28:50.270 [2024-07-15 16:10:18.981543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.270 [2024-07-15 16:10:18.981575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420
00:28:50.270 qpair failed and we were unable to recover it.
00:28:50.270 [2024-07-15 16:10:18.981790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.270 [2024-07-15 16:10:18.981826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420
00:28:50.270 qpair failed and we were unable to recover it.
00:28:50.270 [2024-07-15 16:10:18.982113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.270 [2024-07-15 16:10:18.982147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420
00:28:50.270 qpair failed and we were unable to recover it.
00:28:50.270 [2024-07-15 16:10:18.982429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.270 [2024-07-15 16:10:18.982464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420
00:28:50.270 qpair failed and we were unable to recover it.
00:28:50.270 [2024-07-15 16:10:18.982698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.270 [2024-07-15 16:10:18.982730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420
00:28:50.270 qpair failed and we were unable to recover it.
00:28:50.270 [2024-07-15 16:10:18.982890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.270 [2024-07-15 16:10:18.982903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420
00:28:50.270 qpair failed and we were unable to recover it.
00:28:50.270 [2024-07-15 16:10:18.983175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.270 [2024-07-15 16:10:18.983208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420
00:28:50.270 qpair failed and we were unable to recover it.
00:28:50.270 [2024-07-15 16:10:18.983499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.270 [2024-07-15 16:10:18.983532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420
00:28:50.270 qpair failed and we were unable to recover it.
00:28:50.270 [2024-07-15 16:10:18.983850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.270 [2024-07-15 16:10:18.983883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420
00:28:50.270 qpair failed and we were unable to recover it.
00:28:50.270 [2024-07-15 16:10:18.984146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.270 [2024-07-15 16:10:18.984178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420
00:28:50.270 qpair failed and we were unable to recover it.
00:28:50.270 [2024-07-15 16:10:18.984425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.270 [2024-07-15 16:10:18.984439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420
00:28:50.270 qpair failed and we were unable to recover it.
00:28:50.270 [2024-07-15 16:10:18.984622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.270 [2024-07-15 16:10:18.984636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420
00:28:50.270 qpair failed and we were unable to recover it.
00:28:50.270 [2024-07-15 16:10:18.984820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.270 [2024-07-15 16:10:18.984834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420
00:28:50.270 qpair failed and we were unable to recover it.
00:28:50.270 [2024-07-15 16:10:18.985102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.270 [2024-07-15 16:10:18.985140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420
00:28:50.270 qpair failed and we were unable to recover it.
00:28:50.270 [2024-07-15 16:10:18.985405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.270 [2024-07-15 16:10:18.985437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420
00:28:50.270 qpair failed and we were unable to recover it.
00:28:50.270 [2024-07-15 16:10:18.985680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.270 [2024-07-15 16:10:18.985713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420
00:28:50.270 qpair failed and we were unable to recover it.
00:28:50.270 [2024-07-15 16:10:18.985943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.270 [2024-07-15 16:10:18.985957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420
00:28:50.270 qpair failed and we were unable to recover it.
00:28:50.270 [2024-07-15 16:10:18.986133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.270 [2024-07-15 16:10:18.986146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420
00:28:50.270 qpair failed and we were unable to recover it.
00:28:50.270 [2024-07-15 16:10:18.986331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.270 [2024-07-15 16:10:18.986345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420
00:28:50.270 qpair failed and we were unable to recover it.
00:28:50.271 [2024-07-15 16:10:18.986549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.271 [2024-07-15 16:10:18.986581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420
00:28:50.271 qpair failed and we were unable to recover it.
00:28:50.271 [2024-07-15 16:10:18.986766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.271 [2024-07-15 16:10:18.986799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420
00:28:50.271 qpair failed and we were unable to recover it.
00:28:50.271 [2024-07-15 16:10:18.987140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.271 [2024-07-15 16:10:18.987172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420
00:28:50.271 qpair failed and we were unable to recover it.
00:28:50.271 [2024-07-15 16:10:18.987442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.271 [2024-07-15 16:10:18.987475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420
00:28:50.271 qpair failed and we were unable to recover it.
00:28:50.271 [2024-07-15 16:10:18.987757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.271 [2024-07-15 16:10:18.987789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420
00:28:50.271 qpair failed and we were unable to recover it.
00:28:50.271 [2024-07-15 16:10:18.987947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.271 [2024-07-15 16:10:18.987979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420
00:28:50.271 qpair failed and we were unable to recover it.
00:28:50.271 [2024-07-15 16:10:18.988141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.271 [2024-07-15 16:10:18.988174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420
00:28:50.271 qpair failed and we were unable to recover it.
00:28:50.271 [2024-07-15 16:10:18.988486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.271 [2024-07-15 16:10:18.988520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420
00:28:50.271 qpair failed and we were unable to recover it.
00:28:50.271 [2024-07-15 16:10:18.988771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.271 [2024-07-15 16:10:18.988805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420
00:28:50.271 qpair failed and we were unable to recover it.
00:28:50.271 [2024-07-15 16:10:18.989024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.271 [2024-07-15 16:10:18.989057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420
00:28:50.271 qpair failed and we were unable to recover it.
00:28:50.271 [2024-07-15 16:10:18.989378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.271 [2024-07-15 16:10:18.989413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420
00:28:50.271 qpair failed and we were unable to recover it.
00:28:50.271 [2024-07-15 16:10:18.989608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.271 [2024-07-15 16:10:18.989641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420
00:28:50.271 qpair failed and we were unable to recover it.
00:28:50.271 [2024-07-15 16:10:18.989861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.271 [2024-07-15 16:10:18.989894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420
00:28:50.271 qpair failed and we were unable to recover it.
00:28:50.271 [2024-07-15 16:10:18.990190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.271 [2024-07-15 16:10:18.990223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420
00:28:50.271 qpair failed and we were unable to recover it.
00:28:50.271 [2024-07-15 16:10:18.990552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.271 [2024-07-15 16:10:18.990584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420
00:28:50.271 qpair failed and we were unable to recover it.
00:28:50.271 [2024-07-15 16:10:18.990881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.271 [2024-07-15 16:10:18.990912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420
00:28:50.271 qpair failed and we were unable to recover it.
00:28:50.271 [2024-07-15 16:10:18.991068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.271 [2024-07-15 16:10:18.991101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420
00:28:50.271 qpair failed and we were unable to recover it.
00:28:50.271 [2024-07-15 16:10:18.991313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.271 [2024-07-15 16:10:18.991346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420
00:28:50.271 qpair failed and we were unable to recover it.
00:28:50.271 [2024-07-15 16:10:18.991574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.271 [2024-07-15 16:10:18.991606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420
00:28:50.271 qpair failed and we were unable to recover it.
00:28:50.271 [2024-07-15 16:10:18.991825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.271 [2024-07-15 16:10:18.991857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420
00:28:50.271 qpair failed and we were unable to recover it.
00:28:50.271 [2024-07-15 16:10:18.992161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.271 [2024-07-15 16:10:18.992194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420
00:28:50.271 qpair failed and we were unable to recover it.
00:28:50.271 [2024-07-15 16:10:18.992500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.271 [2024-07-15 16:10:18.992580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420
00:28:50.271 qpair failed and we were unable to recover it.
00:28:50.271 [2024-07-15 16:10:18.992903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.271 [2024-07-15 16:10:18.992939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420
00:28:50.271 qpair failed and we were unable to recover it.
00:28:50.271 [2024-07-15 16:10:18.993176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.271 [2024-07-15 16:10:18.993211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420
00:28:50.271 qpair failed and we were unable to recover it.
00:28:50.271 [2024-07-15 16:10:18.993514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.271 [2024-07-15 16:10:18.993532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420
00:28:50.271 qpair failed and we were unable to recover it.
00:28:50.271 [2024-07-15 16:10:18.993740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.271 [2024-07-15 16:10:18.993756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420
00:28:50.271 qpair failed and we were unable to recover it.
00:28:50.271 [2024-07-15 16:10:18.994052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.271 [2024-07-15 16:10:18.994085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420
00:28:50.271 qpair failed and we were unable to recover it.
00:28:50.271 [2024-07-15 16:10:18.994383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.271 [2024-07-15 16:10:18.994418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420
00:28:50.271 qpair failed and we were unable to recover it.
00:28:50.271 [2024-07-15 16:10:18.994650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.271 [2024-07-15 16:10:18.994683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420
00:28:50.271 qpair failed and we were unable to recover it.
00:28:50.271 [2024-07-15 16:10:18.994842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.271 [2024-07-15 16:10:18.994875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420
00:28:50.271 qpair failed and we were unable to recover it.
00:28:50.271 [2024-07-15 16:10:18.995154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.271 [2024-07-15 16:10:18.995187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420
00:28:50.271 qpair failed and we were unable to recover it.
00:28:50.271 [2024-07-15 16:10:18.995445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.271 [2024-07-15 16:10:18.995478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420
00:28:50.271 qpair failed and we were unable to recover it.
00:28:50.271 [2024-07-15 16:10:18.995784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.271 [2024-07-15 16:10:18.995817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420
00:28:50.271 qpair failed and we were unable to recover it.
00:28:50.271 [2024-07-15 16:10:18.996054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.271 [2024-07-15 16:10:18.996087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420
00:28:50.271 qpair failed and we were unable to recover it.
00:28:50.271 [2024-07-15 16:10:18.996299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.271 [2024-07-15 16:10:18.996341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420
00:28:50.271 qpair failed and we were unable to recover it.
00:28:50.271 [2024-07-15 16:10:18.996659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.271 [2024-07-15 16:10:18.996692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420
00:28:50.271 qpair failed and we were unable to recover it.
00:28:50.271 [2024-07-15 16:10:18.996933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.271 [2024-07-15 16:10:18.996950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420
00:28:50.271 qpair failed and we were unable to recover it.
00:28:50.271 [2024-07-15 16:10:18.997211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.271 [2024-07-15 16:10:18.997268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420
00:28:50.271 qpair failed and we were unable to recover it.
00:28:50.271 [2024-07-15 16:10:18.997516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.271 [2024-07-15 16:10:18.997549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420
00:28:50.271 qpair failed and we were unable to recover it.
00:28:50.271 [2024-07-15 16:10:18.997719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.271 [2024-07-15 16:10:18.997756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420
00:28:50.271 qpair failed and we were unable to recover it.
00:28:50.271 [2024-07-15 16:10:18.997992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.272 [2024-07-15 16:10:18.998007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420
00:28:50.272 qpair failed and we were unable to recover it.
00:28:50.272 [2024-07-15 16:10:18.998218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.272 [2024-07-15 16:10:18.998263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420
00:28:50.272 qpair failed and we were unable to recover it.
00:28:50.272 [2024-07-15 16:10:18.998566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.272 [2024-07-15 16:10:18.998598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420
00:28:50.272 qpair failed and we were unable to recover it.
00:28:50.272 [2024-07-15 16:10:18.998938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.272 [2024-07-15 16:10:18.998970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420
00:28:50.272 qpair failed and we were unable to recover it.
00:28:50.272 [2024-07-15 16:10:18.999277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.272 [2024-07-15 16:10:18.999312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420
00:28:50.272 qpair failed and we were unable to recover it.
00:28:50.272 [2024-07-15 16:10:18.999542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.272 [2024-07-15 16:10:18.999575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420
00:28:50.272 qpair failed and we were unable to recover it.
00:28:50.272 [2024-07-15 16:10:18.999905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.272 [2024-07-15 16:10:18.999937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420
00:28:50.272 qpair failed and we were unable to recover it.
00:28:50.272 [2024-07-15 16:10:19.000179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.272 [2024-07-15 16:10:19.000195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420
00:28:50.272 qpair failed and we were unable to recover it.
00:28:50.272 [2024-07-15 16:10:19.000381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.272 [2024-07-15 16:10:19.000415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420
00:28:50.272 qpair failed and we were unable to recover it.
00:28:50.272 [2024-07-15 16:10:19.000701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.272 [2024-07-15 16:10:19.000735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420
00:28:50.272 qpair failed and we were unable to recover it.
00:28:50.272 [2024-07-15 16:10:19.001030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.272 [2024-07-15 16:10:19.001075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420
00:28:50.272 qpair failed and we were unable to recover it.
00:28:50.272 [2024-07-15 16:10:19.001296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.272 [2024-07-15 16:10:19.001329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420
00:28:50.272 qpair failed and we were unable to recover it.
00:28:50.272 [2024-07-15 16:10:19.001641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.272 [2024-07-15 16:10:19.001674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420
00:28:50.272 qpair failed and we were unable to recover it.
00:28:50.272 [2024-07-15 16:10:19.001905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.272 [2024-07-15 16:10:19.001937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420
00:28:50.272 qpair failed and we were unable to recover it.
00:28:50.272 [2024-07-15 16:10:19.002154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.272 [2024-07-15 16:10:19.002186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420
00:28:50.272 qpair failed and we were unable to recover it.
00:28:50.272 [2024-07-15 16:10:19.002449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.272 [2024-07-15 16:10:19.002482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420
00:28:50.272 qpair failed and we were unable to recover it.
00:28:50.272 [2024-07-15 16:10:19.002702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.272 [2024-07-15 16:10:19.002735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420
00:28:50.272 qpair failed and we were unable to recover it.
00:28:50.272 [2024-07-15 16:10:19.002966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.272 [2024-07-15 16:10:19.002999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420
00:28:50.272 qpair failed and we were unable to recover it.
00:28:50.272 [2024-07-15 16:10:19.003143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.272 [2024-07-15 16:10:19.003174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420
00:28:50.272 qpair failed and we were unable to recover it.
00:28:50.272 [2024-07-15 16:10:19.003500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.272 [2024-07-15 16:10:19.003533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420
00:28:50.272 qpair failed and we were unable to recover it.
00:28:50.272 [2024-07-15 16:10:19.003763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.272 [2024-07-15 16:10:19.003794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420
00:28:50.272 qpair failed and we were unable to recover it.
00:28:50.272 [2024-07-15 16:10:19.004036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.272 [2024-07-15 16:10:19.004069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420
00:28:50.272 qpair failed and we were unable to recover it.
00:28:50.272 [2024-07-15 16:10:19.004298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.272 [2024-07-15 16:10:19.004316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420
00:28:50.272 qpair failed and we were unable to recover it.
00:28:50.272 [2024-07-15 16:10:19.004504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.272 [2024-07-15 16:10:19.004521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420
00:28:50.272 qpair failed and we were unable to recover it.
00:28:50.272 [2024-07-15 16:10:19.004721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.272 [2024-07-15 16:10:19.004737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420
00:28:50.272 qpair failed and we were unable to recover it.
00:28:50.272 [2024-07-15 16:10:19.004937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.272 [2024-07-15 16:10:19.004970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420
00:28:50.272 qpair failed and we were unable to recover it.
00:28:50.272 [2024-07-15 16:10:19.005189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.272 [2024-07-15 16:10:19.005222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420
00:28:50.272 qpair failed and we were unable to recover it.
00:28:50.272 [2024-07-15 16:10:19.005525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.272 [2024-07-15 16:10:19.005563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420
00:28:50.272 qpair failed and we were unable to recover it.
00:28:50.272 [2024-07-15 16:10:19.005786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.272 [2024-07-15 16:10:19.005817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420
00:28:50.272 qpair failed and we were unable to recover it.
00:28:50.272 [2024-07-15 16:10:19.006030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.272 [2024-07-15 16:10:19.006063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420
00:28:50.272 qpair failed and we were unable to recover it.
00:28:50.272 [2024-07-15 16:10:19.006353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.272 [2024-07-15 16:10:19.006371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420
00:28:50.272 qpair failed and we were unable to recover it.
00:28:50.272 [2024-07-15 16:10:19.006643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.272 [2024-07-15 16:10:19.006679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420
00:28:50.272 qpair failed and we were unable to recover it.
00:28:50.272 [2024-07-15 16:10:19.006915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.272 [2024-07-15 16:10:19.006947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420
00:28:50.272 qpair failed and we were unable to recover it.
00:28:50.272 [2024-07-15 16:10:19.007168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.272 [2024-07-15 16:10:19.007207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420
00:28:50.272 qpair failed and we were unable to recover it.
00:28:50.272 [2024-07-15 16:10:19.007463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.272 [2024-07-15 16:10:19.007505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420
00:28:50.272 qpair failed and we were unable to recover it.
00:28:50.272 [2024-07-15 16:10:19.007714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.272 [2024-07-15 16:10:19.007746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420
00:28:50.272 qpair failed and we were unable to recover it.
00:28:50.272 [2024-07-15 16:10:19.007994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.272 [2024-07-15 16:10:19.008027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420
00:28:50.272 qpair failed and we were unable to recover it.
00:28:50.272 [2024-07-15 16:10:19.008283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.272 [2024-07-15 16:10:19.008300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420
00:28:50.272 qpair failed and we were unable to recover it.
00:28:50.272 [2024-07-15 16:10:19.008498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.272 [2024-07-15 16:10:19.008516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420
00:28:50.272 qpair failed and we were unable to recover it.
00:28:50.272 [2024-07-15 16:10:19.008741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.272 [2024-07-15 16:10:19.008758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420
00:28:50.272 qpair failed and we were unable to recover it.
00:28:50.272 [2024-07-15 16:10:19.009088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.272 [2024-07-15 16:10:19.009120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420
00:28:50.272 qpair failed and we were unable to recover it.
00:28:50.273 [2024-07-15 16:10:19.009363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.273 [2024-07-15 16:10:19.009397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420
00:28:50.273 qpair failed and we were unable to recover it.
00:28:50.273 [2024-07-15 16:10:19.009619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.273 [2024-07-15 16:10:19.009651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420
00:28:50.273 qpair failed and we were unable to recover it.
00:28:50.273 [2024-07-15 16:10:19.009798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.273 [2024-07-15 16:10:19.009830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420
00:28:50.273 qpair failed and we were unable to recover it.
00:28:50.273 [2024-07-15 16:10:19.010059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.273 [2024-07-15 16:10:19.010095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420
00:28:50.273 qpair failed and we were unable to recover it.
00:28:50.273 [2024-07-15 16:10:19.010388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.273 [2024-07-15 16:10:19.010421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420
00:28:50.273 qpair failed and we were unable to recover it.
00:28:50.273 [2024-07-15 16:10:19.010582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.273 [2024-07-15 16:10:19.010615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420
00:28:50.273 qpair failed and we were unable to recover it.
00:28:50.273 [2024-07-15 16:10:19.010915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.273 [2024-07-15 16:10:19.010949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420
00:28:50.273 qpair failed and we were unable to recover it.
00:28:50.273 [2024-07-15 16:10:19.011236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.273 [2024-07-15 16:10:19.011253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420
00:28:50.273 qpair failed and we were unable to recover it.
00:28:50.273 [2024-07-15 16:10:19.011582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.273 [2024-07-15 16:10:19.011615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420
00:28:50.273 qpair failed and we were unable to recover it.
00:28:50.273 [2024-07-15 16:10:19.011882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.273 [2024-07-15 16:10:19.011914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420
00:28:50.273 qpair failed and we were unable to recover it.
00:28:50.273 [2024-07-15 16:10:19.012144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.273 [2024-07-15 16:10:19.012176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420
00:28:50.273 qpair failed and we were unable to recover it.
00:28:50.273 [2024-07-15 16:10:19.012491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.273 [2024-07-15 16:10:19.012524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420
00:28:50.273 qpair failed and we were unable to recover it.
00:28:50.273 [2024-07-15 16:10:19.012801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.273 [2024-07-15 16:10:19.012834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420
00:28:50.273 qpair failed and we were unable to recover it.
00:28:50.273 [2024-07-15 16:10:19.012997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.273 [2024-07-15 16:10:19.013029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420
00:28:50.273 qpair failed and we were unable to recover it.
00:28:50.273 [2024-07-15 16:10:19.013275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.273 [2024-07-15 16:10:19.013294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420
00:28:50.273 qpair failed and we were unable to recover it.
00:28:50.273 [2024-07-15 16:10:19.013608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.273 [2024-07-15 16:10:19.013641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420
00:28:50.273 qpair failed and we were unable to recover it.
00:28:50.273 [2024-07-15 16:10:19.013871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.273 [2024-07-15 16:10:19.013903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420
00:28:50.273 qpair failed and we were unable to recover it.
00:28:50.273 [2024-07-15 16:10:19.014235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.273 [2024-07-15 16:10:19.014268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420
00:28:50.273 qpair failed and we were unable to recover it.
00:28:50.273 [2024-07-15 16:10:19.014508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.273 [2024-07-15 16:10:19.014541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420
00:28:50.273 qpair failed and we were unable to recover it.
00:28:50.273 [2024-07-15 16:10:19.014780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.273 [2024-07-15 16:10:19.014812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420
00:28:50.273 qpair failed and we were unable to recover it.
00:28:50.273 [2024-07-15 16:10:19.015178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.273 [2024-07-15 16:10:19.015275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420
00:28:50.273 qpair failed and we were unable to recover it.
00:28:50.273 [2024-07-15 16:10:19.015576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.273 [2024-07-15 16:10:19.015613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420
00:28:50.273 qpair failed and we were unable to recover it.
00:28:50.273 [2024-07-15 16:10:19.015814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.273 [2024-07-15 16:10:19.015849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420
00:28:50.273 qpair failed and we were unable to recover it.
00:28:50.273 [2024-07-15 16:10:19.016025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.273 [2024-07-15 16:10:19.016060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420
00:28:50.273 qpair failed and we were unable to recover it.
00:28:50.273 [2024-07-15 16:10:19.016289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.273 [2024-07-15 16:10:19.016323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420
00:28:50.273 qpair failed and we were unable to recover it.
00:28:50.273 [2024-07-15 16:10:19.016537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.273 [2024-07-15 16:10:19.016569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420
00:28:50.273 qpair failed and we were unable to recover it.
00:28:50.273 [2024-07-15 16:10:19.016742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.273 [2024-07-15 16:10:19.016774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420
00:28:50.273 qpair failed and we were unable to recover it.
00:28:50.273 [2024-07-15 16:10:19.017002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.273 [2024-07-15 16:10:19.017035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420
00:28:50.273 qpair failed and we were unable to recover it.
00:28:50.273 [2024-07-15 16:10:19.017317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.273 [2024-07-15 16:10:19.017352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420
00:28:50.273 qpair failed and we were unable to recover it.
00:28:50.273 [2024-07-15 16:10:19.017512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.273 [2024-07-15 16:10:19.017544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420
00:28:50.273 qpair failed and we were unable to recover it.
00:28:50.273 [2024-07-15 16:10:19.017824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.273 [2024-07-15 16:10:19.017857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420
00:28:50.273 qpair failed and we were unable to recover it.
00:28:50.273 [2024-07-15 16:10:19.018138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.273 [2024-07-15 16:10:19.018170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420
00:28:50.273 qpair failed and we were unable to recover it.
00:28:50.273 [2024-07-15 16:10:19.018489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.273 [2024-07-15 16:10:19.018522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420
00:28:50.273 qpair failed and we were unable to recover it.
00:28:50.273 [2024-07-15 16:10:19.018771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.273 [2024-07-15 16:10:19.018813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420
00:28:50.273 qpair failed and we were unable to recover it.
00:28:50.273 [2024-07-15 16:10:19.019046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.274 [2024-07-15 16:10:19.019077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:50.274 qpair failed and we were unable to recover it. 00:28:50.274 [2024-07-15 16:10:19.019435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.274 [2024-07-15 16:10:19.019468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:50.274 qpair failed and we were unable to recover it. 00:28:50.274 [2024-07-15 16:10:19.019751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.274 [2024-07-15 16:10:19.019784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:50.274 qpair failed and we were unable to recover it. 00:28:50.274 [2024-07-15 16:10:19.020018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.274 [2024-07-15 16:10:19.020031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:50.274 qpair failed and we were unable to recover it. 00:28:50.274 [2024-07-15 16:10:19.020246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.274 [2024-07-15 16:10:19.020281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:50.274 qpair failed and we were unable to recover it. 00:28:50.274 [2024-07-15 16:10:19.020577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.274 [2024-07-15 16:10:19.020611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:50.274 qpair failed and we were unable to recover it. 00:28:50.274 [2024-07-15 16:10:19.020823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.274 [2024-07-15 16:10:19.020855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:50.274 qpair failed and we were unable to recover it. 00:28:50.274 [2024-07-15 16:10:19.021136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.274 [2024-07-15 16:10:19.021170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:50.274 qpair failed and we were unable to recover it. 00:28:50.274 [2024-07-15 16:10:19.021455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.274 [2024-07-15 16:10:19.021469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:50.274 qpair failed and we were unable to recover it. 00:28:50.274 [2024-07-15 16:10:19.021731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.274 [2024-07-15 16:10:19.021744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:50.274 qpair failed and we were unable to recover it. 
00:28:50.274 [2024-07-15 16:10:19.021994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.274 [2024-07-15 16:10:19.022008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:50.274 qpair failed and we were unable to recover it. 00:28:50.274 [2024-07-15 16:10:19.022210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.274 [2024-07-15 16:10:19.022222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:50.274 qpair failed and we were unable to recover it. 00:28:50.274 [2024-07-15 16:10:19.022447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.274 [2024-07-15 16:10:19.022461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:50.274 qpair failed and we were unable to recover it. 00:28:50.274 [2024-07-15 16:10:19.022712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.274 [2024-07-15 16:10:19.022745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:50.274 qpair failed and we were unable to recover it. 00:28:50.274 [2024-07-15 16:10:19.022997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.274 [2024-07-15 16:10:19.023029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:50.274 qpair failed and we were unable to recover it. 00:28:50.274 [2024-07-15 16:10:19.023331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.274 [2024-07-15 16:10:19.023345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:50.274 qpair failed and we were unable to recover it. 00:28:50.274 [2024-07-15 16:10:19.023535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.274 [2024-07-15 16:10:19.023548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:50.274 qpair failed and we were unable to recover it. 00:28:50.274 [2024-07-15 16:10:19.023676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.274 [2024-07-15 16:10:19.023689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:50.274 qpair failed and we were unable to recover it. 00:28:50.274 [2024-07-15 16:10:19.023884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.274 [2024-07-15 16:10:19.023916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:50.274 qpair failed and we were unable to recover it. 00:28:50.274 [2024-07-15 16:10:19.024195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.274 [2024-07-15 16:10:19.024238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:50.274 qpair failed and we were unable to recover it. 
00:28:50.274 [2024-07-15 16:10:19.024403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.274 [2024-07-15 16:10:19.024435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:50.274 qpair failed and we were unable to recover it. 00:28:50.274 [2024-07-15 16:10:19.024652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.274 [2024-07-15 16:10:19.024684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:50.274 qpair failed and we were unable to recover it. 00:28:50.274 [2024-07-15 16:10:19.024916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.274 [2024-07-15 16:10:19.024949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:50.274 qpair failed and we were unable to recover it. 00:28:50.274 [2024-07-15 16:10:19.025201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.274 [2024-07-15 16:10:19.025247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:50.274 qpair failed and we were unable to recover it. 00:28:50.274 [2024-07-15 16:10:19.025479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.274 [2024-07-15 16:10:19.025511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:50.274 qpair failed and we were unable to recover it. 00:28:50.274 [2024-07-15 16:10:19.025757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.274 [2024-07-15 16:10:19.025789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:50.274 qpair failed and we were unable to recover it. 00:28:50.274 [2024-07-15 16:10:19.026008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.274 [2024-07-15 16:10:19.026022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:50.274 qpair failed and we were unable to recover it. 00:28:50.274 [2024-07-15 16:10:19.026257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.274 [2024-07-15 16:10:19.026271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:50.274 qpair failed and we were unable to recover it. 00:28:50.274 [2024-07-15 16:10:19.026464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.274 [2024-07-15 16:10:19.026496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:50.274 qpair failed and we were unable to recover it. 00:28:50.274 [2024-07-15 16:10:19.026664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.274 [2024-07-15 16:10:19.026696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:50.274 qpair failed and we were unable to recover it. 
00:28:50.274 [2024-07-15 16:10:19.027006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.274 [2024-07-15 16:10:19.027038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:50.274 qpair failed and we were unable to recover it. 00:28:50.274 [2024-07-15 16:10:19.027251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.274 [2024-07-15 16:10:19.027284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:50.274 qpair failed and we were unable to recover it. 00:28:50.274 [2024-07-15 16:10:19.027537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.274 [2024-07-15 16:10:19.027570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:50.274 qpair failed and we were unable to recover it. 00:28:50.274 [2024-07-15 16:10:19.027735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.274 [2024-07-15 16:10:19.027766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:50.274 qpair failed and we were unable to recover it. 00:28:50.274 [2024-07-15 16:10:19.027943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.274 [2024-07-15 16:10:19.027975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:50.274 qpair failed and we were unable to recover it. 00:28:50.274 [2024-07-15 16:10:19.028195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.274 [2024-07-15 16:10:19.028208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:50.274 qpair failed and we were unable to recover it. 00:28:50.274 [2024-07-15 16:10:19.028409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.274 [2024-07-15 16:10:19.028443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:50.274 qpair failed and we were unable to recover it. 00:28:50.274 [2024-07-15 16:10:19.028673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.274 [2024-07-15 16:10:19.028708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:50.274 qpair failed and we were unable to recover it. 00:28:50.274 [2024-07-15 16:10:19.028889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.274 [2024-07-15 16:10:19.028921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:50.274 qpair failed and we were unable to recover it. 00:28:50.274 [2024-07-15 16:10:19.029194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.274 [2024-07-15 16:10:19.029209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:50.274 qpair failed and we were unable to recover it. 
00:28:50.274 [2024-07-15 16:10:19.029467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.274 [2024-07-15 16:10:19.029509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:50.274 qpair failed and we were unable to recover it. 00:28:50.274 [2024-07-15 16:10:19.029685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.275 [2024-07-15 16:10:19.029717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:50.275 qpair failed and we were unable to recover it. 00:28:50.275 [2024-07-15 16:10:19.030017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.275 [2024-07-15 16:10:19.030049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:50.275 qpair failed and we were unable to recover it. 00:28:50.275 [2024-07-15 16:10:19.030389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.275 [2024-07-15 16:10:19.030424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:50.275 qpair failed and we were unable to recover it. 00:28:50.275 [2024-07-15 16:10:19.030657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.275 [2024-07-15 16:10:19.030690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:50.275 qpair failed and we were unable to recover it. 00:28:50.275 [2024-07-15 16:10:19.030990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.275 [2024-07-15 16:10:19.031026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:50.275 qpair failed and we were unable to recover it. 00:28:50.275 [2024-07-15 16:10:19.031258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.275 [2024-07-15 16:10:19.031271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:50.275 qpair failed and we were unable to recover it. 00:28:50.275 [2024-07-15 16:10:19.031398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.275 [2024-07-15 16:10:19.031411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:50.275 qpair failed and we were unable to recover it. 00:28:50.275 [2024-07-15 16:10:19.031605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.275 [2024-07-15 16:10:19.031638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:50.275 qpair failed and we were unable to recover it. 00:28:50.275 [2024-07-15 16:10:19.031854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.275 [2024-07-15 16:10:19.031886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:50.275 qpair failed and we were unable to recover it. 
00:28:50.275 [2024-07-15 16:10:19.032118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.275 [2024-07-15 16:10:19.032150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:50.275 qpair failed and we were unable to recover it. 00:28:50.275 [2024-07-15 16:10:19.032393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.275 [2024-07-15 16:10:19.032407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:50.275 qpair failed and we were unable to recover it. 00:28:50.275 [2024-07-15 16:10:19.032649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.275 [2024-07-15 16:10:19.032663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:50.275 qpair failed and we were unable to recover it. 00:28:50.275 [2024-07-15 16:10:19.032790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.275 [2024-07-15 16:10:19.032804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:50.275 qpair failed and we were unable to recover it. 00:28:50.275 [2024-07-15 16:10:19.032985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.275 [2024-07-15 16:10:19.033017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:50.275 qpair failed and we were unable to recover it. 00:28:50.275 [2024-07-15 16:10:19.033240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.275 [2024-07-15 16:10:19.033272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:50.275 qpair failed and we were unable to recover it. 00:28:50.275 [2024-07-15 16:10:19.033509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.275 [2024-07-15 16:10:19.033542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:50.275 qpair failed and we were unable to recover it. 00:28:50.275 [2024-07-15 16:10:19.033776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.275 [2024-07-15 16:10:19.033807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:50.275 qpair failed and we were unable to recover it. 00:28:50.275 [2024-07-15 16:10:19.034031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.275 [2024-07-15 16:10:19.034045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:50.275 qpair failed and we were unable to recover it. 00:28:50.275 [2024-07-15 16:10:19.034301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.275 [2024-07-15 16:10:19.034347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:50.275 qpair failed and we were unable to recover it. 
00:28:50.275 [2024-07-15 16:10:19.034580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.275 [2024-07-15 16:10:19.034613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:50.275 qpair failed and we were unable to recover it. 00:28:50.275 [2024-07-15 16:10:19.034905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.275 [2024-07-15 16:10:19.034938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:50.275 qpair failed and we were unable to recover it. 00:28:50.275 [2024-07-15 16:10:19.035200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.275 [2024-07-15 16:10:19.035246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:50.275 qpair failed and we were unable to recover it. 00:28:50.275 [2024-07-15 16:10:19.035430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.275 [2024-07-15 16:10:19.035462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:50.275 qpair failed and we were unable to recover it. 00:28:50.275 [2024-07-15 16:10:19.035698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.275 [2024-07-15 16:10:19.035730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:50.275 qpair failed and we were unable to recover it. 00:28:50.275 [2024-07-15 16:10:19.036058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.275 [2024-07-15 16:10:19.036091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:50.275 qpair failed and we were unable to recover it. 00:28:50.275 [2024-07-15 16:10:19.036334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.275 [2024-07-15 16:10:19.036373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:50.275 qpair failed and we were unable to recover it. 00:28:50.275 [2024-07-15 16:10:19.036607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.275 [2024-07-15 16:10:19.036638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:50.275 qpair failed and we were unable to recover it. 00:28:50.275 [2024-07-15 16:10:19.036860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.275 [2024-07-15 16:10:19.036893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:50.275 qpair failed and we were unable to recover it. 00:28:50.275 [2024-07-15 16:10:19.037178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.275 [2024-07-15 16:10:19.037211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:50.275 qpair failed and we were unable to recover it. 
00:28:50.275 [2024-07-15 16:10:19.037456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.275 [2024-07-15 16:10:19.037490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:50.275 qpair failed and we were unable to recover it. 00:28:50.275 [2024-07-15 16:10:19.037770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.275 [2024-07-15 16:10:19.037802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:50.275 qpair failed and we were unable to recover it. 00:28:50.275 [2024-07-15 16:10:19.038088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.275 [2024-07-15 16:10:19.038120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:50.275 qpair failed and we were unable to recover it. 00:28:50.275 [2024-07-15 16:10:19.038433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.275 [2024-07-15 16:10:19.038467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:50.275 qpair failed and we were unable to recover it. 00:28:50.275 [2024-07-15 16:10:19.038754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.275 [2024-07-15 16:10:19.038786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:50.275 qpair failed and we were unable to recover it. 00:28:50.275 [2024-07-15 16:10:19.039005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.275 [2024-07-15 16:10:19.039039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:50.275 qpair failed and we were unable to recover it. 00:28:50.275 [2024-07-15 16:10:19.039322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.275 [2024-07-15 16:10:19.039361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:50.275 qpair failed and we were unable to recover it. 00:28:50.275 [2024-07-15 16:10:19.039526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.275 [2024-07-15 16:10:19.039558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:50.275 qpair failed and we were unable to recover it. 00:28:50.275 [2024-07-15 16:10:19.039840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.275 [2024-07-15 16:10:19.039872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:50.275 qpair failed and we were unable to recover it. 00:28:50.275 [2024-07-15 16:10:19.040093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.275 [2024-07-15 16:10:19.040107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:50.275 qpair failed and we were unable to recover it. 
00:28:50.275 [2024-07-15 16:10:19.040326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.275 [2024-07-15 16:10:19.040361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:50.275 qpair failed and we were unable to recover it. 00:28:50.275 [2024-07-15 16:10:19.040572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.275 [2024-07-15 16:10:19.040604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:50.275 qpair failed and we were unable to recover it. 00:28:50.275 [2024-07-15 16:10:19.040782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.276 [2024-07-15 16:10:19.040814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:50.276 qpair failed and we were unable to recover it. 00:28:50.276 [2024-07-15 16:10:19.041023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.276 [2024-07-15 16:10:19.041057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:50.276 qpair failed and we were unable to recover it. 00:28:50.276 [2024-07-15 16:10:19.041334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.276 [2024-07-15 16:10:19.041349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:50.276 qpair failed and we were unable to recover it. 00:28:50.276 [2024-07-15 16:10:19.041518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.276 [2024-07-15 16:10:19.041532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:50.276 qpair failed and we were unable to recover it. 00:28:50.276 [2024-07-15 16:10:19.041778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.276 [2024-07-15 16:10:19.041811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:50.276 qpair failed and we were unable to recover it. 00:28:50.276 [2024-07-15 16:10:19.042095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.276 [2024-07-15 16:10:19.042128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:50.276 qpair failed and we were unable to recover it. 00:28:50.276 [2024-07-15 16:10:19.042441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.276 [2024-07-15 16:10:19.042475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:50.276 qpair failed and we were unable to recover it. 00:28:50.276 [2024-07-15 16:10:19.042784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.276 [2024-07-15 16:10:19.042816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:50.276 qpair failed and we were unable to recover it. 
00:28:50.276 [2024-07-15 16:10:19.043099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.276 [2024-07-15 16:10:19.043113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:50.276 qpair failed and we were unable to recover it. 00:28:50.276 [2024-07-15 16:10:19.043311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.276 [2024-07-15 16:10:19.043326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:50.276 qpair failed and we were unable to recover it. 00:28:50.276 [2024-07-15 16:10:19.043586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.276 [2024-07-15 16:10:19.043600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:50.276 qpair failed and we were unable to recover it. 00:28:50.276 [2024-07-15 16:10:19.043705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.276 [2024-07-15 16:10:19.043719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:50.276 qpair failed and we were unable to recover it. 00:28:50.276 [2024-07-15 16:10:19.043847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.276 [2024-07-15 16:10:19.043881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:50.276 qpair failed and we were unable to recover it. 00:28:50.276 [2024-07-15 16:10:19.044108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.276 [2024-07-15 16:10:19.044141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:50.276 qpair failed and we were unable to recover it. 00:28:50.276 [2024-07-15 16:10:19.044377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.276 [2024-07-15 16:10:19.044410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:50.276 qpair failed and we were unable to recover it. 00:28:50.276 [2024-07-15 16:10:19.044634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.276 [2024-07-15 16:10:19.044667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:50.276 qpair failed and we were unable to recover it. 00:28:50.276 [2024-07-15 16:10:19.044845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.276 [2024-07-15 16:10:19.044878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:50.276 qpair failed and we were unable to recover it. 00:28:50.276 [2024-07-15 16:10:19.045156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.276 [2024-07-15 16:10:19.045169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:50.276 qpair failed and we were unable to recover it. 
00:28:50.276 [2024-07-15 16:10:19.045455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.276 [2024-07-15 16:10:19.045490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:50.276 qpair failed and we were unable to recover it. 00:28:50.276 [2024-07-15 16:10:19.045808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.276 [2024-07-15 16:10:19.045841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:50.276 qpair failed and we were unable to recover it. 00:28:50.276 [2024-07-15 16:10:19.046070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.276 [2024-07-15 16:10:19.046103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:50.276 qpair failed and we were unable to recover it. 00:28:50.276 [2024-07-15 16:10:19.046406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.276 [2024-07-15 16:10:19.046441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:50.276 qpair failed and we were unable to recover it. 00:28:50.276 [2024-07-15 16:10:19.046704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.276 [2024-07-15 16:10:19.046736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:50.276 qpair failed and we were unable to recover it. 00:28:50.276 [2024-07-15 16:10:19.046980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.276 [2024-07-15 16:10:19.047012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:50.276 qpair failed and we were unable to recover it. 00:28:50.276 [2024-07-15 16:10:19.047303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.276 [2024-07-15 16:10:19.047344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:50.276 qpair failed and we were unable to recover it. 00:28:50.276 [2024-07-15 16:10:19.047642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.276 [2024-07-15 16:10:19.047675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:50.276 qpair failed and we were unable to recover it. 00:28:50.276 [2024-07-15 16:10:19.047978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.276 [2024-07-15 16:10:19.048011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:50.276 qpair failed and we were unable to recover it. 00:28:50.276 [2024-07-15 16:10:19.048306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.276 [2024-07-15 16:10:19.048341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:50.276 qpair failed and we were unable to recover it. 
00:28:50.276 [2024-07-15 16:10:19.048564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.276 [2024-07-15 16:10:19.048597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:50.276 qpair failed and we were unable to recover it. 00:28:50.276 [2024-07-15 16:10:19.048820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.276 [2024-07-15 16:10:19.048852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:50.276 qpair failed and we were unable to recover it. 00:28:50.276 [2024-07-15 16:10:19.049134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.276 [2024-07-15 16:10:19.049167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:50.276 qpair failed and we were unable to recover it. 00:28:50.276 [2024-07-15 16:10:19.049505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.276 [2024-07-15 16:10:19.049538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:50.276 qpair failed and we were unable to recover it. 00:28:50.276 [2024-07-15 16:10:19.049823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.276 [2024-07-15 16:10:19.049856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:50.276 qpair failed and we were unable to recover it. 00:28:50.276 [2024-07-15 16:10:19.050166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.276 [2024-07-15 16:10:19.050197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:50.276 qpair failed and we were unable to recover it. 00:28:50.276 [2024-07-15 16:10:19.050428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.276 [2024-07-15 16:10:19.050462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:50.276 qpair failed and we were unable to recover it. 00:28:50.276 [2024-07-15 16:10:19.050761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.276 [2024-07-15 16:10:19.050794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:50.276 qpair failed and we were unable to recover it. 00:28:50.276 [2024-07-15 16:10:19.051059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.276 [2024-07-15 16:10:19.051091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:50.276 qpair failed and we were unable to recover it. 00:28:50.276 [2024-07-15 16:10:19.051317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.276 [2024-07-15 16:10:19.051359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:50.276 qpair failed and we were unable to recover it. 
00:28:50.276 [2024-07-15 16:10:19.051511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.276 [2024-07-15 16:10:19.051524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:50.276 qpair failed and we were unable to recover it. 00:28:50.276 [2024-07-15 16:10:19.051722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.276 [2024-07-15 16:10:19.051736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:50.276 qpair failed and we were unable to recover it. 00:28:50.276 [2024-07-15 16:10:19.051903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.276 [2024-07-15 16:10:19.051916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:50.276 qpair failed and we were unable to recover it. 00:28:50.276 [2024-07-15 16:10:19.052101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.277 [2024-07-15 16:10:19.052114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:50.277 qpair failed and we were unable to recover it. 00:28:50.277 [2024-07-15 16:10:19.052324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.277 [2024-07-15 16:10:19.052338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:50.277 qpair failed and we were unable to recover it. 00:28:50.277 [2024-07-15 16:10:19.052515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.277 [2024-07-15 16:10:19.052528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:50.277 qpair failed and we were unable to recover it. 00:28:50.277 [2024-07-15 16:10:19.052721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.277 [2024-07-15 16:10:19.052754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:50.277 qpair failed and we were unable to recover it. 00:28:50.277 [2024-07-15 16:10:19.052904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.277 [2024-07-15 16:10:19.052937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:50.277 qpair failed and we were unable to recover it. 00:28:50.277 [2024-07-15 16:10:19.053182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.277 [2024-07-15 16:10:19.053229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:50.277 qpair failed and we were unable to recover it. 00:28:50.277 [2024-07-15 16:10:19.053491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.277 [2024-07-15 16:10:19.053525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:50.277 qpair failed and we were unable to recover it. 
00:28:50.277 [2024-07-15 16:10:19.053770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.277 [2024-07-15 16:10:19.053802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:50.277 qpair failed and we were unable to recover it. 00:28:50.277 [2024-07-15 16:10:19.054057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.277 [2024-07-15 16:10:19.054089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:50.277 qpair failed and we were unable to recover it. 00:28:50.277 [2024-07-15 16:10:19.054301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.277 [2024-07-15 16:10:19.054335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:50.277 qpair failed and we were unable to recover it. 00:28:50.277 [2024-07-15 16:10:19.054494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.277 [2024-07-15 16:10:19.054528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:50.277 qpair failed and we were unable to recover it. 00:28:50.277 [2024-07-15 16:10:19.054740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.277 [2024-07-15 16:10:19.054771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:50.277 qpair failed and we were unable to recover it. 00:28:50.277 [2024-07-15 16:10:19.054991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.277 [2024-07-15 16:10:19.055022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:50.277 qpair failed and we were unable to recover it. 00:28:50.277 [2024-07-15 16:10:19.055178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.277 [2024-07-15 16:10:19.055210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:50.277 qpair failed and we were unable to recover it. 00:28:50.277 [2024-07-15 16:10:19.055535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.277 [2024-07-15 16:10:19.055568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:50.277 qpair failed and we were unable to recover it. 00:28:50.277 [2024-07-15 16:10:19.055739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.277 [2024-07-15 16:10:19.055771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:50.277 qpair failed and we were unable to recover it. 00:28:50.277 [2024-07-15 16:10:19.055965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.277 [2024-07-15 16:10:19.055978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:50.277 qpair failed and we were unable to recover it. 
00:28:50.277 [2024-07-15 16:10:19.056133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.277 [2024-07-15 16:10:19.056147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:50.277 qpair failed and we were unable to recover it. 00:28:50.277 [2024-07-15 16:10:19.056414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.277 [2024-07-15 16:10:19.056447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:50.277 qpair failed and we were unable to recover it. 00:28:50.277 [2024-07-15 16:10:19.056671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.277 [2024-07-15 16:10:19.056703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:50.277 qpair failed and we were unable to recover it. 00:28:50.277 [2024-07-15 16:10:19.056955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.277 [2024-07-15 16:10:19.056987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:50.277 qpair failed and we were unable to recover it. 00:28:50.277 [2024-07-15 16:10:19.057165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.277 [2024-07-15 16:10:19.057199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:50.277 qpair failed and we were unable to recover it. 00:28:50.277 [2024-07-15 16:10:19.057406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.277 [2024-07-15 16:10:19.057447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:50.277 qpair failed and we were unable to recover it. 00:28:50.277 [2024-07-15 16:10:19.057703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.277 [2024-07-15 16:10:19.057740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:50.277 qpair failed and we were unable to recover it. 00:28:50.277 [2024-07-15 16:10:19.057959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.277 [2024-07-15 16:10:19.057990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:50.277 qpair failed and we were unable to recover it. 00:28:50.277 [2024-07-15 16:10:19.058163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.277 [2024-07-15 16:10:19.058195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:50.277 qpair failed and we were unable to recover it. 00:28:50.277 [2024-07-15 16:10:19.058485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.277 [2024-07-15 16:10:19.058499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:50.277 qpair failed and we were unable to recover it. 
00:28:50.277 [2024-07-15 16:10:19.058681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.277 [2024-07-15 16:10:19.058694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:50.277 qpair failed and we were unable to recover it. 00:28:50.277 [2024-07-15 16:10:19.058978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.277 [2024-07-15 16:10:19.059011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:50.277 qpair failed and we were unable to recover it. 00:28:50.277 [2024-07-15 16:10:19.059232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.277 [2024-07-15 16:10:19.059245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:50.277 qpair failed and we were unable to recover it. 00:28:50.277 [2024-07-15 16:10:19.059435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.277 [2024-07-15 16:10:19.059447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:50.277 qpair failed and we were unable to recover it. 00:28:50.277 [2024-07-15 16:10:19.059637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.277 [2024-07-15 16:10:19.059650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:50.277 qpair failed and we were unable to recover it. 00:28:50.277 [2024-07-15 16:10:19.059834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.277 [2024-07-15 16:10:19.059847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:50.277 qpair failed and we were unable to recover it. 00:28:50.277 [2024-07-15 16:10:19.060041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.277 [2024-07-15 16:10:19.060074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:50.277 qpair failed and we were unable to recover it. 00:28:50.277 [2024-07-15 16:10:19.060307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.277 [2024-07-15 16:10:19.060341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:50.277 qpair failed and we were unable to recover it. 00:28:50.277 [2024-07-15 16:10:19.060640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.278 [2024-07-15 16:10:19.060672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:50.278 qpair failed and we were unable to recover it. 00:28:50.278 [2024-07-15 16:10:19.060844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.278 [2024-07-15 16:10:19.060877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:50.278 qpair failed and we were unable to recover it. 
00:28:50.282 [2024-07-15 16:10:19.105744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.282 [2024-07-15 16:10:19.105776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420
00:28:50.282 qpair failed and we were unable to recover it.
00:28:50.282 [2024-07-15 16:10:19.106112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.282 [2024-07-15 16:10:19.106144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420
00:28:50.282 qpair failed and we were unable to recover it.
00:28:50.282 [2024-07-15 16:10:19.106440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.282 [2024-07-15 16:10:19.106453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420
00:28:50.282 qpair failed and we were unable to recover it.
00:28:50.282 [2024-07-15 16:10:19.106676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.282 [2024-07-15 16:10:19.106723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420
00:28:50.282 qpair failed and we were unable to recover it.
00:28:50.282 [2024-07-15 16:10:19.106971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.282 [2024-07-15 16:10:19.107018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420
00:28:50.282 qpair failed and we were unable to recover it.
00:28:50.282 [2024-07-15 16:10:19.107289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.282 [2024-07-15 16:10:19.107366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420
00:28:50.282 qpair failed and we were unable to recover it.
00:28:50.282 [2024-07-15 16:10:19.107576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.282 [2024-07-15 16:10:19.107613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420
00:28:50.282 qpair failed and we were unable to recover it.
00:28:50.282 [2024-07-15 16:10:19.107908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.282 [2024-07-15 16:10:19.107941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420
00:28:50.282 qpair failed and we were unable to recover it.
00:28:50.282 [2024-07-15 16:10:19.108194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.282 [2024-07-15 16:10:19.108238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420
00:28:50.282 qpair failed and we were unable to recover it.
00:28:50.282 [2024-07-15 16:10:19.108470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.282 [2024-07-15 16:10:19.108504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420
00:28:50.282 qpair failed and we were unable to recover it.
00:28:50.282 [2024-07-15 16:10:19.108813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.282 [2024-07-15 16:10:19.108845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420
00:28:50.282 qpair failed and we were unable to recover it.
00:28:50.282 [2024-07-15 16:10:19.109156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.282 [2024-07-15 16:10:19.109188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420
00:28:50.282 qpair failed and we were unable to recover it.
00:28:50.282 [2024-07-15 16:10:19.109449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.282 [2024-07-15 16:10:19.109483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420
00:28:50.282 qpair failed and we were unable to recover it.
00:28:50.282 [2024-07-15 16:10:19.109661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.282 [2024-07-15 16:10:19.109694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420
00:28:50.282 qpair failed and we were unable to recover it.
00:28:50.282 [2024-07-15 16:10:19.109951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.282 [2024-07-15 16:10:19.109984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420
00:28:50.282 qpair failed and we were unable to recover it.
00:28:50.282 [2024-07-15 16:10:19.110320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.282 [2024-07-15 16:10:19.110354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420
00:28:50.282 qpair failed and we were unable to recover it.
00:28:50.282 [2024-07-15 16:10:19.110653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.282 [2024-07-15 16:10:19.110695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420
00:28:50.282 qpair failed and we were unable to recover it.
00:28:50.282 [2024-07-15 16:10:19.110934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.282 [2024-07-15 16:10:19.110967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420
00:28:50.282 qpair failed and we were unable to recover it.
00:28:50.282 [2024-07-15 16:10:19.111192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.282 [2024-07-15 16:10:19.111237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420
00:28:50.282 qpair failed and we were unable to recover it.
00:28:50.282 [2024-07-15 16:10:19.111464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.282 [2024-07-15 16:10:19.111481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420
00:28:50.282 qpair failed and we were unable to recover it.
00:28:50.282 [2024-07-15 16:10:19.111663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.282 [2024-07-15 16:10:19.111695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420
00:28:50.282 qpair failed and we were unable to recover it.
00:28:50.282 [2024-07-15 16:10:19.111955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.282 [2024-07-15 16:10:19.111988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420
00:28:50.282 qpair failed and we were unable to recover it.
00:28:50.282 [2024-07-15 16:10:19.112284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.282 [2024-07-15 16:10:19.112318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420
00:28:50.282 qpair failed and we were unable to recover it.
00:28:50.282 [2024-07-15 16:10:19.112620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.282 [2024-07-15 16:10:19.112652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420
00:28:50.282 qpair failed and we were unable to recover it.
00:28:50.282 [2024-07-15 16:10:19.112972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.282 [2024-07-15 16:10:19.113004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420
00:28:50.283 qpair failed and we were unable to recover it.
00:28:50.283 [2024-07-15 16:10:19.113280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.283 [2024-07-15 16:10:19.113315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420
00:28:50.283 qpair failed and we were unable to recover it.
00:28:50.283 [2024-07-15 16:10:19.113572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.283 [2024-07-15 16:10:19.113604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420
00:28:50.283 qpair failed and we were unable to recover it.
00:28:50.283 [2024-07-15 16:10:19.113840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.283 [2024-07-15 16:10:19.113872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420
00:28:50.283 qpair failed and we were unable to recover it.
00:28:50.283 [2024-07-15 16:10:19.114238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.283 [2024-07-15 16:10:19.114272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420
00:28:50.283 qpair failed and we were unable to recover it.
00:28:50.283 [2024-07-15 16:10:19.114524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.283 [2024-07-15 16:10:19.114556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420
00:28:50.283 qpair failed and we were unable to recover it.
00:28:50.283 [2024-07-15 16:10:19.114726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.283 [2024-07-15 16:10:19.114759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.283 qpair failed and we were unable to recover it. 00:28:50.283 [2024-07-15 16:10:19.115074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.283 [2024-07-15 16:10:19.115106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.283 qpair failed and we were unable to recover it. 00:28:50.283 [2024-07-15 16:10:19.115332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.283 [2024-07-15 16:10:19.115366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.283 qpair failed and we were unable to recover it. 00:28:50.283 [2024-07-15 16:10:19.115675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.283 [2024-07-15 16:10:19.115707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.283 qpair failed and we were unable to recover it. 00:28:50.283 [2024-07-15 16:10:19.116064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.283 [2024-07-15 16:10:19.116095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.283 qpair failed and we were unable to recover it. 00:28:50.283 [2024-07-15 16:10:19.116421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.283 [2024-07-15 16:10:19.116455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.283 qpair failed and we were unable to recover it. 00:28:50.283 [2024-07-15 16:10:19.116634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.283 [2024-07-15 16:10:19.116666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.283 qpair failed and we were unable to recover it. 00:28:50.283 [2024-07-15 16:10:19.116850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.283 [2024-07-15 16:10:19.116882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.283 qpair failed and we were unable to recover it. 00:28:50.283 [2024-07-15 16:10:19.117194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.283 [2024-07-15 16:10:19.117235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.283 qpair failed and we were unable to recover it. 00:28:50.283 [2024-07-15 16:10:19.117516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.283 [2024-07-15 16:10:19.117548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.283 qpair failed and we were unable to recover it. 
00:28:50.283 [2024-07-15 16:10:19.117712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.283 [2024-07-15 16:10:19.117745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.283 qpair failed and we were unable to recover it. 00:28:50.283 [2024-07-15 16:10:19.118082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.283 [2024-07-15 16:10:19.118114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.283 qpair failed and we were unable to recover it. 00:28:50.283 [2024-07-15 16:10:19.118332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.283 [2024-07-15 16:10:19.118364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.283 qpair failed and we were unable to recover it. 00:28:50.283 [2024-07-15 16:10:19.118534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.283 [2024-07-15 16:10:19.118568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.283 qpair failed and we were unable to recover it. 00:28:50.283 [2024-07-15 16:10:19.118824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.283 [2024-07-15 16:10:19.118857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.283 qpair failed and we were unable to recover it. 00:28:50.283 [2024-07-15 16:10:19.119090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.283 [2024-07-15 16:10:19.119123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.283 qpair failed and we were unable to recover it. 00:28:50.283 [2024-07-15 16:10:19.119407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.283 [2024-07-15 16:10:19.119424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.283 qpair failed and we were unable to recover it. 00:28:50.283 [2024-07-15 16:10:19.119626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.283 [2024-07-15 16:10:19.119659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.283 qpair failed and we were unable to recover it. 00:28:50.283 [2024-07-15 16:10:19.119973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.283 [2024-07-15 16:10:19.120006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.283 qpair failed and we were unable to recover it. 00:28:50.283 [2024-07-15 16:10:19.120325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.283 [2024-07-15 16:10:19.120343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.283 qpair failed and we were unable to recover it. 
00:28:50.283 [2024-07-15 16:10:19.120482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.283 [2024-07-15 16:10:19.120499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.283 qpair failed and we were unable to recover it. 00:28:50.283 [2024-07-15 16:10:19.120695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.283 [2024-07-15 16:10:19.120727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.283 qpair failed and we were unable to recover it. 00:28:50.283 [2024-07-15 16:10:19.121077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.283 [2024-07-15 16:10:19.121109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.283 qpair failed and we were unable to recover it. 00:28:50.283 [2024-07-15 16:10:19.121280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.283 [2024-07-15 16:10:19.121314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.283 qpair failed and we were unable to recover it. 00:28:50.283 [2024-07-15 16:10:19.121526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.283 [2024-07-15 16:10:19.121559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.283 qpair failed and we were unable to recover it. 00:28:50.283 [2024-07-15 16:10:19.121843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.283 [2024-07-15 16:10:19.121875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.283 qpair failed and we were unable to recover it. 00:28:50.283 [2024-07-15 16:10:19.122089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.283 [2024-07-15 16:10:19.122126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.283 qpair failed and we were unable to recover it. 00:28:50.283 [2024-07-15 16:10:19.122439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.283 [2024-07-15 16:10:19.122473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.283 qpair failed and we were unable to recover it. 00:28:50.283 [2024-07-15 16:10:19.122752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.283 [2024-07-15 16:10:19.122785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.283 qpair failed and we were unable to recover it. 00:28:50.283 [2024-07-15 16:10:19.123033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.283 [2024-07-15 16:10:19.123066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.283 qpair failed and we were unable to recover it. 
00:28:50.283 [2024-07-15 16:10:19.123283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.283 [2024-07-15 16:10:19.123316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.283 qpair failed and we were unable to recover it. 00:28:50.283 [2024-07-15 16:10:19.123618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.283 [2024-07-15 16:10:19.123636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.283 qpair failed and we were unable to recover it. 00:28:50.283 [2024-07-15 16:10:19.123913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.283 [2024-07-15 16:10:19.123929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.283 qpair failed and we were unable to recover it. 00:28:50.283 [2024-07-15 16:10:19.124202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.283 [2024-07-15 16:10:19.124247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.283 qpair failed and we were unable to recover it. 00:28:50.283 [2024-07-15 16:10:19.124411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.283 [2024-07-15 16:10:19.124444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.283 qpair failed and we were unable to recover it. 00:28:50.283 [2024-07-15 16:10:19.124721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.284 [2024-07-15 16:10:19.124754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.284 qpair failed and we were unable to recover it. 00:28:50.284 [2024-07-15 16:10:19.124964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.284 [2024-07-15 16:10:19.124996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.284 qpair failed and we were unable to recover it. 00:28:50.284 [2024-07-15 16:10:19.125236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.284 [2024-07-15 16:10:19.125269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.284 qpair failed and we were unable to recover it. 00:28:50.284 [2024-07-15 16:10:19.125572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.284 [2024-07-15 16:10:19.125605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.284 qpair failed and we were unable to recover it. 00:28:50.284 [2024-07-15 16:10:19.125934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.284 [2024-07-15 16:10:19.125966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.284 qpair failed and we were unable to recover it. 
00:28:50.284 [2024-07-15 16:10:19.126136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.284 [2024-07-15 16:10:19.126153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.284 qpair failed and we were unable to recover it. 00:28:50.284 [2024-07-15 16:10:19.126342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.284 [2024-07-15 16:10:19.126360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.284 qpair failed and we were unable to recover it. 00:28:50.284 [2024-07-15 16:10:19.126556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.284 [2024-07-15 16:10:19.126573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.284 qpair failed and we were unable to recover it. 00:28:50.284 [2024-07-15 16:10:19.126690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.284 [2024-07-15 16:10:19.126707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.284 qpair failed and we were unable to recover it. 00:28:50.284 [2024-07-15 16:10:19.126948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.284 [2024-07-15 16:10:19.126980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.284 qpair failed and we were unable to recover it. 00:28:50.284 [2024-07-15 16:10:19.127286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.284 [2024-07-15 16:10:19.127321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.284 qpair failed and we were unable to recover it. 00:28:50.284 [2024-07-15 16:10:19.127619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.284 [2024-07-15 16:10:19.127652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.284 qpair failed and we were unable to recover it. 00:28:50.284 [2024-07-15 16:10:19.127901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.284 [2024-07-15 16:10:19.127934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.284 qpair failed and we were unable to recover it. 00:28:50.284 [2024-07-15 16:10:19.128275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.284 [2024-07-15 16:10:19.128307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.284 qpair failed and we were unable to recover it. 00:28:50.284 [2024-07-15 16:10:19.128588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.284 [2024-07-15 16:10:19.128621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.284 qpair failed and we were unable to recover it. 
00:28:50.284 [2024-07-15 16:10:19.128789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.284 [2024-07-15 16:10:19.128822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.284 qpair failed and we were unable to recover it. 00:28:50.284 [2024-07-15 16:10:19.129126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.284 [2024-07-15 16:10:19.129158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.284 qpair failed and we were unable to recover it. 00:28:50.284 [2024-07-15 16:10:19.129469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.284 [2024-07-15 16:10:19.129504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.284 qpair failed and we were unable to recover it. 00:28:50.284 [2024-07-15 16:10:19.129740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.284 [2024-07-15 16:10:19.129774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.284 qpair failed and we were unable to recover it. 00:28:50.284 [2024-07-15 16:10:19.130041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.284 [2024-07-15 16:10:19.130073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.284 qpair failed and we were unable to recover it. 00:28:50.284 [2024-07-15 16:10:19.130286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.284 [2024-07-15 16:10:19.130303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.284 qpair failed and we were unable to recover it. 00:28:50.284 [2024-07-15 16:10:19.130552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.284 [2024-07-15 16:10:19.130589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.284 qpair failed and we were unable to recover it. 00:28:50.284 [2024-07-15 16:10:19.130796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.284 [2024-07-15 16:10:19.130829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.284 qpair failed and we were unable to recover it. 00:28:50.284 [2024-07-15 16:10:19.131123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.284 [2024-07-15 16:10:19.131155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.284 qpair failed and we were unable to recover it. 00:28:50.284 [2024-07-15 16:10:19.131432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.284 [2024-07-15 16:10:19.131450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.284 qpair failed and we were unable to recover it. 
00:28:50.284 [2024-07-15 16:10:19.131638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.284 [2024-07-15 16:10:19.131654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.284 qpair failed and we were unable to recover it. 00:28:50.284 [2024-07-15 16:10:19.131843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.284 [2024-07-15 16:10:19.131859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.284 qpair failed and we were unable to recover it. 00:28:50.284 [2024-07-15 16:10:19.132056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.284 [2024-07-15 16:10:19.132072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.284 qpair failed and we were unable to recover it. 00:28:50.284 [2024-07-15 16:10:19.132251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.284 [2024-07-15 16:10:19.132269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.284 qpair failed and we were unable to recover it. 00:28:50.284 [2024-07-15 16:10:19.132416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.284 [2024-07-15 16:10:19.132448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.284 qpair failed and we were unable to recover it. 00:28:50.284 [2024-07-15 16:10:19.132608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.284 [2024-07-15 16:10:19.132639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.284 qpair failed and we were unable to recover it. 00:28:50.284 [2024-07-15 16:10:19.132905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.284 [2024-07-15 16:10:19.132947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.284 qpair failed and we were unable to recover it. 00:28:50.284 [2024-07-15 16:10:19.133175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.284 [2024-07-15 16:10:19.133208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.284 qpair failed and we were unable to recover it. 00:28:50.284 [2024-07-15 16:10:19.133432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.284 [2024-07-15 16:10:19.133466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.284 qpair failed and we were unable to recover it. 00:28:50.284 [2024-07-15 16:10:19.133641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.284 [2024-07-15 16:10:19.133659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.284 qpair failed and we were unable to recover it. 
00:28:50.284 [2024-07-15 16:10:19.133954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.284 [2024-07-15 16:10:19.133986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.284 qpair failed and we were unable to recover it. 00:28:50.284 [2024-07-15 16:10:19.134233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.284 [2024-07-15 16:10:19.134266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.284 qpair failed and we were unable to recover it. 00:28:50.284 [2024-07-15 16:10:19.134503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.284 [2024-07-15 16:10:19.134535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.284 qpair failed and we were unable to recover it. 00:28:50.284 [2024-07-15 16:10:19.134720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.284 [2024-07-15 16:10:19.134735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.284 qpair failed and we were unable to recover it. 00:28:50.284 [2024-07-15 16:10:19.134956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.284 [2024-07-15 16:10:19.134984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.284 qpair failed and we were unable to recover it. 00:28:50.284 [2024-07-15 16:10:19.135275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.284 [2024-07-15 16:10:19.135292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.284 qpair failed and we were unable to recover it. 00:28:50.284 [2024-07-15 16:10:19.135488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.285 [2024-07-15 16:10:19.135505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.285 qpair failed and we were unable to recover it. 00:28:50.285 [2024-07-15 16:10:19.135705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.285 [2024-07-15 16:10:19.135737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.285 qpair failed and we were unable to recover it. 00:28:50.285 [2024-07-15 16:10:19.136086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.285 [2024-07-15 16:10:19.136119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.285 qpair failed and we were unable to recover it. 00:28:50.285 [2024-07-15 16:10:19.136381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.285 [2024-07-15 16:10:19.136415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.285 qpair failed and we were unable to recover it. 
00:28:50.285 [2024-07-15 16:10:19.136579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.285 [2024-07-15 16:10:19.136611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.285 qpair failed and we were unable to recover it. 00:28:50.285 [2024-07-15 16:10:19.136849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.285 [2024-07-15 16:10:19.136889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.285 qpair failed and we were unable to recover it. 00:28:50.285 [2024-07-15 16:10:19.137104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.285 [2024-07-15 16:10:19.137137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.285 qpair failed and we were unable to recover it. 00:28:50.285 [2024-07-15 16:10:19.137352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.285 [2024-07-15 16:10:19.137370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.285 qpair failed and we were unable to recover it. 00:28:50.285 [2024-07-15 16:10:19.138936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.285 [2024-07-15 16:10:19.138974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.285 qpair failed and we were unable to recover it. 00:28:50.285 [2024-07-15 16:10:19.139246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.285 [2024-07-15 16:10:19.139266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.285 qpair failed and we were unable to recover it. 00:28:50.285 [2024-07-15 16:10:19.139410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.285 [2024-07-15 16:10:19.139428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.285 qpair failed and we were unable to recover it. 00:28:50.285 [2024-07-15 16:10:19.139621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.285 [2024-07-15 16:10:19.139654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.285 qpair failed and we were unable to recover it. 00:28:50.285 [2024-07-15 16:10:19.141199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.285 [2024-07-15 16:10:19.141245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.285 qpair failed and we were unable to recover it. 00:28:50.285 [2024-07-15 16:10:19.141531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.285 [2024-07-15 16:10:19.141550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.285 qpair failed and we were unable to recover it. 
00:28:50.285 [2024-07-15 16:10:19.141788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.285 [2024-07-15 16:10:19.141806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.285 qpair failed and we were unable to recover it. 00:28:50.285 [2024-07-15 16:10:19.141999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.285 [2024-07-15 16:10:19.142017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.285 qpair failed and we were unable to recover it. 00:28:50.285 [2024-07-15 16:10:19.142161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.285 [2024-07-15 16:10:19.142178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.285 qpair failed and we were unable to recover it. 00:28:50.285 [2024-07-15 16:10:19.142429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.285 [2024-07-15 16:10:19.142463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.285 qpair failed and we were unable to recover it. 00:28:50.285 [2024-07-15 16:10:19.142698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.285 [2024-07-15 16:10:19.142731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.285 qpair failed and we were unable to recover it. 00:28:50.285 [2024-07-15 16:10:19.142993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.285 [2024-07-15 16:10:19.143027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.285 qpair failed and we were unable to recover it. 00:28:50.285 [2024-07-15 16:10:19.143310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.285 [2024-07-15 16:10:19.143345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.285 qpair failed and we were unable to recover it. 00:28:50.285 [2024-07-15 16:10:19.143563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.285 [2024-07-15 16:10:19.143596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.285 qpair failed and we were unable to recover it. 00:28:50.285 [2024-07-15 16:10:19.143751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.285 [2024-07-15 16:10:19.143784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.285 qpair failed and we were unable to recover it. 00:28:50.285 [2024-07-15 16:10:19.144013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.285 [2024-07-15 16:10:19.144046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.285 qpair failed and we were unable to recover it. 
00:28:50.285 [2024-07-15 16:10:19.144311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.285 [2024-07-15 16:10:19.144345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.285 qpair failed and we were unable to recover it. 00:28:50.285 [2024-07-15 16:10:19.144530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.285 [2024-07-15 16:10:19.144562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.285 qpair failed and we were unable to recover it. 00:28:50.285 [2024-07-15 16:10:19.144745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.285 [2024-07-15 16:10:19.144762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.285 qpair failed and we were unable to recover it. 00:28:50.285 [2024-07-15 16:10:19.145065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.285 [2024-07-15 16:10:19.145097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.285 qpair failed and we were unable to recover it. 00:28:50.285 [2024-07-15 16:10:19.145445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.285 [2024-07-15 16:10:19.145490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.285 qpair failed and we were unable to recover it. 00:28:50.285 [2024-07-15 16:10:19.145671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.285 [2024-07-15 16:10:19.145688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.285 qpair failed and we were unable to recover it. 00:28:50.285 [2024-07-15 16:10:19.145964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.285 [2024-07-15 16:10:19.146003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.285 qpair failed and we were unable to recover it. 00:28:50.285 [2024-07-15 16:10:19.146242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.285 [2024-07-15 16:10:19.146280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.285 qpair failed and we were unable to recover it. 00:28:50.285 [2024-07-15 16:10:19.146460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.285 [2024-07-15 16:10:19.146476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.285 qpair failed and we were unable to recover it. 00:28:50.285 [2024-07-15 16:10:19.146725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.285 [2024-07-15 16:10:19.146758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.285 qpair failed and we were unable to recover it. 
00:28:50.285 [2024-07-15 16:10:19.147047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.285 [2024-07-15 16:10:19.147080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.285 qpair failed and we were unable to recover it. 00:28:50.285 [2024-07-15 16:10:19.147307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.285 [2024-07-15 16:10:19.147326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.285 qpair failed and we were unable to recover it. 00:28:50.285 [2024-07-15 16:10:19.147467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.285 [2024-07-15 16:10:19.147484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.285 qpair failed and we were unable to recover it. 00:28:50.285 [2024-07-15 16:10:19.147608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.285 [2024-07-15 16:10:19.147625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.285 qpair failed and we were unable to recover it. 00:28:50.285 [2024-07-15 16:10:19.147827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.285 [2024-07-15 16:10:19.147860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.285 qpair failed and we were unable to recover it. 00:28:50.285 [2024-07-15 16:10:19.148091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.285 [2024-07-15 16:10:19.148129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.285 qpair failed and we were unable to recover it. 00:28:50.285 [2024-07-15 16:10:19.148381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.285 [2024-07-15 16:10:19.148415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.285 qpair failed and we were unable to recover it. 00:28:50.286 [2024-07-15 16:10:19.148731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.286 [2024-07-15 16:10:19.148764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.286 qpair failed and we were unable to recover it. 00:28:50.286 [2024-07-15 16:10:19.148921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.286 [2024-07-15 16:10:19.148953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.286 qpair failed and we were unable to recover it. 00:28:50.286 [2024-07-15 16:10:19.149273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.286 [2024-07-15 16:10:19.149308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.286 qpair failed and we were unable to recover it. 
00:28:50.286 [2024-07-15 16:10:19.149475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.286 [2024-07-15 16:10:19.149492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.286 qpair failed and we were unable to recover it. 00:28:50.286 [2024-07-15 16:10:19.149628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.286 [2024-07-15 16:10:19.149645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.286 qpair failed and we were unable to recover it. 00:28:50.286 [2024-07-15 16:10:19.149771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.286 [2024-07-15 16:10:19.149789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.286 qpair failed and we were unable to recover it. 00:28:50.286 [2024-07-15 16:10:19.149982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.286 [2024-07-15 16:10:19.150019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.286 qpair failed and we were unable to recover it. 00:28:50.286 [2024-07-15 16:10:19.150205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.286 [2024-07-15 16:10:19.150223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.286 qpair failed and we were unable to recover it. 00:28:50.286 [2024-07-15 16:10:19.150436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.286 [2024-07-15 16:10:19.150453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.286 qpair failed and we were unable to recover it. 00:28:50.286 [2024-07-15 16:10:19.150641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.286 [2024-07-15 16:10:19.150688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.286 qpair failed and we were unable to recover it. 00:28:50.286 [2024-07-15 16:10:19.150862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.286 [2024-07-15 16:10:19.150895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.286 qpair failed and we were unable to recover it. 00:28:50.286 [2024-07-15 16:10:19.151136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.286 [2024-07-15 16:10:19.151168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.286 qpair failed and we were unable to recover it. 00:28:50.286 [2024-07-15 16:10:19.151403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.286 [2024-07-15 16:10:19.151421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.286 qpair failed and we were unable to recover it. 
00:28:50.286 [2024-07-15 16:10:19.151558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.286 [2024-07-15 16:10:19.151590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.286 qpair failed and we were unable to recover it. 00:28:50.286 [2024-07-15 16:10:19.151819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.286 [2024-07-15 16:10:19.151851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.286 qpair failed and we were unable to recover it. 00:28:50.286 [2024-07-15 16:10:19.152070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.286 [2024-07-15 16:10:19.152103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.286 qpair failed and we were unable to recover it. 00:28:50.286 [2024-07-15 16:10:19.152279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.286 [2024-07-15 16:10:19.152314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.286 qpair failed and we were unable to recover it. 00:28:50.286 [2024-07-15 16:10:19.152613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.286 [2024-07-15 16:10:19.152654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.286 qpair failed and we were unable to recover it. 00:28:50.286 [2024-07-15 16:10:19.152808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.286 [2024-07-15 16:10:19.152825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.286 qpair failed and we were unable to recover it. 00:28:50.286 [2024-07-15 16:10:19.153098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.286 [2024-07-15 16:10:19.153131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.286 qpair failed and we were unable to recover it. 00:28:50.286 [2024-07-15 16:10:19.153300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.286 [2024-07-15 16:10:19.153317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.286 qpair failed and we were unable to recover it. 00:28:50.286 [2024-07-15 16:10:19.153468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.286 [2024-07-15 16:10:19.153486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.286 qpair failed and we were unable to recover it. 00:28:50.286 [2024-07-15 16:10:19.153638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.286 [2024-07-15 16:10:19.153654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.286 qpair failed and we were unable to recover it. 
00:28:50.286 [2024-07-15 16:10:19.153839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.286 [2024-07-15 16:10:19.153871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.286 qpair failed and we were unable to recover it. 00:28:50.286 [2024-07-15 16:10:19.154098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.286 [2024-07-15 16:10:19.154130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.286 qpair failed and we were unable to recover it. 00:28:50.286 [2024-07-15 16:10:19.154344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.286 [2024-07-15 16:10:19.154361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.286 qpair failed and we were unable to recover it. 00:28:50.286 [2024-07-15 16:10:19.154554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.286 [2024-07-15 16:10:19.154572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.286 qpair failed and we were unable to recover it. 00:28:50.286 [2024-07-15 16:10:19.154767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.286 [2024-07-15 16:10:19.154783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.286 qpair failed and we were unable to recover it. 00:28:50.286 [2024-07-15 16:10:19.155095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.286 [2024-07-15 16:10:19.155128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.286 qpair failed and we were unable to recover it. 00:28:50.286 [2024-07-15 16:10:19.155437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.286 [2024-07-15 16:10:19.155476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.286 qpair failed and we were unable to recover it. 00:28:50.286 [2024-07-15 16:10:19.155660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.286 [2024-07-15 16:10:19.155692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.286 qpair failed and we were unable to recover it. 00:28:50.286 [2024-07-15 16:10:19.155961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.286 [2024-07-15 16:10:19.155994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.286 qpair failed and we were unable to recover it. 00:28:50.286 [2024-07-15 16:10:19.156158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.286 [2024-07-15 16:10:19.156191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.286 qpair failed and we were unable to recover it. 
00:28:50.286 [2024-07-15 16:10:19.156480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.286 [2024-07-15 16:10:19.156498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.286 qpair failed and we were unable to recover it. 00:28:50.286 [2024-07-15 16:10:19.156633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.286 [2024-07-15 16:10:19.156650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.286 qpair failed and we were unable to recover it. 00:28:50.286 [2024-07-15 16:10:19.156853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.287 [2024-07-15 16:10:19.156871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.287 qpair failed and we were unable to recover it. 00:28:50.287 [2024-07-15 16:10:19.156982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.287 [2024-07-15 16:10:19.156999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.287 qpair failed and we were unable to recover it. 00:28:50.287 [2024-07-15 16:10:19.157276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.287 [2024-07-15 16:10:19.157295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.287 qpair failed and we were unable to recover it. 00:28:50.287 [2024-07-15 16:10:19.157487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.287 [2024-07-15 16:10:19.157504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.287 qpair failed and we were unable to recover it. 00:28:50.287 [2024-07-15 16:10:19.157647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.287 [2024-07-15 16:10:19.157664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.287 qpair failed and we were unable to recover it. 00:28:50.287 [2024-07-15 16:10:19.157954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.287 [2024-07-15 16:10:19.157987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.287 qpair failed and we were unable to recover it. 00:28:50.287 [2024-07-15 16:10:19.158152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.287 [2024-07-15 16:10:19.158184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.287 qpair failed and we were unable to recover it. 00:28:50.287 [2024-07-15 16:10:19.158494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.287 [2024-07-15 16:10:19.158511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.287 qpair failed and we were unable to recover it. 
00:28:50.287 [2024-07-15 16:10:19.158695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.287 [2024-07-15 16:10:19.158712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.287 qpair failed and we were unable to recover it. 00:28:50.287 [2024-07-15 16:10:19.158949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.287 [2024-07-15 16:10:19.158980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.287 qpair failed and we were unable to recover it. 00:28:50.287 [2024-07-15 16:10:19.159155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.287 [2024-07-15 16:10:19.159187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.287 qpair failed and we were unable to recover it. 00:28:50.287 [2024-07-15 16:10:19.159456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.287 [2024-07-15 16:10:19.159474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.287 qpair failed and we were unable to recover it. 00:28:50.287 [2024-07-15 16:10:19.159676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.287 [2024-07-15 16:10:19.159709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.287 qpair failed and we were unable to recover it. 00:28:50.287 [2024-07-15 16:10:19.159872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.287 [2024-07-15 16:10:19.159904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.287 qpair failed and we were unable to recover it. 00:28:50.287 [2024-07-15 16:10:19.160164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.287 [2024-07-15 16:10:19.160196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.287 qpair failed and we were unable to recover it. 00:28:50.287 [2024-07-15 16:10:19.160399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.287 [2024-07-15 16:10:19.160432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.287 qpair failed and we were unable to recover it. 00:28:50.287 [2024-07-15 16:10:19.160711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.287 [2024-07-15 16:10:19.160728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.287 qpair failed and we were unable to recover it. 00:28:50.287 [2024-07-15 16:10:19.160976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.287 [2024-07-15 16:10:19.160993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.287 qpair failed and we were unable to recover it. 
00:28:50.287 [2024-07-15 16:10:19.161272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.287 [2024-07-15 16:10:19.161312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.287 qpair failed and we were unable to recover it. 00:28:50.287 [2024-07-15 16:10:19.161518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.287 [2024-07-15 16:10:19.161536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.287 qpair failed and we were unable to recover it. 00:28:50.287 [2024-07-15 16:10:19.161751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.287 [2024-07-15 16:10:19.161783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.287 qpair failed and we were unable to recover it. 00:28:50.287 [2024-07-15 16:10:19.162012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.287 [2024-07-15 16:10:19.162049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.287 qpair failed and we were unable to recover it. 00:28:50.287 [2024-07-15 16:10:19.162368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.287 [2024-07-15 16:10:19.162403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.287 qpair failed and we were unable to recover it. 00:28:50.287 [2024-07-15 16:10:19.162685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.287 [2024-07-15 16:10:19.162726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.287 qpair failed and we were unable to recover it. 00:28:50.287 [2024-07-15 16:10:19.162968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.287 [2024-07-15 16:10:19.162986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.287 qpair failed and we were unable to recover it. 00:28:50.287 [2024-07-15 16:10:19.163094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.287 [2024-07-15 16:10:19.163111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.287 qpair failed and we were unable to recover it. 00:28:50.287 [2024-07-15 16:10:19.163242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.287 [2024-07-15 16:10:19.163260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.287 qpair failed and we were unable to recover it. 00:28:50.287 [2024-07-15 16:10:19.163448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.287 [2024-07-15 16:10:19.163464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.287 qpair failed and we were unable to recover it. 
00:28:50.287 [2024-07-15 16:10:19.164607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.287 [2024-07-15 16:10:19.164644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.287 qpair failed and we were unable to recover it. 00:28:50.287 [2024-07-15 16:10:19.164881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.287 [2024-07-15 16:10:19.164898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.287 qpair failed and we were unable to recover it. 00:28:50.287 [2024-07-15 16:10:19.165170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.287 [2024-07-15 16:10:19.165187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.287 qpair failed and we were unable to recover it. 00:28:50.287 [2024-07-15 16:10:19.165456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.287 [2024-07-15 16:10:19.165474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.287 qpair failed and we were unable to recover it. 00:28:50.287 [2024-07-15 16:10:19.165659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.287 [2024-07-15 16:10:19.165677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.287 qpair failed and we were unable to recover it. 00:28:50.287 [2024-07-15 16:10:19.165898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.287 [2024-07-15 16:10:19.165918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.287 qpair failed and we were unable to recover it. 00:28:50.287 [2024-07-15 16:10:19.166171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.287 [2024-07-15 16:10:19.166203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.287 qpair failed and we were unable to recover it. 00:28:50.287 [2024-07-15 16:10:19.166556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.287 [2024-07-15 16:10:19.166573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.287 qpair failed and we were unable to recover it. 00:28:50.287 [2024-07-15 16:10:19.166779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.287 [2024-07-15 16:10:19.166797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.287 qpair failed and we were unable to recover it. 00:28:50.287 [2024-07-15 16:10:19.167029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.287 [2024-07-15 16:10:19.167047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.287 qpair failed and we were unable to recover it. 
00:28:50.287 [2024-07-15 16:10:19.167161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.287 [2024-07-15 16:10:19.167178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.287 qpair failed and we were unable to recover it. 00:28:50.287 [2024-07-15 16:10:19.167414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.287 [2024-07-15 16:10:19.167433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.287 qpair failed and we were unable to recover it. 00:28:50.287 [2024-07-15 16:10:19.167649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.287 [2024-07-15 16:10:19.167696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.287 qpair failed and we were unable to recover it. 00:28:50.287 [2024-07-15 16:10:19.168008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.288 [2024-07-15 16:10:19.168041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.288 qpair failed and we were unable to recover it. 00:28:50.288 [2024-07-15 16:10:19.168259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.288 [2024-07-15 16:10:19.168292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.288 qpair failed and we were unable to recover it. 00:28:50.288 [2024-07-15 16:10:19.168548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.288 [2024-07-15 16:10:19.168566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.288 qpair failed and we were unable to recover it. 00:28:50.288 [2024-07-15 16:10:19.168696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.288 [2024-07-15 16:10:19.168714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.288 qpair failed and we were unable to recover it. 00:28:50.288 [2024-07-15 16:10:19.168951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.288 [2024-07-15 16:10:19.168968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.288 qpair failed and we were unable to recover it. 00:28:50.288 [2024-07-15 16:10:19.169169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.288 [2024-07-15 16:10:19.169203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.288 qpair failed and we were unable to recover it. 00:28:50.288 [2024-07-15 16:10:19.169404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.288 [2024-07-15 16:10:19.169439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.288 qpair failed and we were unable to recover it. 
00:28:50.288 [2024-07-15 16:10:19.169731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.288 [2024-07-15 16:10:19.169764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.288 qpair failed and we were unable to recover it. 00:28:50.288 [2024-07-15 16:10:19.169990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.288 [2024-07-15 16:10:19.170027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.288 qpair failed and we were unable to recover it. 00:28:50.288 [2024-07-15 16:10:19.170272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.288 [2024-07-15 16:10:19.170307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.288 qpair failed and we were unable to recover it. 00:28:50.288 [2024-07-15 16:10:19.170639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.288 [2024-07-15 16:10:19.170674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.288 qpair failed and we were unable to recover it. 00:28:50.288 [2024-07-15 16:10:19.170987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.288 [2024-07-15 16:10:19.171020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.288 qpair failed and we were unable to recover it. 00:28:50.288 [2024-07-15 16:10:19.172374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.288 [2024-07-15 16:10:19.172410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.288 qpair failed and we were unable to recover it. 00:28:50.288 [2024-07-15 16:10:19.172639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.288 [2024-07-15 16:10:19.172657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.288 qpair failed and we were unable to recover it. 00:28:50.288 [2024-07-15 16:10:19.172849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.288 [2024-07-15 16:10:19.172866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.288 qpair failed and we were unable to recover it. 00:28:50.288 [2024-07-15 16:10:19.173114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.288 [2024-07-15 16:10:19.173131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.288 qpair failed and we were unable to recover it. 00:28:50.288 [2024-07-15 16:10:19.173406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.288 [2024-07-15 16:10:19.173425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.288 qpair failed and we were unable to recover it. 
00:28:50.288 [2024-07-15 16:10:19.173603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.288 [2024-07-15 16:10:19.173620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.288 qpair failed and we were unable to recover it. 00:28:50.288 [2024-07-15 16:10:19.173811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.288 [2024-07-15 16:10:19.173828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.288 qpair failed and we were unable to recover it. 00:28:50.288 [2024-07-15 16:10:19.174071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.288 [2024-07-15 16:10:19.174088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.288 qpair failed and we were unable to recover it. 00:28:50.288 [2024-07-15 16:10:19.174292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.288 [2024-07-15 16:10:19.174336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.288 qpair failed and we were unable to recover it. 00:28:50.288 [2024-07-15 16:10:19.174528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.288 [2024-07-15 16:10:19.174561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.288 qpair failed and we were unable to recover it. 00:28:50.288 [2024-07-15 16:10:19.174760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.288 [2024-07-15 16:10:19.174778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.288 qpair failed and we were unable to recover it. 00:28:50.288 [2024-07-15 16:10:19.174979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.288 [2024-07-15 16:10:19.174996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.288 qpair failed and we were unable to recover it. 00:28:50.288 [2024-07-15 16:10:19.175193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.288 [2024-07-15 16:10:19.175210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.288 qpair failed and we were unable to recover it. 00:28:50.288 [2024-07-15 16:10:19.175463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.288 [2024-07-15 16:10:19.175481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.288 qpair failed and we were unable to recover it. 00:28:50.288 [2024-07-15 16:10:19.175780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.288 [2024-07-15 16:10:19.175797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.288 qpair failed and we were unable to recover it. 
00:28:50.288 [2024-07-15 16:10:19.175997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.288 [2024-07-15 16:10:19.176014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.288 qpair failed and we were unable to recover it. 00:28:50.288 [2024-07-15 16:10:19.176198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.288 [2024-07-15 16:10:19.176216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.288 qpair failed and we were unable to recover it. 00:28:50.288 [2024-07-15 16:10:19.176416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.288 [2024-07-15 16:10:19.176450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.288 qpair failed and we were unable to recover it. 00:28:50.288 [2024-07-15 16:10:19.176602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.288 [2024-07-15 16:10:19.176636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.288 qpair failed and we were unable to recover it. 00:28:50.288 [2024-07-15 16:10:19.176916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.288 [2024-07-15 16:10:19.176949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.288 qpair failed and we were unable to recover it. 00:28:50.288 [2024-07-15 16:10:19.177274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.288 [2024-07-15 16:10:19.177309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.288 qpair failed and we were unable to recover it. 00:28:50.565 [2024-07-15 16:10:19.177530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.565 [2024-07-15 16:10:19.177562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.565 qpair failed and we were unable to recover it. 00:28:50.565 [2024-07-15 16:10:19.177852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.565 [2024-07-15 16:10:19.177869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.565 qpair failed and we were unable to recover it. 00:28:50.565 [2024-07-15 16:10:19.178007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.565 [2024-07-15 16:10:19.178023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.565 qpair failed and we were unable to recover it. 00:28:50.565 [2024-07-15 16:10:19.178285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.565 [2024-07-15 16:10:19.178303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.565 qpair failed and we were unable to recover it. 
00:28:50.565 [2024-07-15 16:10:19.178553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.565 [2024-07-15 16:10:19.178570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.565 qpair failed and we were unable to recover it. 00:28:50.565 [2024-07-15 16:10:19.178740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.565 [2024-07-15 16:10:19.178757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.565 qpair failed and we were unable to recover it. 00:28:50.565 [2024-07-15 16:10:19.179049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.565 [2024-07-15 16:10:19.179066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.566 qpair failed and we were unable to recover it. 00:28:50.566 [2024-07-15 16:10:19.179189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.566 [2024-07-15 16:10:19.179206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.566 qpair failed and we were unable to recover it. 00:28:50.566 [2024-07-15 16:10:19.179462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.566 [2024-07-15 16:10:19.179480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.566 qpair failed and we were unable to recover it. 00:28:50.566 [2024-07-15 16:10:19.179615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.566 [2024-07-15 16:10:19.179630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.566 qpair failed and we were unable to recover it. 00:28:50.566 [2024-07-15 16:10:19.179926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.566 [2024-07-15 16:10:19.179942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.566 qpair failed and we were unable to recover it. 00:28:50.566 [2024-07-15 16:10:19.180126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.566 [2024-07-15 16:10:19.180142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.566 qpair failed and we were unable to recover it. 00:28:50.566 [2024-07-15 16:10:19.180339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.566 [2024-07-15 16:10:19.180356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.566 qpair failed and we were unable to recover it. 00:28:50.566 [2024-07-15 16:10:19.180481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.566 [2024-07-15 16:10:19.180498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.566 qpair failed and we were unable to recover it. 
00:28:50.566 [2024-07-15 16:10:19.180763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.566 [2024-07-15 16:10:19.180781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.566 qpair failed and we were unable to recover it. 00:28:50.566 [2024-07-15 16:10:19.180971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.566 [2024-07-15 16:10:19.180988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.566 qpair failed and we were unable to recover it. 00:28:50.566 [2024-07-15 16:10:19.181272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.566 [2024-07-15 16:10:19.181290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.566 qpair failed and we were unable to recover it. 00:28:50.566 [2024-07-15 16:10:19.181463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.566 [2024-07-15 16:10:19.181482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.566 qpair failed and we were unable to recover it. 00:28:50.566 [2024-07-15 16:10:19.181668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.566 [2024-07-15 16:10:19.181700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.566 qpair failed and we were unable to recover it. 00:28:50.566 [2024-07-15 16:10:19.181962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.566 [2024-07-15 16:10:19.181994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.566 qpair failed and we were unable to recover it. 00:28:50.566 [2024-07-15 16:10:19.182277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.566 [2024-07-15 16:10:19.182296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.566 qpair failed and we were unable to recover it. 00:28:50.566 [2024-07-15 16:10:19.182543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.566 [2024-07-15 16:10:19.182579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.566 qpair failed and we were unable to recover it. 00:28:50.566 [2024-07-15 16:10:19.182748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.566 [2024-07-15 16:10:19.182780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.566 qpair failed and we were unable to recover it. 00:28:50.566 [2024-07-15 16:10:19.183007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.566 [2024-07-15 16:10:19.183040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.566 qpair failed and we were unable to recover it. 
00:28:50.566 [2024-07-15 16:10:19.183286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.566 [2024-07-15 16:10:19.183320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.566 qpair failed and we were unable to recover it. 00:28:50.566 [2024-07-15 16:10:19.183536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.566 [2024-07-15 16:10:19.183568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.566 qpair failed and we were unable to recover it. 00:28:50.566 [2024-07-15 16:10:19.183782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.566 [2024-07-15 16:10:19.183814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.566 qpair failed and we were unable to recover it. 00:28:50.566 [2024-07-15 16:10:19.184051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.566 [2024-07-15 16:10:19.184090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.566 qpair failed and we were unable to recover it. 00:28:50.566 [2024-07-15 16:10:19.184414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.566 [2024-07-15 16:10:19.184448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.566 qpair failed and we were unable to recover it. 00:28:50.566 [2024-07-15 16:10:19.184669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.566 [2024-07-15 16:10:19.184686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.566 qpair failed and we were unable to recover it. 00:28:50.566 [2024-07-15 16:10:19.185907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.566 [2024-07-15 16:10:19.185940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.566 qpair failed and we were unable to recover it. 00:28:50.566 [2024-07-15 16:10:19.186165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.566 [2024-07-15 16:10:19.186181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.566 qpair failed and we were unable to recover it. 00:28:50.566 [2024-07-15 16:10:19.186431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.566 [2024-07-15 16:10:19.186450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.566 qpair failed and we were unable to recover it. 00:28:50.566 [2024-07-15 16:10:19.186653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.566 [2024-07-15 16:10:19.186670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.566 qpair failed and we were unable to recover it. 
00:28:50.566 [2024-07-15 16:10:19.186803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.566 [2024-07-15 16:10:19.186820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.566 qpair failed and we were unable to recover it. 00:28:50.566 [2024-07-15 16:10:19.187136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.566 [2024-07-15 16:10:19.187169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.566 qpair failed and we were unable to recover it. 00:28:50.566 [2024-07-15 16:10:19.187474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.566 [2024-07-15 16:10:19.187508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.566 qpair failed and we were unable to recover it. 00:28:50.566 [2024-07-15 16:10:19.187677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.566 [2024-07-15 16:10:19.187709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.566 qpair failed and we were unable to recover it. 00:28:50.566 [2024-07-15 16:10:19.187874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.566 [2024-07-15 16:10:19.187905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.566 qpair failed and we were unable to recover it. 00:28:50.566 [2024-07-15 16:10:19.188146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.566 [2024-07-15 16:10:19.188178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.566 qpair failed and we were unable to recover it. 00:28:50.566 [2024-07-15 16:10:19.188482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.566 [2024-07-15 16:10:19.188516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.566 qpair failed and we were unable to recover it. 00:28:50.566 [2024-07-15 16:10:19.188741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.566 [2024-07-15 16:10:19.188774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.566 qpair failed and we were unable to recover it. 00:28:50.566 [2024-07-15 16:10:19.189062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.566 [2024-07-15 16:10:19.189094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.566 qpair failed and we were unable to recover it. 00:28:50.566 [2024-07-15 16:10:19.189317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.566 [2024-07-15 16:10:19.189351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.566 qpair failed and we were unable to recover it. 
00:28:50.566 [2024-07-15 16:10:19.189543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.566 [2024-07-15 16:10:19.189560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.566 qpair failed and we were unable to recover it. 00:28:50.566 [2024-07-15 16:10:19.189771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.566 [2024-07-15 16:10:19.189803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.566 qpair failed and we were unable to recover it. 00:28:50.566 [2024-07-15 16:10:19.190028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.566 [2024-07-15 16:10:19.190060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.567 qpair failed and we were unable to recover it. 00:28:50.567 [2024-07-15 16:10:19.190342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.567 [2024-07-15 16:10:19.190375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.567 qpair failed and we were unable to recover it. 00:28:50.567 [2024-07-15 16:10:19.190620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.567 [2024-07-15 16:10:19.190637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.567 qpair failed and we were unable to recover it. 00:28:50.567 [2024-07-15 16:10:19.191679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.567 [2024-07-15 16:10:19.191713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.567 qpair failed and we were unable to recover it. 00:28:50.567 [2024-07-15 16:10:19.191986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.567 [2024-07-15 16:10:19.192005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.567 qpair failed and we were unable to recover it. 00:28:50.567 [2024-07-15 16:10:19.192149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.567 [2024-07-15 16:10:19.192164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.567 qpair failed and we were unable to recover it. 00:28:50.567 [2024-07-15 16:10:19.193123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.567 [2024-07-15 16:10:19.193155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.567 qpair failed and we were unable to recover it. 00:28:50.567 [2024-07-15 16:10:19.193487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.567 [2024-07-15 16:10:19.193506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.567 qpair failed and we were unable to recover it. 
00:28:50.567 [2024-07-15 16:10:19.193653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.567 [2024-07-15 16:10:19.193670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.567 qpair failed and we were unable to recover it. 00:28:50.567 [2024-07-15 16:10:19.193845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.567 [2024-07-15 16:10:19.193861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.567 qpair failed and we were unable to recover it. 00:28:50.567 [2024-07-15 16:10:19.194097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.567 [2024-07-15 16:10:19.194129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.567 qpair failed and we were unable to recover it. 00:28:50.567 [2024-07-15 16:10:19.194317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.567 [2024-07-15 16:10:19.194351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.567 qpair failed and we were unable to recover it. 00:28:50.567 [2024-07-15 16:10:19.194516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.567 [2024-07-15 16:10:19.194547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.567 qpair failed and we were unable to recover it. 00:28:50.567 [2024-07-15 16:10:19.194799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.567 [2024-07-15 16:10:19.194831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.567 qpair failed and we were unable to recover it. 00:28:50.567 [2024-07-15 16:10:19.195149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.567 [2024-07-15 16:10:19.195197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.567 qpair failed and we were unable to recover it. 00:28:50.567 [2024-07-15 16:10:19.195518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.567 [2024-07-15 16:10:19.195552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.567 qpair failed and we were unable to recover it. 00:28:50.567 [2024-07-15 16:10:19.195898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.567 [2024-07-15 16:10:19.195930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.567 qpair failed and we were unable to recover it. 00:28:50.567 [2024-07-15 16:10:19.196164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.567 [2024-07-15 16:10:19.196195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.567 qpair failed and we were unable to recover it. 
00:28:50.567 [2024-07-15 16:10:19.196383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.567 [2024-07-15 16:10:19.196415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.567 qpair failed and we were unable to recover it. 00:28:50.567 [2024-07-15 16:10:19.196622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.567 [2024-07-15 16:10:19.196639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.567 qpair failed and we were unable to recover it. 00:28:50.567 [2024-07-15 16:10:19.196846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.567 [2024-07-15 16:10:19.196863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.567 qpair failed and we were unable to recover it. 00:28:50.567 [2024-07-15 16:10:19.197112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.567 [2024-07-15 16:10:19.197132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.567 qpair failed and we were unable to recover it. 00:28:50.567 [2024-07-15 16:10:19.197428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.567 [2024-07-15 16:10:19.197445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.567 qpair failed and we were unable to recover it. 00:28:50.567 [2024-07-15 16:10:19.197587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.567 [2024-07-15 16:10:19.197603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.567 qpair failed and we were unable to recover it. 00:28:50.567 [2024-07-15 16:10:19.197802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.567 [2024-07-15 16:10:19.197834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.567 qpair failed and we were unable to recover it. 00:28:50.567 [2024-07-15 16:10:19.198006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.567 [2024-07-15 16:10:19.198038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.567 qpair failed and we were unable to recover it. 00:28:50.567 [2024-07-15 16:10:19.198294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.567 [2024-07-15 16:10:19.198329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.567 qpair failed and we were unable to recover it. 00:28:50.567 [2024-07-15 16:10:19.198643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.567 [2024-07-15 16:10:19.198660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.567 qpair failed and we were unable to recover it. 
00:28:50.567 [2024-07-15 16:10:19.198845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.567 [2024-07-15 16:10:19.198862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.567 qpair failed and we were unable to recover it. 00:28:50.567 [2024-07-15 16:10:19.199059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.567 [2024-07-15 16:10:19.199076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.567 qpair failed and we were unable to recover it. 00:28:50.567 [2024-07-15 16:10:19.199284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.567 [2024-07-15 16:10:19.199302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.567 qpair failed and we were unable to recover it. 00:28:50.567 [2024-07-15 16:10:19.199506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.567 [2024-07-15 16:10:19.199524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.567 qpair failed and we were unable to recover it. 00:28:50.567 [2024-07-15 16:10:19.199718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.567 [2024-07-15 16:10:19.199736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.567 qpair failed and we were unable to recover it. 00:28:50.567 [2024-07-15 16:10:19.199954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.567 [2024-07-15 16:10:19.199974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.567 qpair failed and we were unable to recover it. 00:28:50.567 [2024-07-15 16:10:19.200170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.567 [2024-07-15 16:10:19.200185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.567 qpair failed and we were unable to recover it. 00:28:50.567 [2024-07-15 16:10:19.200458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.567 [2024-07-15 16:10:19.200498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.567 qpair failed and we were unable to recover it. 00:28:50.567 [2024-07-15 16:10:19.200727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.567 [2024-07-15 16:10:19.200759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.567 qpair failed and we were unable to recover it. 00:28:50.567 [2024-07-15 16:10:19.201117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.567 [2024-07-15 16:10:19.201150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.567 qpair failed and we were unable to recover it. 
00:28:50.573 [2024-07-15 16:10:19.255499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.573 [2024-07-15 16:10:19.255531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.573 qpair failed and we were unable to recover it. 00:28:50.573 [2024-07-15 16:10:19.255811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.573 [2024-07-15 16:10:19.255842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.573 qpair failed and we were unable to recover it. 00:28:50.573 [2024-07-15 16:10:19.256079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.573 [2024-07-15 16:10:19.256110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.573 qpair failed and we were unable to recover it. 00:28:50.573 [2024-07-15 16:10:19.256352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.573 [2024-07-15 16:10:19.256386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.573 qpair failed and we were unable to recover it. 00:28:50.573 [2024-07-15 16:10:19.256701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.573 [2024-07-15 16:10:19.256734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.573 qpair failed and we were unable to recover it. 00:28:50.573 [2024-07-15 16:10:19.256947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.573 [2024-07-15 16:10:19.256980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.573 qpair failed and we were unable to recover it. 00:28:50.573 [2024-07-15 16:10:19.257193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.573 [2024-07-15 16:10:19.257236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.573 qpair failed and we were unable to recover it. 00:28:50.573 [2024-07-15 16:10:19.257393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.573 [2024-07-15 16:10:19.257425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.573 qpair failed and we were unable to recover it. 00:28:50.573 [2024-07-15 16:10:19.257704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.573 [2024-07-15 16:10:19.257743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.573 qpair failed and we were unable to recover it. 00:28:50.573 [2024-07-15 16:10:19.258028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.573 [2024-07-15 16:10:19.258061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.573 qpair failed and we were unable to recover it. 
00:28:50.573 [2024-07-15 16:10:19.258399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.573 [2024-07-15 16:10:19.258480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420
00:28:50.573 qpair failed and we were unable to recover it.
00:28:50.577 [... the same failure then repeats verbatim (timestamps aside) for 159 further connect() attempts on tqpair=0x1398ed0, the last at 2024-07-15 16:10:19.301260; duplicates elided ...]
00:28:50.577 [2024-07-15 16:10:19.301506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.577 [2024-07-15 16:10:19.301539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.577 qpair failed and we were unable to recover it. 00:28:50.577 [2024-07-15 16:10:19.301729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.577 [2024-07-15 16:10:19.301760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.577 qpair failed and we were unable to recover it. 00:28:50.577 [2024-07-15 16:10:19.302016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.577 [2024-07-15 16:10:19.302033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.577 qpair failed and we were unable to recover it. 00:28:50.577 [2024-07-15 16:10:19.302238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.577 [2024-07-15 16:10:19.302256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.577 qpair failed and we were unable to recover it. 00:28:50.577 [2024-07-15 16:10:19.302367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.577 [2024-07-15 16:10:19.302382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.577 qpair failed and we were unable to recover it. 00:28:50.577 [2024-07-15 16:10:19.302657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.577 [2024-07-15 16:10:19.302690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.577 qpair failed and we were unable to recover it. 00:28:50.577 [2024-07-15 16:10:19.302849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.577 [2024-07-15 16:10:19.302881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.577 qpair failed and we were unable to recover it. 00:28:50.577 [2024-07-15 16:10:19.303182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.577 [2024-07-15 16:10:19.303214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.577 qpair failed and we were unable to recover it. 00:28:50.577 [2024-07-15 16:10:19.303478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.577 [2024-07-15 16:10:19.303512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.577 qpair failed and we were unable to recover it. 00:28:50.577 [2024-07-15 16:10:19.303783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.577 [2024-07-15 16:10:19.303826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.577 qpair failed and we were unable to recover it. 
00:28:50.577 [2024-07-15 16:10:19.304020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.577 [2024-07-15 16:10:19.304037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.577 qpair failed and we were unable to recover it. 00:28:50.577 [2024-07-15 16:10:19.304181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.577 [2024-07-15 16:10:19.304199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.577 qpair failed and we were unable to recover it. 00:28:50.577 [2024-07-15 16:10:19.304422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.577 [2024-07-15 16:10:19.304455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.577 qpair failed and we were unable to recover it. 00:28:50.577 [2024-07-15 16:10:19.304759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.577 [2024-07-15 16:10:19.304792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.577 qpair failed and we were unable to recover it. 00:28:50.577 [2024-07-15 16:10:19.304952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.577 [2024-07-15 16:10:19.304984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.577 qpair failed and we were unable to recover it. 00:28:50.577 [2024-07-15 16:10:19.305242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.577 [2024-07-15 16:10:19.305276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.577 qpair failed and we were unable to recover it. 00:28:50.577 [2024-07-15 16:10:19.305550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.577 [2024-07-15 16:10:19.305570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.577 qpair failed and we were unable to recover it. 00:28:50.577 [2024-07-15 16:10:19.305780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.577 [2024-07-15 16:10:19.305798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.577 qpair failed and we were unable to recover it. 00:28:50.577 [2024-07-15 16:10:19.306020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.577 [2024-07-15 16:10:19.306052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.577 qpair failed and we were unable to recover it. 00:28:50.577 [2024-07-15 16:10:19.306380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.577 [2024-07-15 16:10:19.306414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.577 qpair failed and we were unable to recover it. 
00:28:50.577 [2024-07-15 16:10:19.306629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.577 [2024-07-15 16:10:19.306646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.577 qpair failed and we were unable to recover it. 00:28:50.577 [2024-07-15 16:10:19.306845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.577 [2024-07-15 16:10:19.306862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.577 qpair failed and we were unable to recover it. 00:28:50.577 [2024-07-15 16:10:19.307130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.577 [2024-07-15 16:10:19.307148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.577 qpair failed and we were unable to recover it. 00:28:50.577 [2024-07-15 16:10:19.307348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.577 [2024-07-15 16:10:19.307382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.577 qpair failed and we were unable to recover it. 00:28:50.577 [2024-07-15 16:10:19.307541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.577 [2024-07-15 16:10:19.307573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.577 qpair failed and we were unable to recover it. 00:28:50.577 [2024-07-15 16:10:19.307784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.577 [2024-07-15 16:10:19.307827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.577 qpair failed and we were unable to recover it. 00:28:50.577 [2024-07-15 16:10:19.308039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.577 [2024-07-15 16:10:19.308056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.578 qpair failed and we were unable to recover it. 00:28:50.578 [2024-07-15 16:10:19.308343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.578 [2024-07-15 16:10:19.308377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.578 qpair failed and we were unable to recover it. 00:28:50.578 [2024-07-15 16:10:19.308680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.578 [2024-07-15 16:10:19.308712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.578 qpair failed and we were unable to recover it. 00:28:50.578 [2024-07-15 16:10:19.308974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.578 [2024-07-15 16:10:19.309005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.578 qpair failed and we were unable to recover it. 
00:28:50.578 [2024-07-15 16:10:19.309329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.578 [2024-07-15 16:10:19.309364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.578 qpair failed and we were unable to recover it. 00:28:50.578 [2024-07-15 16:10:19.309600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.578 [2024-07-15 16:10:19.309632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.578 qpair failed and we were unable to recover it. 00:28:50.578 [2024-07-15 16:10:19.309900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.578 [2024-07-15 16:10:19.309917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.578 qpair failed and we were unable to recover it. 00:28:50.578 [2024-07-15 16:10:19.310127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.578 [2024-07-15 16:10:19.310144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.578 qpair failed and we were unable to recover it. 00:28:50.578 [2024-07-15 16:10:19.310390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.578 [2024-07-15 16:10:19.310426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.578 qpair failed and we were unable to recover it. 00:28:50.578 [2024-07-15 16:10:19.310658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.578 [2024-07-15 16:10:19.310691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.578 qpair failed and we were unable to recover it. 00:28:50.578 [2024-07-15 16:10:19.310999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.578 [2024-07-15 16:10:19.311032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.578 qpair failed and we were unable to recover it. 00:28:50.578 [2024-07-15 16:10:19.311350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.578 [2024-07-15 16:10:19.311384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.578 qpair failed and we were unable to recover it. 00:28:50.578 [2024-07-15 16:10:19.311561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.578 [2024-07-15 16:10:19.311594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.578 qpair failed and we were unable to recover it. 00:28:50.578 [2024-07-15 16:10:19.311817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.578 [2024-07-15 16:10:19.311850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.578 qpair failed and we were unable to recover it. 
00:28:50.578 [2024-07-15 16:10:19.312018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.578 [2024-07-15 16:10:19.312057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.578 qpair failed and we were unable to recover it. 00:28:50.578 [2024-07-15 16:10:19.312266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.578 [2024-07-15 16:10:19.312284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.578 qpair failed and we were unable to recover it. 00:28:50.578 [2024-07-15 16:10:19.312496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.578 [2024-07-15 16:10:19.312513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.578 qpair failed and we were unable to recover it. 00:28:50.578 [2024-07-15 16:10:19.312662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.578 [2024-07-15 16:10:19.312679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.578 qpair failed and we were unable to recover it. 00:28:50.578 [2024-07-15 16:10:19.312963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.578 [2024-07-15 16:10:19.312983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.578 qpair failed and we were unable to recover it. 00:28:50.578 [2024-07-15 16:10:19.313142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.578 [2024-07-15 16:10:19.313160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.578 qpair failed and we were unable to recover it. 00:28:50.578 [2024-07-15 16:10:19.313295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.578 [2024-07-15 16:10:19.313330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.578 qpair failed and we were unable to recover it. 00:28:50.578 [2024-07-15 16:10:19.313514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.578 [2024-07-15 16:10:19.313546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.578 qpair failed and we were unable to recover it. 00:28:50.578 [2024-07-15 16:10:19.313769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.578 [2024-07-15 16:10:19.313801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.578 qpair failed and we were unable to recover it. 00:28:50.578 [2024-07-15 16:10:19.314009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.578 [2024-07-15 16:10:19.314026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.578 qpair failed and we were unable to recover it. 
00:28:50.578 [2024-07-15 16:10:19.314307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.578 [2024-07-15 16:10:19.314339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.578 qpair failed and we were unable to recover it. 00:28:50.578 [2024-07-15 16:10:19.314563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.578 [2024-07-15 16:10:19.314595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.578 qpair failed and we were unable to recover it. 00:28:50.578 [2024-07-15 16:10:19.314873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.578 [2024-07-15 16:10:19.314905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.578 qpair failed and we were unable to recover it. 00:28:50.578 [2024-07-15 16:10:19.315120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.578 [2024-07-15 16:10:19.315153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.578 qpair failed and we were unable to recover it. 00:28:50.578 [2024-07-15 16:10:19.315423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.578 [2024-07-15 16:10:19.315458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.578 qpair failed and we were unable to recover it. 00:28:50.578 [2024-07-15 16:10:19.315719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.578 [2024-07-15 16:10:19.315735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.578 qpair failed and we were unable to recover it. 00:28:50.578 [2024-07-15 16:10:19.316058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.578 [2024-07-15 16:10:19.316091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.578 qpair failed and we were unable to recover it. 00:28:50.578 [2024-07-15 16:10:19.316345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.578 [2024-07-15 16:10:19.316380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.578 qpair failed and we were unable to recover it. 00:28:50.578 [2024-07-15 16:10:19.316663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.579 [2024-07-15 16:10:19.316695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.579 qpair failed and we were unable to recover it. 00:28:50.579 [2024-07-15 16:10:19.316880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.579 [2024-07-15 16:10:19.316913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.579 qpair failed and we were unable to recover it. 
00:28:50.579 [2024-07-15 16:10:19.317126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.579 [2024-07-15 16:10:19.317144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.579 qpair failed and we were unable to recover it. 00:28:50.579 [2024-07-15 16:10:19.317342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.579 [2024-07-15 16:10:19.317376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.579 qpair failed and we were unable to recover it. 00:28:50.579 [2024-07-15 16:10:19.317659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.579 [2024-07-15 16:10:19.317691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.579 qpair failed and we were unable to recover it. 00:28:50.579 [2024-07-15 16:10:19.318044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.579 [2024-07-15 16:10:19.318076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.579 qpair failed and we were unable to recover it. 00:28:50.579 [2024-07-15 16:10:19.318362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.579 [2024-07-15 16:10:19.318396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.579 qpair failed and we were unable to recover it. 00:28:50.579 [2024-07-15 16:10:19.318650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.579 [2024-07-15 16:10:19.318683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.579 qpair failed and we were unable to recover it. 00:28:50.579 [2024-07-15 16:10:19.318945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.579 [2024-07-15 16:10:19.318978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.579 qpair failed and we were unable to recover it. 00:28:50.579 [2024-07-15 16:10:19.319245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.579 [2024-07-15 16:10:19.319279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.579 qpair failed and we were unable to recover it. 00:28:50.579 [2024-07-15 16:10:19.319513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.579 [2024-07-15 16:10:19.319545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.579 qpair failed and we were unable to recover it. 00:28:50.579 [2024-07-15 16:10:19.319773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.579 [2024-07-15 16:10:19.319790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.579 qpair failed and we were unable to recover it. 
00:28:50.579 [2024-07-15 16:10:19.320095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.579 [2024-07-15 16:10:19.320127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.579 qpair failed and we were unable to recover it. 00:28:50.579 [2024-07-15 16:10:19.320384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.579 [2024-07-15 16:10:19.320419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.579 qpair failed and we were unable to recover it. 00:28:50.579 [2024-07-15 16:10:19.320654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.579 [2024-07-15 16:10:19.320686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.579 qpair failed and we were unable to recover it. 00:28:50.579 [2024-07-15 16:10:19.320924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.579 [2024-07-15 16:10:19.320956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.579 qpair failed and we were unable to recover it. 00:28:50.579 [2024-07-15 16:10:19.321124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.579 [2024-07-15 16:10:19.321141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.579 qpair failed and we were unable to recover it. 00:28:50.579 [2024-07-15 16:10:19.321399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.579 [2024-07-15 16:10:19.321418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.579 qpair failed and we were unable to recover it. 00:28:50.579 [2024-07-15 16:10:19.321567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.579 [2024-07-15 16:10:19.321599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.579 qpair failed and we were unable to recover it. 00:28:50.579 [2024-07-15 16:10:19.321757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.579 [2024-07-15 16:10:19.321791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.579 qpair failed and we were unable to recover it. 00:28:50.579 [2024-07-15 16:10:19.321968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.579 [2024-07-15 16:10:19.321999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.579 qpair failed and we were unable to recover it. 00:28:50.579 [2024-07-15 16:10:19.322240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.579 [2024-07-15 16:10:19.322275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.579 qpair failed and we were unable to recover it. 
00:28:50.579 [2024-07-15 16:10:19.322448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.579 [2024-07-15 16:10:19.322479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.579 qpair failed and we were unable to recover it. 00:28:50.579 [2024-07-15 16:10:19.322707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.579 [2024-07-15 16:10:19.322740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.579 qpair failed and we were unable to recover it. 00:28:50.579 [2024-07-15 16:10:19.322977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.579 [2024-07-15 16:10:19.323009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.579 qpair failed and we were unable to recover it. 00:28:50.579 [2024-07-15 16:10:19.323311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.579 [2024-07-15 16:10:19.323344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.579 qpair failed and we were unable to recover it. 00:28:50.579 [2024-07-15 16:10:19.323582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.579 [2024-07-15 16:10:19.323620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.579 qpair failed and we were unable to recover it. 00:28:50.579 [2024-07-15 16:10:19.323875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.579 [2024-07-15 16:10:19.323907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.579 qpair failed and we were unable to recover it. 00:28:50.579 [2024-07-15 16:10:19.324062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.579 [2024-07-15 16:10:19.324079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.579 qpair failed and we were unable to recover it. 00:28:50.579 [2024-07-15 16:10:19.324355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.579 [2024-07-15 16:10:19.324390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.579 qpair failed and we were unable to recover it. 00:28:50.579 [2024-07-15 16:10:19.324608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.579 [2024-07-15 16:10:19.324639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.579 qpair failed and we were unable to recover it. 00:28:50.579 [2024-07-15 16:10:19.324810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.579 [2024-07-15 16:10:19.324826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.579 qpair failed and we were unable to recover it. 
00:28:50.579 [2024-07-15 16:10:19.325032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.579 [2024-07-15 16:10:19.325065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.579 qpair failed and we were unable to recover it. 00:28:50.579 [2024-07-15 16:10:19.325234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.579 [2024-07-15 16:10:19.325267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.579 qpair failed and we were unable to recover it. 00:28:50.579 [2024-07-15 16:10:19.325489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.580 [2024-07-15 16:10:19.325522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.580 qpair failed and we were unable to recover it. 00:28:50.580 [2024-07-15 16:10:19.325744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.580 [2024-07-15 16:10:19.325761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.580 qpair failed and we were unable to recover it. 00:28:50.580 [2024-07-15 16:10:19.325893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.580 [2024-07-15 16:10:19.325923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.580 qpair failed and we were unable to recover it. 00:28:50.580 [2024-07-15 16:10:19.326234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.580 [2024-07-15 16:10:19.326268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.580 qpair failed and we were unable to recover it. 00:28:50.580 [2024-07-15 16:10:19.326440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.580 [2024-07-15 16:10:19.326473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.580 qpair failed and we were unable to recover it. 00:28:50.580 [2024-07-15 16:10:19.326704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.580 [2024-07-15 16:10:19.326737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.580 qpair failed and we were unable to recover it. 00:28:50.580 [2024-07-15 16:10:19.327069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.580 [2024-07-15 16:10:19.327102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.580 qpair failed and we were unable to recover it. 00:28:50.580 [2024-07-15 16:10:19.327336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.580 [2024-07-15 16:10:19.327370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.580 qpair failed and we were unable to recover it. 
00:28:50.580 [2024-07-15 16:10:19.327604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.580 [2024-07-15 16:10:19.327637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.580 qpair failed and we were unable to recover it. 00:28:50.580 [2024-07-15 16:10:19.327863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.580 [2024-07-15 16:10:19.327880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.580 qpair failed and we were unable to recover it. 00:28:50.580 [2024-07-15 16:10:19.328073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.580 [2024-07-15 16:10:19.328090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.580 qpair failed and we were unable to recover it. 00:28:50.580 [2024-07-15 16:10:19.328396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.580 [2024-07-15 16:10:19.328430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.580 qpair failed and we were unable to recover it. 00:28:50.580 [2024-07-15 16:10:19.328743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.580 [2024-07-15 16:10:19.328775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.580 qpair failed and we were unable to recover it. 00:28:50.580 [2024-07-15 16:10:19.329040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.580 [2024-07-15 16:10:19.329072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.580 qpair failed and we were unable to recover it. 00:28:50.580 [2024-07-15 16:10:19.329395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.580 [2024-07-15 16:10:19.329430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.580 qpair failed and we were unable to recover it. 00:28:50.580 [2024-07-15 16:10:19.329661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.580 [2024-07-15 16:10:19.329693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.580 qpair failed and we were unable to recover it. 00:28:50.580 [2024-07-15 16:10:19.330017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.580 [2024-07-15 16:10:19.330050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.580 qpair failed and we were unable to recover it. 00:28:50.580 [2024-07-15 16:10:19.330350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.580 [2024-07-15 16:10:19.330383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.580 qpair failed and we were unable to recover it. 
00:28:50.580 [2024-07-15 16:10:19.330605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.580 [2024-07-15 16:10:19.330637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.580 qpair failed and we were unable to recover it. 00:28:50.580 [2024-07-15 16:10:19.330796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.580 [2024-07-15 16:10:19.330816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.580 qpair failed and we were unable to recover it. 00:28:50.580 [2024-07-15 16:10:19.331139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.580 [2024-07-15 16:10:19.331171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.580 qpair failed and we were unable to recover it. 00:28:50.580 [2024-07-15 16:10:19.331447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.580 [2024-07-15 16:10:19.331479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.580 qpair failed and we were unable to recover it. 00:28:50.580 [2024-07-15 16:10:19.331761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.580 [2024-07-15 16:10:19.331794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.580 qpair failed and we were unable to recover it. 00:28:50.580 [2024-07-15 16:10:19.332032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.580 [2024-07-15 16:10:19.332049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.580 qpair failed and we were unable to recover it. 00:28:50.580 [2024-07-15 16:10:19.332312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.580 [2024-07-15 16:10:19.332330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.580 qpair failed and we were unable to recover it. 00:28:50.580 [2024-07-15 16:10:19.332549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.580 [2024-07-15 16:10:19.332566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.580 qpair failed and we were unable to recover it. 00:28:50.580 [2024-07-15 16:10:19.332759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.580 [2024-07-15 16:10:19.332775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.580 qpair failed and we were unable to recover it. 00:28:50.580 [2024-07-15 16:10:19.333054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.580 [2024-07-15 16:10:19.333085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.580 qpair failed and we were unable to recover it. 
00:28:50.580 [2024-07-15 16:10:19.333386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.580 [2024-07-15 16:10:19.333420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.580 qpair failed and we were unable to recover it. 00:28:50.580 [2024-07-15 16:10:19.333661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.580 [2024-07-15 16:10:19.333694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.580 qpair failed and we were unable to recover it. 00:28:50.580 [2024-07-15 16:10:19.334036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.580 [2024-07-15 16:10:19.334068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.580 qpair failed and we were unable to recover it. 00:28:50.580 [2024-07-15 16:10:19.334314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.580 [2024-07-15 16:10:19.334348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.580 qpair failed and we were unable to recover it. 00:28:50.580 [2024-07-15 16:10:19.334504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.580 [2024-07-15 16:10:19.334536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.580 qpair failed and we were unable to recover it. 00:28:50.580 [2024-07-15 16:10:19.334757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.580 [2024-07-15 16:10:19.334774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.580 qpair failed and we were unable to recover it. 00:28:50.580 [2024-07-15 16:10:19.334969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.580 [2024-07-15 16:10:19.334987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.580 qpair failed and we were unable to recover it. 00:28:50.580 [2024-07-15 16:10:19.335292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.580 [2024-07-15 16:10:19.335326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.580 qpair failed and we were unable to recover it. 00:28:50.580 [2024-07-15 16:10:19.335544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.580 [2024-07-15 16:10:19.335578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.580 qpair failed and we were unable to recover it. 00:28:50.580 [2024-07-15 16:10:19.335841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.580 [2024-07-15 16:10:19.335873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.580 qpair failed and we were unable to recover it. 
00:28:50.580 [2024-07-15 16:10:19.336219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.580 [2024-07-15 16:10:19.336268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420
00:28:50.580 qpair failed and we were unable to recover it.
00:28:50.580 [... the same three-line error (posix_sock_create: connect() failed, errno = 111 / nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it.) repeats continuously from 16:10:19.336502 through 16:10:19.391400, with only the timestamps advancing ...]
00:28:50.586 [2024-07-15 16:10:19.391600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.586 [2024-07-15 16:10:19.391631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420
00:28:50.586 qpair failed and we were unable to recover it.
00:28:50.586 [2024-07-15 16:10:19.391952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.586 [2024-07-15 16:10:19.391986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.586 qpair failed and we were unable to recover it. 00:28:50.586 [2024-07-15 16:10:19.392330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.586 [2024-07-15 16:10:19.392367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.586 qpair failed and we were unable to recover it. 00:28:50.586 [2024-07-15 16:10:19.392545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.586 [2024-07-15 16:10:19.392579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.586 qpair failed and we were unable to recover it. 00:28:50.586 [2024-07-15 16:10:19.392863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.586 [2024-07-15 16:10:19.392895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.586 qpair failed and we were unable to recover it. 00:28:50.586 [2024-07-15 16:10:19.394039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.587 [2024-07-15 16:10:19.394075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.587 qpair failed and we were unable to recover it. 00:28:50.587 [2024-07-15 16:10:19.394314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.587 [2024-07-15 16:10:19.394334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.587 qpair failed and we were unable to recover it. 00:28:50.587 [2024-07-15 16:10:19.394466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.587 [2024-07-15 16:10:19.394483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.587 qpair failed and we were unable to recover it. 00:28:50.587 [2024-07-15 16:10:19.394731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.587 [2024-07-15 16:10:19.394748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.587 qpair failed and we were unable to recover it. 00:28:50.587 [2024-07-15 16:10:19.394886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.587 [2024-07-15 16:10:19.394903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.587 qpair failed and we were unable to recover it. 00:28:50.587 [2024-07-15 16:10:19.395139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.587 [2024-07-15 16:10:19.395173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.587 qpair failed and we were unable to recover it. 
00:28:50.587 [2024-07-15 16:10:19.395394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.587 [2024-07-15 16:10:19.395427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.587 qpair failed and we were unable to recover it. 00:28:50.587 [2024-07-15 16:10:19.395598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.587 [2024-07-15 16:10:19.395630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.587 qpair failed and we were unable to recover it. 00:28:50.587 [2024-07-15 16:10:19.395862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.587 [2024-07-15 16:10:19.395894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.587 qpair failed and we were unable to recover it. 00:28:50.587 [2024-07-15 16:10:19.396163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.587 [2024-07-15 16:10:19.396195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.587 qpair failed and we were unable to recover it. 00:28:50.587 [2024-07-15 16:10:19.396422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.587 [2024-07-15 16:10:19.396439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.587 qpair failed and we were unable to recover it. 00:28:50.587 [2024-07-15 16:10:19.396619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.587 [2024-07-15 16:10:19.396641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.587 qpair failed and we were unable to recover it. 00:28:50.587 [2024-07-15 16:10:19.396928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.587 [2024-07-15 16:10:19.396971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.587 qpair failed and we were unable to recover it. 00:28:50.587 [2024-07-15 16:10:19.398476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.587 [2024-07-15 16:10:19.398511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.587 qpair failed and we were unable to recover it. 00:28:50.587 [2024-07-15 16:10:19.398684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.587 [2024-07-15 16:10:19.398701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.587 qpair failed and we were unable to recover it. 00:28:50.587 [2024-07-15 16:10:19.398853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.587 [2024-07-15 16:10:19.398884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.587 qpair failed and we were unable to recover it. 
00:28:50.587 [2024-07-15 16:10:19.399140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.587 [2024-07-15 16:10:19.399174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.587 qpair failed and we were unable to recover it. 00:28:50.587 [2024-07-15 16:10:19.399467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.587 [2024-07-15 16:10:19.399501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.587 qpair failed and we were unable to recover it. 00:28:50.587 [2024-07-15 16:10:19.399740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.587 [2024-07-15 16:10:19.399772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.587 qpair failed and we were unable to recover it. 00:28:50.587 [2024-07-15 16:10:19.400029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.587 [2024-07-15 16:10:19.400046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.587 qpair failed and we were unable to recover it. 00:28:50.587 [2024-07-15 16:10:19.400339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.587 [2024-07-15 16:10:19.400357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.587 qpair failed and we were unable to recover it. 00:28:50.587 [2024-07-15 16:10:19.400478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.587 [2024-07-15 16:10:19.400496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.587 qpair failed and we were unable to recover it. 00:28:50.587 [2024-07-15 16:10:19.400729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.587 [2024-07-15 16:10:19.400761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.587 qpair failed and we were unable to recover it. 00:28:50.587 [2024-07-15 16:10:19.401984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.587 [2024-07-15 16:10:19.402019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.587 qpair failed and we were unable to recover it. 00:28:50.587 [2024-07-15 16:10:19.402188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.587 [2024-07-15 16:10:19.402205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.587 qpair failed and we were unable to recover it. 00:28:50.587 [2024-07-15 16:10:19.402432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.587 [2024-07-15 16:10:19.402466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.587 qpair failed and we were unable to recover it. 
00:28:50.587 [2024-07-15 16:10:19.402635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.587 [2024-07-15 16:10:19.402667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.587 qpair failed and we were unable to recover it. 00:28:50.587 [2024-07-15 16:10:19.402960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.587 [2024-07-15 16:10:19.402978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.587 qpair failed and we were unable to recover it. 00:28:50.587 [2024-07-15 16:10:19.403111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.587 [2024-07-15 16:10:19.403144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.587 qpair failed and we were unable to recover it. 00:28:50.587 [2024-07-15 16:10:19.403388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.587 [2024-07-15 16:10:19.403422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.587 qpair failed and we were unable to recover it. 00:28:50.587 [2024-07-15 16:10:19.403686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.587 [2024-07-15 16:10:19.403718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.587 qpair failed and we were unable to recover it. 00:28:50.587 [2024-07-15 16:10:19.403983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.587 [2024-07-15 16:10:19.404015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.587 qpair failed and we were unable to recover it. 00:28:50.587 [2024-07-15 16:10:19.404308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.587 [2024-07-15 16:10:19.404342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.587 qpair failed and we were unable to recover it. 00:28:50.587 [2024-07-15 16:10:19.404588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.587 [2024-07-15 16:10:19.404620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.587 qpair failed and we were unable to recover it. 00:28:50.587 [2024-07-15 16:10:19.404786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.587 [2024-07-15 16:10:19.404803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.587 qpair failed and we were unable to recover it. 00:28:50.587 [2024-07-15 16:10:19.405028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.587 [2024-07-15 16:10:19.405061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.587 qpair failed and we were unable to recover it. 
00:28:50.587 [2024-07-15 16:10:19.405275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.587 [2024-07-15 16:10:19.405308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.587 qpair failed and we were unable to recover it. 00:28:50.587 [2024-07-15 16:10:19.405491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.587 [2024-07-15 16:10:19.405525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.587 qpair failed and we were unable to recover it. 00:28:50.587 [2024-07-15 16:10:19.405766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.587 [2024-07-15 16:10:19.405799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.587 qpair failed and we were unable to recover it. 00:28:50.587 [2024-07-15 16:10:19.406031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.587 [2024-07-15 16:10:19.406064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.587 qpair failed and we were unable to recover it. 00:28:50.587 [2024-07-15 16:10:19.406342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.588 [2024-07-15 16:10:19.406360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.588 qpair failed and we were unable to recover it. 00:28:50.588 [2024-07-15 16:10:19.406556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.588 [2024-07-15 16:10:19.406572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.588 qpair failed and we were unable to recover it. 00:28:50.588 [2024-07-15 16:10:19.406716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.588 [2024-07-15 16:10:19.406747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.588 qpair failed and we were unable to recover it. 00:28:50.588 [2024-07-15 16:10:19.406921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.588 [2024-07-15 16:10:19.406954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.588 qpair failed and we were unable to recover it. 00:28:50.588 [2024-07-15 16:10:19.407185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.588 [2024-07-15 16:10:19.407217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.588 qpair failed and we were unable to recover it. 00:28:50.588 [2024-07-15 16:10:19.407426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.588 [2024-07-15 16:10:19.407444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.588 qpair failed and we were unable to recover it. 
00:28:50.588 [2024-07-15 16:10:19.407631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.588 [2024-07-15 16:10:19.407650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.588 qpair failed and we were unable to recover it. 00:28:50.588 [2024-07-15 16:10:19.407854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.588 [2024-07-15 16:10:19.407870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.588 qpair failed and we were unable to recover it. 00:28:50.588 [2024-07-15 16:10:19.407997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.588 [2024-07-15 16:10:19.408014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.588 qpair failed and we were unable to recover it. 00:28:50.588 [2024-07-15 16:10:19.408238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.588 [2024-07-15 16:10:19.408273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.588 qpair failed and we were unable to recover it. 00:28:50.588 [2024-07-15 16:10:19.408555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.588 [2024-07-15 16:10:19.408589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.588 qpair failed and we were unable to recover it. 00:28:50.588 [2024-07-15 16:10:19.408800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.588 [2024-07-15 16:10:19.408833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.588 qpair failed and we were unable to recover it. 00:28:50.588 [2024-07-15 16:10:19.408894] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13a7000 (9): Bad file descriptor 00:28:50.588 [2024-07-15 16:10:19.409249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.588 [2024-07-15 16:10:19.409322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.588 qpair failed and we were unable to recover it. 00:28:50.588 [2024-07-15 16:10:19.409604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.588 [2024-07-15 16:10:19.409645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.588 qpair failed and we were unable to recover it. 00:28:50.588 [2024-07-15 16:10:19.409815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.588 [2024-07-15 16:10:19.409860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.588 qpair failed and we were unable to recover it. 
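At the end of the previous block the failure mode shifts: nvme_tcp_qpair_process_completions cannot flush tqpair=0x13a7000 and reports (9), i.e. EBADF, consistent with the underlying socket already having been torn down, and from that point the connect errors carry fresh tqpair values such as 0x7ffa4c000b90, which look like heap addresses of newly allocated qpair contexts in the 64-bit process (an inference from the pointer values, not something the log states). Both errno values can be decoded with strerror(3); a standalone sketch, assuming Linux errno numbering:

    /* Standalone sketch: decode the two errno values seen in this log
     * (Linux numbering; other platforms number errnos differently). */
    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        printf("errno 111 -> %s\n", strerror(111)); /* Connection refused */
        printf("errno   9 -> %s\n", strerror(9));   /* Bad file descriptor */
        return 0;
    }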
00:28:50.588 [2024-07-15 16:10:19.410018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.588 [2024-07-15 16:10:19.410035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.588 qpair failed and we were unable to recover it. 00:28:50.588 [2024-07-15 16:10:19.410293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.588 [2024-07-15 16:10:19.410346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.588 qpair failed and we were unable to recover it. 00:28:50.588 [2024-07-15 16:10:19.410524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.588 [2024-07-15 16:10:19.410557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.588 qpair failed and we were unable to recover it. 00:28:50.588 [2024-07-15 16:10:19.410778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.588 [2024-07-15 16:10:19.410814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.588 qpair failed and we were unable to recover it. 00:28:50.588 [2024-07-15 16:10:19.411061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.588 [2024-07-15 16:10:19.411079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.588 qpair failed and we were unable to recover it. 00:28:50.588 [2024-07-15 16:10:19.411289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.588 [2024-07-15 16:10:19.411327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.588 qpair failed and we were unable to recover it. 00:28:50.588 [2024-07-15 16:10:19.411508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.588 [2024-07-15 16:10:19.411543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.588 qpair failed and we were unable to recover it. 00:28:50.588 [2024-07-15 16:10:19.411823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.588 [2024-07-15 16:10:19.411866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.588 qpair failed and we were unable to recover it. 00:28:50.588 [2024-07-15 16:10:19.412014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.588 [2024-07-15 16:10:19.412031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.588 qpair failed and we were unable to recover it. 00:28:50.588 [2024-07-15 16:10:19.412195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.588 [2024-07-15 16:10:19.412239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.588 qpair failed and we were unable to recover it. 
00:28:50.588 [2024-07-15 16:10:19.412486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.588 [2024-07-15 16:10:19.412518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.588 qpair failed and we were unable to recover it. 00:28:50.588 [2024-07-15 16:10:19.412744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.588 [2024-07-15 16:10:19.412776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.588 qpair failed and we were unable to recover it. 00:28:50.588 [2024-07-15 16:10:19.413086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.588 [2024-07-15 16:10:19.413123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.588 qpair failed and we were unable to recover it. 00:28:50.588 [2024-07-15 16:10:19.413287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.588 [2024-07-15 16:10:19.413306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.588 qpair failed and we were unable to recover it. 00:28:50.588 [2024-07-15 16:10:19.413515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.588 [2024-07-15 16:10:19.413548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.588 qpair failed and we were unable to recover it. 00:28:50.588 [2024-07-15 16:10:19.413722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.588 [2024-07-15 16:10:19.413754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.588 qpair failed and we were unable to recover it. 00:28:50.588 [2024-07-15 16:10:19.414866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.588 [2024-07-15 16:10:19.414901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.588 qpair failed and we were unable to recover it. 00:28:50.588 [2024-07-15 16:10:19.415131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.588 [2024-07-15 16:10:19.415147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.588 qpair failed and we were unable to recover it. 00:28:50.588 [2024-07-15 16:10:19.415347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.588 [2024-07-15 16:10:19.415365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.589 qpair failed and we were unable to recover it. 00:28:50.589 [2024-07-15 16:10:19.415521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.589 [2024-07-15 16:10:19.415554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.589 qpair failed and we were unable to recover it. 
00:28:50.589 [2024-07-15 16:10:19.415817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.589 [2024-07-15 16:10:19.415850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.589 qpair failed and we were unable to recover it. 00:28:50.589 [2024-07-15 16:10:19.416132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.589 [2024-07-15 16:10:19.416149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.589 qpair failed and we were unable to recover it. 00:28:50.589 [2024-07-15 16:10:19.416361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.589 [2024-07-15 16:10:19.416378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.589 qpair failed and we were unable to recover it. 00:28:50.589 [2024-07-15 16:10:19.416584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.589 [2024-07-15 16:10:19.416605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.589 qpair failed and we were unable to recover it. 00:28:50.589 [2024-07-15 16:10:19.416801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.589 [2024-07-15 16:10:19.416818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.589 qpair failed and we were unable to recover it. 00:28:50.589 [2024-07-15 16:10:19.417025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.589 [2024-07-15 16:10:19.417043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.589 qpair failed and we were unable to recover it. 00:28:50.589 [2024-07-15 16:10:19.417236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.589 [2024-07-15 16:10:19.417284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.589 qpair failed and we were unable to recover it. 00:28:50.589 [2024-07-15 16:10:19.417452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.589 [2024-07-15 16:10:19.417485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.589 qpair failed and we were unable to recover it. 00:28:50.589 [2024-07-15 16:10:19.417741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.589 [2024-07-15 16:10:19.417773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.589 qpair failed and we were unable to recover it. 00:28:50.589 [2024-07-15 16:10:19.418001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.589 [2024-07-15 16:10:19.418033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.589 qpair failed and we were unable to recover it. 
00:28:50.589 [2024-07-15 16:10:19.418266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.589 [2024-07-15 16:10:19.418284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.589 qpair failed and we were unable to recover it. 00:28:50.589 [2024-07-15 16:10:19.418419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.589 [2024-07-15 16:10:19.418437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.589 qpair failed and we were unable to recover it. 00:28:50.589 [2024-07-15 16:10:19.418631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.589 [2024-07-15 16:10:19.418664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.589 qpair failed and we were unable to recover it. 00:28:50.589 [2024-07-15 16:10:19.418897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.589 [2024-07-15 16:10:19.418930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.589 qpair failed and we were unable to recover it. 00:28:50.589 [2024-07-15 16:10:19.419162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.589 [2024-07-15 16:10:19.419195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.589 qpair failed and we were unable to recover it. 00:28:50.589 [2024-07-15 16:10:19.420869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.589 [2024-07-15 16:10:19.420907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:50.589 qpair failed and we were unable to recover it. 00:28:50.589 [2024-07-15 16:10:19.421167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.589 [2024-07-15 16:10:19.421181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:50.589 qpair failed and we were unable to recover it. 00:28:50.589 [2024-07-15 16:10:19.421381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.589 [2024-07-15 16:10:19.421396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:50.589 qpair failed and we were unable to recover it. 00:28:50.589 [2024-07-15 16:10:19.421587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.589 [2024-07-15 16:10:19.421600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:50.589 qpair failed and we were unable to recover it. 00:28:50.589 [2024-07-15 16:10:19.421796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.589 [2024-07-15 16:10:19.421829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:50.589 qpair failed and we were unable to recover it. 
00:28:50.589 [2024-07-15 16:10:19.422072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.589 [2024-07-15 16:10:19.422105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:50.589 qpair failed and we were unable to recover it. 00:28:50.589 [2024-07-15 16:10:19.422386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.589 [2024-07-15 16:10:19.422400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:50.589 qpair failed and we were unable to recover it. 00:28:50.589 [2024-07-15 16:10:19.422559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.589 [2024-07-15 16:10:19.422573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:50.589 qpair failed and we were unable to recover it. 00:28:50.589 [2024-07-15 16:10:19.422698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.589 [2024-07-15 16:10:19.422730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:50.589 qpair failed and we were unable to recover it. 00:28:50.589 [2024-07-15 16:10:19.423087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.589 [2024-07-15 16:10:19.423120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:50.589 qpair failed and we were unable to recover it. 00:28:50.589 [2024-07-15 16:10:19.423343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.589 [2024-07-15 16:10:19.423376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:50.589 qpair failed and we were unable to recover it. 00:28:50.589 [2024-07-15 16:10:19.423628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.589 [2024-07-15 16:10:19.423660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:50.589 qpair failed and we were unable to recover it. 00:28:50.589 [2024-07-15 16:10:19.423980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.589 [2024-07-15 16:10:19.424013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:50.589 qpair failed and we were unable to recover it. 00:28:50.589 [2024-07-15 16:10:19.424320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.589 [2024-07-15 16:10:19.424335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:50.589 qpair failed and we were unable to recover it. 00:28:50.589 [2024-07-15 16:10:19.424476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.589 [2024-07-15 16:10:19.424490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:50.589 qpair failed and we were unable to recover it. 
00:28:50.589 [2024-07-15 16:10:19.425105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.589 [2024-07-15 16:10:19.425131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:50.589 qpair failed and we were unable to recover it. 00:28:50.589 [2024-07-15 16:10:19.425356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.589 [2024-07-15 16:10:19.425371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:50.589 qpair failed and we were unable to recover it. 00:28:50.589 [2024-07-15 16:10:19.425559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.589 [2024-07-15 16:10:19.425573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:50.589 qpair failed and we were unable to recover it. 00:28:50.589 [2024-07-15 16:10:19.425755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.589 [2024-07-15 16:10:19.425787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:50.589 qpair failed and we were unable to recover it. 00:28:50.589 [2024-07-15 16:10:19.426074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.589 [2024-07-15 16:10:19.426107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:50.589 qpair failed and we were unable to recover it. 00:28:50.589 [2024-07-15 16:10:19.426283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.589 [2024-07-15 16:10:19.426297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:50.589 qpair failed and we were unable to recover it. 00:28:50.589 [2024-07-15 16:10:19.426450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.589 [2024-07-15 16:10:19.426482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:50.589 qpair failed and we were unable to recover it. 00:28:50.589 [2024-07-15 16:10:19.426714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.589 [2024-07-15 16:10:19.426747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:50.589 qpair failed and we were unable to recover it. 00:28:50.589 [2024-07-15 16:10:19.426996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.589 [2024-07-15 16:10:19.427030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:50.589 qpair failed and we were unable to recover it. 00:28:50.590 [2024-07-15 16:10:19.427208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.590 [2024-07-15 16:10:19.427222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:50.590 qpair failed and we were unable to recover it. 
00:28:50.590 [2024-07-15 16:10:19.428232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.590 [2024-07-15 16:10:19.428281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:50.590 qpair failed and we were unable to recover it. 00:28:50.590 [2024-07-15 16:10:19.428432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.590 [2024-07-15 16:10:19.428448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:50.590 qpair failed and we were unable to recover it. 00:28:50.590 [2024-07-15 16:10:19.428644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.590 [2024-07-15 16:10:19.428659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:50.590 qpair failed and we were unable to recover it. 00:28:50.590 [2024-07-15 16:10:19.428875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.590 [2024-07-15 16:10:19.428915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:50.590 qpair failed and we were unable to recover it. 00:28:50.590 [2024-07-15 16:10:19.429917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.590 [2024-07-15 16:10:19.429945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:50.590 qpair failed and we were unable to recover it. 00:28:50.590 [2024-07-15 16:10:19.430145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.590 [2024-07-15 16:10:19.430159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:50.590 qpair failed and we were unable to recover it. 00:28:50.590 [2024-07-15 16:10:19.430389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.590 [2024-07-15 16:10:19.430403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:50.590 qpair failed and we were unable to recover it. 00:28:50.590 [2024-07-15 16:10:19.431301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.590 [2024-07-15 16:10:19.431329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:50.590 qpair failed and we were unable to recover it. 00:28:50.590 [2024-07-15 16:10:19.431482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.590 [2024-07-15 16:10:19.431497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:50.590 qpair failed and we were unable to recover it. 00:28:50.590 [2024-07-15 16:10:19.431632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.590 [2024-07-15 16:10:19.431645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:50.590 qpair failed and we were unable to recover it. 
00:28:50.590 [2024-07-15 16:10:19.431770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.590 [2024-07-15 16:10:19.431783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420
00:28:50.590 qpair failed and we were unable to recover it.
00:28:50.590 [... the same three-line failure (connect() errno = 111; sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it.") repeats for every connect attempt from 16:10:19.433 through 16:10:19.438 ...]
00:28:50.590 [2024-07-15 16:10:19.438274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.590 [2024-07-15 16:10:19.438350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420
00:28:50.590 qpair failed and we were unable to recover it.
00:28:50.595 [... the identical failure then repeats against tqpair=0x7ffa4c000b90 (same target 10.0.0.2, port 4420, errno = 111) for every connect attempt from 16:10:19.438 through 16:10:19.480 ...]
00:28:50.595 [2024-07-15 16:10:19.481003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.595 [2024-07-15 16:10:19.481019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.595 qpair failed and we were unable to recover it. 00:28:50.595 [2024-07-15 16:10:19.481162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.595 [2024-07-15 16:10:19.481178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.595 qpair failed and we were unable to recover it. 00:28:50.595 [2024-07-15 16:10:19.481299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.595 [2024-07-15 16:10:19.481317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.595 qpair failed and we were unable to recover it. 00:28:50.595 [2024-07-15 16:10:19.481591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.595 [2024-07-15 16:10:19.481623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.595 qpair failed and we were unable to recover it. 00:28:50.595 [2024-07-15 16:10:19.481867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.595 [2024-07-15 16:10:19.481899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.595 qpair failed and we were unable to recover it. 00:28:50.595 [2024-07-15 16:10:19.482109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.595 [2024-07-15 16:10:19.482127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.595 qpair failed and we were unable to recover it. 00:28:50.595 [2024-07-15 16:10:19.482247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.595 [2024-07-15 16:10:19.482264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.595 qpair failed and we were unable to recover it. 00:28:50.595 [2024-07-15 16:10:19.482460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.595 [2024-07-15 16:10:19.482478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.595 qpair failed and we were unable to recover it. 00:28:50.871 [2024-07-15 16:10:19.482633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.871 [2024-07-15 16:10:19.482649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.871 qpair failed and we were unable to recover it. 00:28:50.871 [2024-07-15 16:10:19.482832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.871 [2024-07-15 16:10:19.482848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.871 qpair failed and we were unable to recover it. 
00:28:50.871 [2024-07-15 16:10:19.482983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.871 [2024-07-15 16:10:19.483002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.871 qpair failed and we were unable to recover it. 00:28:50.872 [2024-07-15 16:10:19.483162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.872 [2024-07-15 16:10:19.483180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.872 qpair failed and we were unable to recover it. 00:28:50.872 [2024-07-15 16:10:19.483311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.872 [2024-07-15 16:10:19.483328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.872 qpair failed and we were unable to recover it. 00:28:50.872 [2024-07-15 16:10:19.483589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.872 [2024-07-15 16:10:19.483606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.872 qpair failed and we were unable to recover it. 00:28:50.872 [2024-07-15 16:10:19.483879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.872 [2024-07-15 16:10:19.483895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.872 qpair failed and we were unable to recover it. 00:28:50.872 [2024-07-15 16:10:19.484004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.872 [2024-07-15 16:10:19.484020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.872 qpair failed and we were unable to recover it. 00:28:50.872 [2024-07-15 16:10:19.484154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.872 [2024-07-15 16:10:19.484170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.872 qpair failed and we were unable to recover it. 00:28:50.872 [2024-07-15 16:10:19.484360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.872 [2024-07-15 16:10:19.484377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.872 qpair failed and we were unable to recover it. 00:28:50.872 [2024-07-15 16:10:19.484583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.872 [2024-07-15 16:10:19.484600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.872 qpair failed and we were unable to recover it. 00:28:50.872 [2024-07-15 16:10:19.484783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.872 [2024-07-15 16:10:19.484799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.872 qpair failed and we were unable to recover it. 
00:28:50.872 [2024-07-15 16:10:19.484982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.872 [2024-07-15 16:10:19.484998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.872 qpair failed and we were unable to recover it. 00:28:50.872 [2024-07-15 16:10:19.485199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.872 [2024-07-15 16:10:19.485215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.872 qpair failed and we were unable to recover it. 00:28:50.872 [2024-07-15 16:10:19.485450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.872 [2024-07-15 16:10:19.485467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.872 qpair failed and we were unable to recover it. 00:28:50.872 [2024-07-15 16:10:19.485684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.872 [2024-07-15 16:10:19.485700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.872 qpair failed and we were unable to recover it. 00:28:50.872 [2024-07-15 16:10:19.485784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.872 [2024-07-15 16:10:19.485800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.872 qpair failed and we were unable to recover it. 00:28:50.872 [2024-07-15 16:10:19.485973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.872 [2024-07-15 16:10:19.485989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.872 qpair failed and we were unable to recover it. 00:28:50.872 [2024-07-15 16:10:19.486171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.872 [2024-07-15 16:10:19.486186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.872 qpair failed and we were unable to recover it. 00:28:50.872 [2024-07-15 16:10:19.486386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.872 [2024-07-15 16:10:19.486403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.872 qpair failed and we were unable to recover it. 00:28:50.872 [2024-07-15 16:10:19.486578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.872 [2024-07-15 16:10:19.486595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.872 qpair failed and we were unable to recover it. 00:28:50.872 [2024-07-15 16:10:19.486764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.872 [2024-07-15 16:10:19.486780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.872 qpair failed and we were unable to recover it. 
00:28:50.872 [2024-07-15 16:10:19.487026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.872 [2024-07-15 16:10:19.487058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.872 qpair failed and we were unable to recover it. 00:28:50.872 [2024-07-15 16:10:19.487278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.872 [2024-07-15 16:10:19.487311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.872 qpair failed and we were unable to recover it. 00:28:50.872 [2024-07-15 16:10:19.487548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.872 [2024-07-15 16:10:19.487580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.872 qpair failed and we were unable to recover it. 00:28:50.872 [2024-07-15 16:10:19.487855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.872 [2024-07-15 16:10:19.487886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.872 qpair failed and we were unable to recover it. 00:28:50.872 [2024-07-15 16:10:19.488091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.872 [2024-07-15 16:10:19.488123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.872 qpair failed and we were unable to recover it. 00:28:50.872 [2024-07-15 16:10:19.488341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.872 [2024-07-15 16:10:19.488358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.872 qpair failed and we were unable to recover it. 00:28:50.872 [2024-07-15 16:10:19.488535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.872 [2024-07-15 16:10:19.488551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.872 qpair failed and we were unable to recover it. 00:28:50.872 [2024-07-15 16:10:19.488824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.872 [2024-07-15 16:10:19.488856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.872 qpair failed and we were unable to recover it. 00:28:50.872 [2024-07-15 16:10:19.489059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.872 [2024-07-15 16:10:19.489089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.872 qpair failed and we were unable to recover it. 00:28:50.872 [2024-07-15 16:10:19.489320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.872 [2024-07-15 16:10:19.489352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.872 qpair failed and we were unable to recover it. 
00:28:50.872 [2024-07-15 16:10:19.489614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.872 [2024-07-15 16:10:19.489645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.872 qpair failed and we were unable to recover it. 00:28:50.872 [2024-07-15 16:10:19.489851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.872 [2024-07-15 16:10:19.489883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.872 qpair failed and we were unable to recover it. 00:28:50.872 [2024-07-15 16:10:19.490058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.872 [2024-07-15 16:10:19.490089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.872 qpair failed and we were unable to recover it. 00:28:50.872 [2024-07-15 16:10:19.490357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.872 [2024-07-15 16:10:19.490374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.872 qpair failed and we were unable to recover it. 00:28:50.872 [2024-07-15 16:10:19.490485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.872 [2024-07-15 16:10:19.490501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.872 qpair failed and we were unable to recover it. 00:28:50.872 [2024-07-15 16:10:19.490685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.872 [2024-07-15 16:10:19.490702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.872 qpair failed and we were unable to recover it. 00:28:50.872 [2024-07-15 16:10:19.490898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.872 [2024-07-15 16:10:19.490928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.872 qpair failed and we were unable to recover it. 00:28:50.872 [2024-07-15 16:10:19.491205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.872 [2024-07-15 16:10:19.491245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.872 qpair failed and we were unable to recover it. 00:28:50.872 [2024-07-15 16:10:19.491451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.872 [2024-07-15 16:10:19.491482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.872 qpair failed and we were unable to recover it. 00:28:50.872 [2024-07-15 16:10:19.491607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.872 [2024-07-15 16:10:19.491638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.872 qpair failed and we were unable to recover it. 
00:28:50.872 [2024-07-15 16:10:19.491847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.872 [2024-07-15 16:10:19.491884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.872 qpair failed and we were unable to recover it. 00:28:50.872 [2024-07-15 16:10:19.492111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.873 [2024-07-15 16:10:19.492142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.873 qpair failed and we were unable to recover it. 00:28:50.873 [2024-07-15 16:10:19.492416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.873 [2024-07-15 16:10:19.492457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.873 qpair failed and we were unable to recover it. 00:28:50.873 [2024-07-15 16:10:19.492666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.873 [2024-07-15 16:10:19.492682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.873 qpair failed and we were unable to recover it. 00:28:50.873 [2024-07-15 16:10:19.492866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.873 [2024-07-15 16:10:19.492882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.873 qpair failed and we were unable to recover it. 00:28:50.873 [2024-07-15 16:10:19.493151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.873 [2024-07-15 16:10:19.493183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.873 qpair failed and we were unable to recover it. 00:28:50.873 [2024-07-15 16:10:19.493464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.873 [2024-07-15 16:10:19.493497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.873 qpair failed and we were unable to recover it. 00:28:50.873 [2024-07-15 16:10:19.493775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.873 [2024-07-15 16:10:19.493806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.873 qpair failed and we were unable to recover it. 00:28:50.873 [2024-07-15 16:10:19.493973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.873 [2024-07-15 16:10:19.494004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.873 qpair failed and we were unable to recover it. 00:28:50.873 [2024-07-15 16:10:19.494177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.873 [2024-07-15 16:10:19.494223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.873 qpair failed and we were unable to recover it. 
00:28:50.873 [2024-07-15 16:10:19.494385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.873 [2024-07-15 16:10:19.494401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.873 qpair failed and we were unable to recover it. 00:28:50.873 [2024-07-15 16:10:19.494524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.873 [2024-07-15 16:10:19.494555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.873 qpair failed and we were unable to recover it. 00:28:50.873 [2024-07-15 16:10:19.494769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.873 [2024-07-15 16:10:19.494800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.873 qpair failed and we were unable to recover it. 00:28:50.873 [2024-07-15 16:10:19.495034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.873 [2024-07-15 16:10:19.495065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.873 qpair failed and we were unable to recover it. 00:28:50.873 [2024-07-15 16:10:19.495309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.873 [2024-07-15 16:10:19.495342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.873 qpair failed and we were unable to recover it. 00:28:50.873 [2024-07-15 16:10:19.495546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.873 [2024-07-15 16:10:19.495578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.873 qpair failed and we were unable to recover it. 00:28:50.873 [2024-07-15 16:10:19.495731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.873 [2024-07-15 16:10:19.495762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.873 qpair failed and we were unable to recover it. 00:28:50.873 [2024-07-15 16:10:19.495878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.873 [2024-07-15 16:10:19.495908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.873 qpair failed and we were unable to recover it. 00:28:50.873 [2024-07-15 16:10:19.496133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.873 [2024-07-15 16:10:19.496164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.873 qpair failed and we were unable to recover it. 00:28:50.873 [2024-07-15 16:10:19.496397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.873 [2024-07-15 16:10:19.496429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.873 qpair failed and we were unable to recover it. 
00:28:50.873 [2024-07-15 16:10:19.496567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.873 [2024-07-15 16:10:19.496583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.873 qpair failed and we were unable to recover it. 00:28:50.873 [2024-07-15 16:10:19.496822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.873 [2024-07-15 16:10:19.496854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.873 qpair failed and we were unable to recover it. 00:28:50.873 [2024-07-15 16:10:19.497071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.873 [2024-07-15 16:10:19.497103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.873 qpair failed and we were unable to recover it. 00:28:50.873 [2024-07-15 16:10:19.497248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.873 [2024-07-15 16:10:19.497281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.873 qpair failed and we were unable to recover it. 00:28:50.873 [2024-07-15 16:10:19.497497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.873 [2024-07-15 16:10:19.497513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.873 qpair failed and we were unable to recover it. 00:28:50.873 [2024-07-15 16:10:19.497614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.873 [2024-07-15 16:10:19.497630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.873 qpair failed and we were unable to recover it. 00:28:50.873 [2024-07-15 16:10:19.497872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.873 [2024-07-15 16:10:19.497903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.873 qpair failed and we were unable to recover it. 00:28:50.873 [2024-07-15 16:10:19.498142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.873 [2024-07-15 16:10:19.498174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.873 qpair failed and we were unable to recover it. 00:28:50.873 [2024-07-15 16:10:19.498335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.873 [2024-07-15 16:10:19.498369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.873 qpair failed and we were unable to recover it. 00:28:50.873 [2024-07-15 16:10:19.498645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.873 [2024-07-15 16:10:19.498679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.873 qpair failed and we were unable to recover it. 
00:28:50.873 [2024-07-15 16:10:19.498892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.873 [2024-07-15 16:10:19.498924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.873 qpair failed and we were unable to recover it. 00:28:50.873 [2024-07-15 16:10:19.499080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.873 [2024-07-15 16:10:19.499113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.873 qpair failed and we were unable to recover it. 00:28:50.873 [2024-07-15 16:10:19.499420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.873 [2024-07-15 16:10:19.499454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.873 qpair failed and we were unable to recover it. 00:28:50.873 [2024-07-15 16:10:19.499593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.873 [2024-07-15 16:10:19.499623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.873 qpair failed and we were unable to recover it. 00:28:50.873 [2024-07-15 16:10:19.499921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.873 [2024-07-15 16:10:19.499954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.873 qpair failed and we were unable to recover it. 00:28:50.873 [2024-07-15 16:10:19.500295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.873 [2024-07-15 16:10:19.500328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.873 qpair failed and we were unable to recover it. 00:28:50.873 [2024-07-15 16:10:19.500552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.873 [2024-07-15 16:10:19.500584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.873 qpair failed and we were unable to recover it. 00:28:50.873 [2024-07-15 16:10:19.500751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.873 [2024-07-15 16:10:19.500783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.873 qpair failed and we were unable to recover it. 00:28:50.873 [2024-07-15 16:10:19.500990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.873 [2024-07-15 16:10:19.501022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.873 qpair failed and we were unable to recover it. 00:28:50.873 [2024-07-15 16:10:19.501295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.873 [2024-07-15 16:10:19.501328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.873 qpair failed and we were unable to recover it. 
00:28:50.873 [2024-07-15 16:10:19.501543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.873 [2024-07-15 16:10:19.501580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.873 qpair failed and we were unable to recover it. 00:28:50.873 [2024-07-15 16:10:19.501819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.873 [2024-07-15 16:10:19.501851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.873 qpair failed and we were unable to recover it. 00:28:50.873 [2024-07-15 16:10:19.502140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.874 [2024-07-15 16:10:19.502171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.874 qpair failed and we were unable to recover it. 00:28:50.874 [2024-07-15 16:10:19.502495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.874 [2024-07-15 16:10:19.502529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.874 qpair failed and we were unable to recover it. 00:28:50.874 [2024-07-15 16:10:19.502686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.874 [2024-07-15 16:10:19.502717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.874 qpair failed and we were unable to recover it. 00:28:50.874 [2024-07-15 16:10:19.502867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.874 [2024-07-15 16:10:19.502899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.874 qpair failed and we were unable to recover it. 00:28:50.874 [2024-07-15 16:10:19.503136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.874 [2024-07-15 16:10:19.503168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.874 qpair failed and we were unable to recover it. 00:28:50.874 [2024-07-15 16:10:19.503315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.874 [2024-07-15 16:10:19.503347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.874 qpair failed and we were unable to recover it. 00:28:50.874 [2024-07-15 16:10:19.503512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.874 [2024-07-15 16:10:19.503553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.874 qpair failed and we were unable to recover it. 00:28:50.874 [2024-07-15 16:10:19.503822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.874 [2024-07-15 16:10:19.503853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.874 qpair failed and we were unable to recover it. 
00:28:50.874 [2024-07-15 16:10:19.504072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.874 [2024-07-15 16:10:19.504103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.874 qpair failed and we were unable to recover it. 00:28:50.874 [2024-07-15 16:10:19.504277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.874 [2024-07-15 16:10:19.504318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.874 qpair failed and we were unable to recover it. 00:28:50.874 [2024-07-15 16:10:19.504505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.874 [2024-07-15 16:10:19.504521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.874 qpair failed and we were unable to recover it. 00:28:50.874 [2024-07-15 16:10:19.504620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.874 [2024-07-15 16:10:19.504635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.874 qpair failed and we were unable to recover it. 00:28:50.874 [2024-07-15 16:10:19.504737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.874 [2024-07-15 16:10:19.504753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.874 qpair failed and we were unable to recover it. 00:28:50.874 [2024-07-15 16:10:19.504992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.874 [2024-07-15 16:10:19.505008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.874 qpair failed and we were unable to recover it. 00:28:50.874 [2024-07-15 16:10:19.505135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.874 [2024-07-15 16:10:19.505151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.874 qpair failed and we were unable to recover it. 00:28:50.874 [2024-07-15 16:10:19.505393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.874 [2024-07-15 16:10:19.505425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.874 qpair failed and we were unable to recover it. 00:28:50.874 [2024-07-15 16:10:19.505574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.874 [2024-07-15 16:10:19.505605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.874 qpair failed and we were unable to recover it. 00:28:50.874 [2024-07-15 16:10:19.505826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.874 [2024-07-15 16:10:19.505858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.874 qpair failed and we were unable to recover it. 
00:28:50.874 [2024-07-15 16:10:19.506011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.874 [2024-07-15 16:10:19.506041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.874 qpair failed and we were unable to recover it. 00:28:50.874 [2024-07-15 16:10:19.506333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.874 [2024-07-15 16:10:19.506350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.874 qpair failed and we were unable to recover it. 00:28:50.874 [2024-07-15 16:10:19.506517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.874 [2024-07-15 16:10:19.506533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.874 qpair failed and we were unable to recover it. 00:28:50.874 [2024-07-15 16:10:19.506737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.874 [2024-07-15 16:10:19.506753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.874 qpair failed and we were unable to recover it. 00:28:50.874 [2024-07-15 16:10:19.506933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.874 [2024-07-15 16:10:19.506948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.874 qpair failed and we were unable to recover it. 00:28:50.874 [2024-07-15 16:10:19.507220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.874 [2024-07-15 16:10:19.507262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.874 qpair failed and we were unable to recover it. 00:28:50.874 [2024-07-15 16:10:19.507424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.874 [2024-07-15 16:10:19.507455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.874 qpair failed and we were unable to recover it. 00:28:50.874 [2024-07-15 16:10:19.507672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.874 [2024-07-15 16:10:19.507705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.874 qpair failed and we were unable to recover it. 00:28:50.874 [2024-07-15 16:10:19.507978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.874 [2024-07-15 16:10:19.508010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.874 qpair failed and we were unable to recover it. 00:28:50.874 [2024-07-15 16:10:19.508202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.874 [2024-07-15 16:10:19.508244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.874 qpair failed and we were unable to recover it. 
00:28:50.874 [2024-07-15 16:10:19.508487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.874 [2024-07-15 16:10:19.508502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.874 qpair failed and we were unable to recover it. 00:28:50.874 [2024-07-15 16:10:19.508668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.874 [2024-07-15 16:10:19.508685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.874 qpair failed and we were unable to recover it. 00:28:50.874 [2024-07-15 16:10:19.508784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.874 [2024-07-15 16:10:19.508798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.874 qpair failed and we were unable to recover it. 00:28:50.874 [2024-07-15 16:10:19.509007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.874 [2024-07-15 16:10:19.509023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.874 qpair failed and we were unable to recover it. 00:28:50.874 [2024-07-15 16:10:19.509189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.874 [2024-07-15 16:10:19.509205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.874 qpair failed and we were unable to recover it. 00:28:50.874 [2024-07-15 16:10:19.509312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.874 [2024-07-15 16:10:19.509327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.874 qpair failed and we were unable to recover it. 00:28:50.874 [2024-07-15 16:10:19.509467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.874 [2024-07-15 16:10:19.509498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.874 qpair failed and we were unable to recover it. 00:28:50.874 [2024-07-15 16:10:19.509742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.874 [2024-07-15 16:10:19.509772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.874 qpair failed and we were unable to recover it. 00:28:50.874 [2024-07-15 16:10:19.509884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.874 [2024-07-15 16:10:19.509915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.874 qpair failed and we were unable to recover it. 00:28:50.874 [2024-07-15 16:10:19.510208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.874 [2024-07-15 16:10:19.510253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.874 qpair failed and we were unable to recover it. 
00:28:50.874 [2024-07-15 16:10:19.510546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.874 [2024-07-15 16:10:19.510583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420
00:28:50.874 qpair failed and we were unable to recover it.
00:28:50.874 [... same connect() failed (errno = 111) / sock connection error / qpair failed sequence repeated 7 more times for tqpair=0x7ffa4c000b90, through 2024-07-15 16:10:19.512212 ...]
00:28:50.875 [2024-07-15 16:10:19.512380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.875 [2024-07-15 16:10:19.512421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420
00:28:50.875 qpair failed and we were unable to recover it.
00:28:50.875 [... same sequence repeated 200 more times for tqpair=0x1398ed0 ...]
00:28:50.880 [2024-07-15 16:10:19.560051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.880 [2024-07-15 16:10:19.560082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420
00:28:50.880 qpair failed and we were unable to recover it.
00:28:50.880 [2024-07-15 16:10:19.560306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.880 [2024-07-15 16:10:19.560338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.880 qpair failed and we were unable to recover it. 00:28:50.880 [2024-07-15 16:10:19.560538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.880 [2024-07-15 16:10:19.560570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.880 qpair failed and we were unable to recover it. 00:28:50.880 [2024-07-15 16:10:19.560713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.880 [2024-07-15 16:10:19.560743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.880 qpair failed and we were unable to recover it. 00:28:50.880 [2024-07-15 16:10:19.560964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.880 [2024-07-15 16:10:19.560994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.880 qpair failed and we were unable to recover it. 00:28:50.880 [2024-07-15 16:10:19.561208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.880 [2024-07-15 16:10:19.561247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.880 qpair failed and we were unable to recover it. 00:28:50.880 [2024-07-15 16:10:19.561488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.880 [2024-07-15 16:10:19.561503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.880 qpair failed and we were unable to recover it. 00:28:50.880 [2024-07-15 16:10:19.561675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.880 [2024-07-15 16:10:19.561691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.880 qpair failed and we were unable to recover it. 00:28:50.880 [2024-07-15 16:10:19.561792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.880 [2024-07-15 16:10:19.561824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.880 qpair failed and we were unable to recover it. 00:28:50.880 [2024-07-15 16:10:19.561958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.880 [2024-07-15 16:10:19.561989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.880 qpair failed and we were unable to recover it. 00:28:50.880 [2024-07-15 16:10:19.562133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.880 [2024-07-15 16:10:19.562164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.880 qpair failed and we were unable to recover it. 
00:28:50.880 [2024-07-15 16:10:19.562431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.880 [2024-07-15 16:10:19.562464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.880 qpair failed and we were unable to recover it. 00:28:50.880 [2024-07-15 16:10:19.562677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.880 [2024-07-15 16:10:19.562708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.880 qpair failed and we were unable to recover it. 00:28:50.880 [2024-07-15 16:10:19.562909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.880 [2024-07-15 16:10:19.562939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.880 qpair failed and we were unable to recover it. 00:28:50.880 [2024-07-15 16:10:19.563099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.880 [2024-07-15 16:10:19.563130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.880 qpair failed and we were unable to recover it. 00:28:50.880 [2024-07-15 16:10:19.563357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.880 [2024-07-15 16:10:19.563373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.880 qpair failed and we were unable to recover it. 00:28:50.880 [2024-07-15 16:10:19.563551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.880 [2024-07-15 16:10:19.563583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.880 qpair failed and we were unable to recover it. 00:28:50.880 [2024-07-15 16:10:19.563873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.880 [2024-07-15 16:10:19.563904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.880 qpair failed and we were unable to recover it. 00:28:50.880 [2024-07-15 16:10:19.564108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.880 [2024-07-15 16:10:19.564139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.880 qpair failed and we were unable to recover it. 00:28:50.880 [2024-07-15 16:10:19.564369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.880 [2024-07-15 16:10:19.564407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.880 qpair failed and we were unable to recover it. 00:28:50.880 [2024-07-15 16:10:19.564637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.880 [2024-07-15 16:10:19.564652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.880 qpair failed and we were unable to recover it. 
00:28:50.880 [2024-07-15 16:10:19.564899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.880 [2024-07-15 16:10:19.564914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.880 qpair failed and we were unable to recover it. 00:28:50.880 [2024-07-15 16:10:19.565136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.880 [2024-07-15 16:10:19.565151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.880 qpair failed and we were unable to recover it. 00:28:50.880 [2024-07-15 16:10:19.565273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.880 [2024-07-15 16:10:19.565289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.880 qpair failed and we were unable to recover it. 00:28:50.880 [2024-07-15 16:10:19.565459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.880 [2024-07-15 16:10:19.565492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.880 qpair failed and we were unable to recover it. 00:28:50.880 [2024-07-15 16:10:19.565702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.880 [2024-07-15 16:10:19.565732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.880 qpair failed and we were unable to recover it. 00:28:50.880 [2024-07-15 16:10:19.565932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.880 [2024-07-15 16:10:19.565963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.880 qpair failed and we were unable to recover it. 00:28:50.880 [2024-07-15 16:10:19.566171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.880 [2024-07-15 16:10:19.566203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.880 qpair failed and we were unable to recover it. 00:28:50.880 [2024-07-15 16:10:19.566364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.880 [2024-07-15 16:10:19.566380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.881 qpair failed and we were unable to recover it. 00:28:50.881 [2024-07-15 16:10:19.566496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.881 [2024-07-15 16:10:19.566511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.881 qpair failed and we were unable to recover it. 00:28:50.881 [2024-07-15 16:10:19.566706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.881 [2024-07-15 16:10:19.566722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.881 qpair failed and we were unable to recover it. 
00:28:50.881 [2024-07-15 16:10:19.566894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.881 [2024-07-15 16:10:19.566908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.881 qpair failed and we were unable to recover it. 00:28:50.881 [2024-07-15 16:10:19.567045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.881 [2024-07-15 16:10:19.567076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.881 qpair failed and we were unable to recover it. 00:28:50.881 [2024-07-15 16:10:19.567286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.881 [2024-07-15 16:10:19.567317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.881 qpair failed and we were unable to recover it. 00:28:50.881 [2024-07-15 16:10:19.567516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.881 [2024-07-15 16:10:19.567547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.881 qpair failed and we were unable to recover it. 00:28:50.881 [2024-07-15 16:10:19.567768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.881 [2024-07-15 16:10:19.567786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.881 qpair failed and we were unable to recover it. 00:28:50.881 [2024-07-15 16:10:19.567900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.881 [2024-07-15 16:10:19.567933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.881 qpair failed and we were unable to recover it. 00:28:50.881 [2024-07-15 16:10:19.568151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.881 [2024-07-15 16:10:19.568182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.881 qpair failed and we were unable to recover it. 00:28:50.881 [2024-07-15 16:10:19.568466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.881 [2024-07-15 16:10:19.568506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.881 qpair failed and we were unable to recover it. 00:28:50.881 [2024-07-15 16:10:19.568612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.881 [2024-07-15 16:10:19.568628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.881 qpair failed and we were unable to recover it. 00:28:50.881 [2024-07-15 16:10:19.568796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.881 [2024-07-15 16:10:19.568826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.881 qpair failed and we were unable to recover it. 
00:28:50.881 [2024-07-15 16:10:19.569040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.881 [2024-07-15 16:10:19.569070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.881 qpair failed and we were unable to recover it. 00:28:50.881 [2024-07-15 16:10:19.569256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.881 [2024-07-15 16:10:19.569288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.881 qpair failed and we were unable to recover it. 00:28:50.881 [2024-07-15 16:10:19.569496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.881 [2024-07-15 16:10:19.569511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.881 qpair failed and we were unable to recover it. 00:28:50.881 [2024-07-15 16:10:19.569680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.881 [2024-07-15 16:10:19.569711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.881 qpair failed and we were unable to recover it. 00:28:50.881 [2024-07-15 16:10:19.569979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.881 [2024-07-15 16:10:19.570010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.881 qpair failed and we were unable to recover it. 00:28:50.881 [2024-07-15 16:10:19.570275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.881 [2024-07-15 16:10:19.570308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.881 qpair failed and we were unable to recover it. 00:28:50.881 [2024-07-15 16:10:19.570508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.881 [2024-07-15 16:10:19.570539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.881 qpair failed and we were unable to recover it. 00:28:50.881 [2024-07-15 16:10:19.570805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.881 [2024-07-15 16:10:19.570836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.881 qpair failed and we were unable to recover it. 00:28:50.881 [2024-07-15 16:10:19.571108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.881 [2024-07-15 16:10:19.571140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.881 qpair failed and we were unable to recover it. 00:28:50.881 [2024-07-15 16:10:19.571302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.881 [2024-07-15 16:10:19.571334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.881 qpair failed and we were unable to recover it. 
00:28:50.881 [2024-07-15 16:10:19.571603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.881 [2024-07-15 16:10:19.571618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.881 qpair failed and we were unable to recover it. 00:28:50.881 [2024-07-15 16:10:19.571810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.881 [2024-07-15 16:10:19.571825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.881 qpair failed and we were unable to recover it. 00:28:50.881 [2024-07-15 16:10:19.572009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.881 [2024-07-15 16:10:19.572025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.881 qpair failed and we were unable to recover it. 00:28:50.881 [2024-07-15 16:10:19.572131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.881 [2024-07-15 16:10:19.572163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.881 qpair failed and we were unable to recover it. 00:28:50.881 [2024-07-15 16:10:19.572321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.881 [2024-07-15 16:10:19.572353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.881 qpair failed and we were unable to recover it. 00:28:50.881 [2024-07-15 16:10:19.572643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.881 [2024-07-15 16:10:19.572680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.881 qpair failed and we were unable to recover it. 00:28:50.881 [2024-07-15 16:10:19.572872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.881 [2024-07-15 16:10:19.572888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.881 qpair failed and we were unable to recover it. 00:28:50.881 [2024-07-15 16:10:19.573053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.881 [2024-07-15 16:10:19.573069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.881 qpair failed and we were unable to recover it. 00:28:50.881 [2024-07-15 16:10:19.573342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.881 [2024-07-15 16:10:19.573374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.881 qpair failed and we were unable to recover it. 00:28:50.881 [2024-07-15 16:10:19.573594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.881 [2024-07-15 16:10:19.573625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.881 qpair failed and we were unable to recover it. 
00:28:50.881 [2024-07-15 16:10:19.573842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.881 [2024-07-15 16:10:19.573857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.881 qpair failed and we were unable to recover it. 00:28:50.881 [2024-07-15 16:10:19.574062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.881 [2024-07-15 16:10:19.574079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.881 qpair failed and we were unable to recover it. 00:28:50.881 [2024-07-15 16:10:19.574264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.881 [2024-07-15 16:10:19.574280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.881 qpair failed and we were unable to recover it. 00:28:50.881 [2024-07-15 16:10:19.574451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.881 [2024-07-15 16:10:19.574482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.881 qpair failed and we were unable to recover it. 00:28:50.881 [2024-07-15 16:10:19.574750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.881 [2024-07-15 16:10:19.574782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.881 qpair failed and we were unable to recover it. 00:28:50.881 [2024-07-15 16:10:19.575100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.881 [2024-07-15 16:10:19.575131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.881 qpair failed and we were unable to recover it. 00:28:50.881 [2024-07-15 16:10:19.575366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.881 [2024-07-15 16:10:19.575398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.881 qpair failed and we were unable to recover it. 00:28:50.881 [2024-07-15 16:10:19.575680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.881 [2024-07-15 16:10:19.575711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.881 qpair failed and we were unable to recover it. 00:28:50.881 [2024-07-15 16:10:19.575917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.881 [2024-07-15 16:10:19.575948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.881 qpair failed and we were unable to recover it. 00:28:50.882 [2024-07-15 16:10:19.576187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.882 [2024-07-15 16:10:19.576228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.882 qpair failed and we were unable to recover it. 
00:28:50.882 [2024-07-15 16:10:19.576381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.882 [2024-07-15 16:10:19.576396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.882 qpair failed and we were unable to recover it. 00:28:50.882 [2024-07-15 16:10:19.576518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.882 [2024-07-15 16:10:19.576548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.882 qpair failed and we were unable to recover it. 00:28:50.882 [2024-07-15 16:10:19.576832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.882 [2024-07-15 16:10:19.576863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.882 qpair failed and we were unable to recover it. 00:28:50.882 [2024-07-15 16:10:19.577020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.882 [2024-07-15 16:10:19.577050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.882 qpair failed and we were unable to recover it. 00:28:50.882 [2024-07-15 16:10:19.577315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.882 [2024-07-15 16:10:19.577347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.882 qpair failed and we were unable to recover it. 00:28:50.882 [2024-07-15 16:10:19.577612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.882 [2024-07-15 16:10:19.577628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.882 qpair failed and we were unable to recover it. 00:28:50.882 [2024-07-15 16:10:19.577780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.882 [2024-07-15 16:10:19.577811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.882 qpair failed and we were unable to recover it. 00:28:50.882 [2024-07-15 16:10:19.578016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.882 [2024-07-15 16:10:19.578047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.882 qpair failed and we were unable to recover it. 00:28:50.882 [2024-07-15 16:10:19.578314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.882 [2024-07-15 16:10:19.578346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.882 qpair failed and we were unable to recover it. 00:28:50.882 [2024-07-15 16:10:19.578475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.882 [2024-07-15 16:10:19.578507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.882 qpair failed and we were unable to recover it. 
00:28:50.882 [2024-07-15 16:10:19.578651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.882 [2024-07-15 16:10:19.578681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.882 qpair failed and we were unable to recover it. 00:28:50.882 [2024-07-15 16:10:19.578905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.882 [2024-07-15 16:10:19.578937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.882 qpair failed and we were unable to recover it. 00:28:50.882 [2024-07-15 16:10:19.579202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.882 [2024-07-15 16:10:19.579257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.882 qpair failed and we were unable to recover it. 00:28:50.882 [2024-07-15 16:10:19.579513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.882 [2024-07-15 16:10:19.579553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.882 qpair failed and we were unable to recover it. 00:28:50.882 [2024-07-15 16:10:19.579846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.882 [2024-07-15 16:10:19.579876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.882 qpair failed and we were unable to recover it. 00:28:50.882 [2024-07-15 16:10:19.580091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.882 [2024-07-15 16:10:19.580122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.882 qpair failed and we were unable to recover it. 00:28:50.882 [2024-07-15 16:10:19.580392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.882 [2024-07-15 16:10:19.580425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.882 qpair failed and we were unable to recover it. 00:28:50.882 [2024-07-15 16:10:19.580634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.882 [2024-07-15 16:10:19.580666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.882 qpair failed and we were unable to recover it. 00:28:50.882 [2024-07-15 16:10:19.580900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.882 [2024-07-15 16:10:19.580931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.882 qpair failed and we were unable to recover it. 00:28:50.882 [2024-07-15 16:10:19.581145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.882 [2024-07-15 16:10:19.581177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.882 qpair failed and we were unable to recover it. 
00:28:50.882 [2024-07-15 16:10:19.581387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.882 [2024-07-15 16:10:19.581403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.882 qpair failed and we were unable to recover it. 00:28:50.882 [2024-07-15 16:10:19.581585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.882 [2024-07-15 16:10:19.581616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.882 qpair failed and we were unable to recover it. 00:28:50.882 [2024-07-15 16:10:19.581829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.882 [2024-07-15 16:10:19.581860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.882 qpair failed and we were unable to recover it. 00:28:50.882 [2024-07-15 16:10:19.582095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.882 [2024-07-15 16:10:19.582126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.882 qpair failed and we were unable to recover it. 00:28:50.882 [2024-07-15 16:10:19.582383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.882 [2024-07-15 16:10:19.582399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.882 qpair failed and we were unable to recover it. 00:28:50.882 [2024-07-15 16:10:19.582562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.882 [2024-07-15 16:10:19.582577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.882 qpair failed and we were unable to recover it. 00:28:50.882 [2024-07-15 16:10:19.582691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.882 [2024-07-15 16:10:19.582721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.882 qpair failed and we were unable to recover it. 00:28:50.882 [2024-07-15 16:10:19.582889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.882 [2024-07-15 16:10:19.582921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.882 qpair failed and we were unable to recover it. 00:28:50.882 [2024-07-15 16:10:19.583079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.882 [2024-07-15 16:10:19.583109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.882 qpair failed and we were unable to recover it. 00:28:50.882 [2024-07-15 16:10:19.583253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.882 [2024-07-15 16:10:19.583293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.882 qpair failed and we were unable to recover it. 
00:28:50.882 [2024-07-15 16:10:19.583538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.882 [2024-07-15 16:10:19.583554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.882 qpair failed and we were unable to recover it. 00:28:50.882 [2024-07-15 16:10:19.583740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.882 [2024-07-15 16:10:19.583756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.882 qpair failed and we were unable to recover it. 00:28:50.882 [2024-07-15 16:10:19.583923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.882 [2024-07-15 16:10:19.583939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.882 qpair failed and we were unable to recover it. 00:28:50.882 [2024-07-15 16:10:19.584116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.882 [2024-07-15 16:10:19.584147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.882 qpair failed and we were unable to recover it. 00:28:50.882 [2024-07-15 16:10:19.584296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.882 [2024-07-15 16:10:19.584328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.882 qpair failed and we were unable to recover it. 00:28:50.882 [2024-07-15 16:10:19.584602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.882 [2024-07-15 16:10:19.584634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.882 qpair failed and we were unable to recover it. 00:28:50.882 [2024-07-15 16:10:19.584760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.882 [2024-07-15 16:10:19.584775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.882 qpair failed and we were unable to recover it. 00:28:50.882 [2024-07-15 16:10:19.584953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.882 [2024-07-15 16:10:19.584984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.882 qpair failed and we were unable to recover it. 00:28:50.882 [2024-07-15 16:10:19.585182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.882 [2024-07-15 16:10:19.585212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.882 qpair failed and we were unable to recover it. 00:28:50.882 [2024-07-15 16:10:19.585352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.882 [2024-07-15 16:10:19.585383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.882 qpair failed and we were unable to recover it. 
00:28:50.882 [2024-07-15 16:10:19.585589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.883 [2024-07-15 16:10:19.585604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.883 qpair failed and we were unable to recover it. 00:28:50.883 [2024-07-15 16:10:19.585801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.883 [2024-07-15 16:10:19.585832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.883 qpair failed and we were unable to recover it. 00:28:50.883 [2024-07-15 16:10:19.585981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.883 [2024-07-15 16:10:19.586012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.883 qpair failed and we were unable to recover it. 00:28:50.883 [2024-07-15 16:10:19.586238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.883 [2024-07-15 16:10:19.586271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.883 qpair failed and we were unable to recover it. 00:28:50.883 [2024-07-15 16:10:19.586462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.883 [2024-07-15 16:10:19.586478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.883 qpair failed and we were unable to recover it. 00:28:50.883 [2024-07-15 16:10:19.586714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.883 [2024-07-15 16:10:19.586745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.883 qpair failed and we were unable to recover it. 00:28:50.883 [2024-07-15 16:10:19.586962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.883 [2024-07-15 16:10:19.586993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.883 qpair failed and we were unable to recover it. 00:28:50.883 [2024-07-15 16:10:19.587205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.883 [2024-07-15 16:10:19.587262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.883 qpair failed and we were unable to recover it. 00:28:50.883 [2024-07-15 16:10:19.587454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.883 [2024-07-15 16:10:19.587470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.883 qpair failed and we were unable to recover it. 00:28:50.883 [2024-07-15 16:10:19.587590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.883 [2024-07-15 16:10:19.587605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.883 qpair failed and we were unable to recover it. 
00:28:50.883 [2024-07-15 16:10:19.587810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.883 [2024-07-15 16:10:19.587841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.883 qpair failed and we were unable to recover it. 00:28:50.883 [2024-07-15 16:10:19.588042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.883 [2024-07-15 16:10:19.588072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.883 qpair failed and we were unable to recover it. 00:28:50.883 [2024-07-15 16:10:19.588239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.883 [2024-07-15 16:10:19.588271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.883 qpair failed and we were unable to recover it. 00:28:50.883 [2024-07-15 16:10:19.588564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.883 [2024-07-15 16:10:19.588595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.883 qpair failed and we were unable to recover it. 00:28:50.883 [2024-07-15 16:10:19.588809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.883 [2024-07-15 16:10:19.588842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.883 qpair failed and we were unable to recover it. 00:28:50.883 [2024-07-15 16:10:19.589075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.883 [2024-07-15 16:10:19.589106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.883 qpair failed and we were unable to recover it. 00:28:50.883 [2024-07-15 16:10:19.589337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.883 [2024-07-15 16:10:19.589370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.883 qpair failed and we were unable to recover it. 00:28:50.883 [2024-07-15 16:10:19.589659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.883 [2024-07-15 16:10:19.589691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.883 qpair failed and we were unable to recover it. 00:28:50.883 [2024-07-15 16:10:19.589892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.883 [2024-07-15 16:10:19.589924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.883 qpair failed and we were unable to recover it. 00:28:50.883 [2024-07-15 16:10:19.590136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.883 [2024-07-15 16:10:19.590173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.883 qpair failed and we were unable to recover it. 
00:28:50.883 [2024-07-15 16:10:19.590328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.883 [2024-07-15 16:10:19.590360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.883 qpair failed and we were unable to recover it. 00:28:50.883 [2024-07-15 16:10:19.590572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.883 [2024-07-15 16:10:19.590603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.883 qpair failed and we were unable to recover it. 00:28:50.883 [2024-07-15 16:10:19.590749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.883 [2024-07-15 16:10:19.590789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.883 qpair failed and we were unable to recover it. 00:28:50.883 [2024-07-15 16:10:19.590897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.883 [2024-07-15 16:10:19.590911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.883 qpair failed and we were unable to recover it. 00:28:50.883 [2024-07-15 16:10:19.591098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.883 [2024-07-15 16:10:19.591129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.883 qpair failed and we were unable to recover it. 00:28:50.883 [2024-07-15 16:10:19.591405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.883 [2024-07-15 16:10:19.591437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.883 qpair failed and we were unable to recover it. 00:28:50.883 [2024-07-15 16:10:19.591573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.883 [2024-07-15 16:10:19.591604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.883 qpair failed and we were unable to recover it. 00:28:50.883 [2024-07-15 16:10:19.591753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.883 [2024-07-15 16:10:19.591770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.883 qpair failed and we were unable to recover it. 00:28:50.883 [2024-07-15 16:10:19.591885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.883 [2024-07-15 16:10:19.591916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.883 qpair failed and we were unable to recover it. 00:28:50.883 [2024-07-15 16:10:19.592138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.883 [2024-07-15 16:10:19.592170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.883 qpair failed and we were unable to recover it. 
00:28:50.883 [2024-07-15 16:10:19.592373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.883 [2024-07-15 16:10:19.592405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.883 qpair failed and we were unable to recover it. 00:28:50.883 [2024-07-15 16:10:19.592563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.883 [2024-07-15 16:10:19.592579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.883 qpair failed and we were unable to recover it. 00:28:50.883 [2024-07-15 16:10:19.592708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.883 [2024-07-15 16:10:19.592723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.883 qpair failed and we were unable to recover it. 00:28:50.883 [2024-07-15 16:10:19.592829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.883 [2024-07-15 16:10:19.592845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.883 qpair failed and we were unable to recover it. 00:28:50.883 [2024-07-15 16:10:19.593032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.883 [2024-07-15 16:10:19.593064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.883 qpair failed and we were unable to recover it. 00:28:50.883 [2024-07-15 16:10:19.593262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.883 [2024-07-15 16:10:19.593294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.883 qpair failed and we were unable to recover it. 00:28:50.883 [2024-07-15 16:10:19.593451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.883 [2024-07-15 16:10:19.593481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.883 qpair failed and we were unable to recover it. 00:28:50.883 [2024-07-15 16:10:19.593634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.883 [2024-07-15 16:10:19.593666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.883 qpair failed and we were unable to recover it. 00:28:50.883 [2024-07-15 16:10:19.593929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.883 [2024-07-15 16:10:19.593960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.883 qpair failed and we were unable to recover it. 00:28:50.883 [2024-07-15 16:10:19.594233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.883 [2024-07-15 16:10:19.594265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.883 qpair failed and we were unable to recover it. 
00:28:50.883 [2024-07-15 16:10:19.594414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.883 [2024-07-15 16:10:19.594444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.883 qpair failed and we were unable to recover it. 00:28:50.883 [2024-07-15 16:10:19.594715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.884 [2024-07-15 16:10:19.594746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.884 qpair failed and we were unable to recover it. 00:28:50.884 [2024-07-15 16:10:19.594904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.884 [2024-07-15 16:10:19.594935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.884 qpair failed and we were unable to recover it. 00:28:50.884 [2024-07-15 16:10:19.595219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.884 [2024-07-15 16:10:19.595274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.884 qpair failed and we were unable to recover it. 00:28:50.884 [2024-07-15 16:10:19.595400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.884 [2024-07-15 16:10:19.595415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.884 qpair failed and we were unable to recover it. 00:28:50.884 [2024-07-15 16:10:19.595549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.884 [2024-07-15 16:10:19.595565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.884 qpair failed and we were unable to recover it. 00:28:50.884 [2024-07-15 16:10:19.595729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.884 [2024-07-15 16:10:19.595750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.884 qpair failed and we were unable to recover it. 00:28:50.884 [2024-07-15 16:10:19.595869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.884 [2024-07-15 16:10:19.595901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.884 qpair failed and we were unable to recover it. 00:28:50.884 [2024-07-15 16:10:19.596102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.884 [2024-07-15 16:10:19.596132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.884 qpair failed and we were unable to recover it. 00:28:50.884 [2024-07-15 16:10:19.596273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.884 [2024-07-15 16:10:19.596304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.884 qpair failed and we were unable to recover it. 
00:28:50.884 [2024-07-15 16:10:19.596432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.884 [2024-07-15 16:10:19.596464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.884 qpair failed and we were unable to recover it. 00:28:50.884 [2024-07-15 16:10:19.596612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.884 [2024-07-15 16:10:19.596643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.884 qpair failed and we were unable to recover it. 00:28:50.884 [2024-07-15 16:10:19.596854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.884 [2024-07-15 16:10:19.596884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.884 qpair failed and we were unable to recover it. 00:28:50.884 [2024-07-15 16:10:19.597099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.884 [2024-07-15 16:10:19.597129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.884 qpair failed and we were unable to recover it. 00:28:50.884 [2024-07-15 16:10:19.597334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.884 [2024-07-15 16:10:19.597366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.884 qpair failed and we were unable to recover it. 00:28:50.884 [2024-07-15 16:10:19.597513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.884 [2024-07-15 16:10:19.597544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.884 qpair failed and we were unable to recover it. 00:28:50.884 [2024-07-15 16:10:19.597748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.884 [2024-07-15 16:10:19.597779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.884 qpair failed and we were unable to recover it. 00:28:50.884 [2024-07-15 16:10:19.597976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.884 [2024-07-15 16:10:19.598008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.884 qpair failed and we were unable to recover it. 00:28:50.884 [2024-07-15 16:10:19.598208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.884 [2024-07-15 16:10:19.598247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.884 qpair failed and we were unable to recover it. 00:28:50.884 [2024-07-15 16:10:19.598441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.884 [2024-07-15 16:10:19.598457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.884 qpair failed and we were unable to recover it. 
00:28:50.884 [2024-07-15 16:10:19.598614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.884 [2024-07-15 16:10:19.598643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.884 qpair failed and we were unable to recover it. 00:28:50.884 [2024-07-15 16:10:19.598823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.884 [2024-07-15 16:10:19.598855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.884 qpair failed and we were unable to recover it. 00:28:50.884 [2024-07-15 16:10:19.599000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.884 [2024-07-15 16:10:19.599031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.884 qpair failed and we were unable to recover it. 00:28:50.884 [2024-07-15 16:10:19.599261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.884 [2024-07-15 16:10:19.599293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.884 qpair failed and we were unable to recover it. 00:28:50.884 [2024-07-15 16:10:19.599426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.884 [2024-07-15 16:10:19.599457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.884 qpair failed and we were unable to recover it. 00:28:50.884 [2024-07-15 16:10:19.599697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.884 [2024-07-15 16:10:19.599727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.884 qpair failed and we were unable to recover it. 00:28:50.884 [2024-07-15 16:10:19.599877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.884 [2024-07-15 16:10:19.599907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.884 qpair failed and we were unable to recover it. 00:28:50.884 [2024-07-15 16:10:19.600123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.884 [2024-07-15 16:10:19.600154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.884 qpair failed and we were unable to recover it. 00:28:50.884 [2024-07-15 16:10:19.600352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.884 [2024-07-15 16:10:19.600383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.884 qpair failed and we were unable to recover it. 00:28:50.884 [2024-07-15 16:10:19.600584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.884 [2024-07-15 16:10:19.600615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.884 qpair failed and we were unable to recover it. 
00:28:50.884 [2024-07-15 16:10:19.600834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.884 [2024-07-15 16:10:19.600865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.884 qpair failed and we were unable to recover it. 00:28:50.884 [2024-07-15 16:10:19.600999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.884 [2024-07-15 16:10:19.601029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.884 qpair failed and we were unable to recover it. 00:28:50.884 [2024-07-15 16:10:19.601247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.884 [2024-07-15 16:10:19.601279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.884 qpair failed and we were unable to recover it. 00:28:50.884 [2024-07-15 16:10:19.601493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.884 [2024-07-15 16:10:19.601529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.884 qpair failed and we were unable to recover it. 00:28:50.884 [2024-07-15 16:10:19.601744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.884 [2024-07-15 16:10:19.601775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.884 qpair failed and we were unable to recover it. 00:28:50.884 [2024-07-15 16:10:19.601982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.884 [2024-07-15 16:10:19.602013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.884 qpair failed and we were unable to recover it. 00:28:50.884 [2024-07-15 16:10:19.602237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.884 [2024-07-15 16:10:19.602270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.884 qpair failed and we were unable to recover it. 00:28:50.884 [2024-07-15 16:10:19.602431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.885 [2024-07-15 16:10:19.602446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.885 qpair failed and we were unable to recover it. 00:28:50.885 [2024-07-15 16:10:19.602550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.885 [2024-07-15 16:10:19.602582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.885 qpair failed and we were unable to recover it. 00:28:50.885 [2024-07-15 16:10:19.602782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.885 [2024-07-15 16:10:19.602812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.885 qpair failed and we were unable to recover it. 
00:28:50.885 [2024-07-15 16:10:19.602965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.885 [2024-07-15 16:10:19.602995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.885 qpair failed and we were unable to recover it. 00:28:50.885 [2024-07-15 16:10:19.603139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.885 [2024-07-15 16:10:19.603172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.885 qpair failed and we were unable to recover it. 00:28:50.885 [2024-07-15 16:10:19.603350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.885 [2024-07-15 16:10:19.603365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.885 qpair failed and we were unable to recover it. 00:28:50.885 [2024-07-15 16:10:19.603465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.885 [2024-07-15 16:10:19.603479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.885 qpair failed and we were unable to recover it. 00:28:50.885 [2024-07-15 16:10:19.603591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.885 [2024-07-15 16:10:19.603607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.885 qpair failed and we were unable to recover it. 00:28:50.885 [2024-07-15 16:10:19.603835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.885 [2024-07-15 16:10:19.603850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.885 qpair failed and we were unable to recover it. 00:28:50.885 [2024-07-15 16:10:19.603947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.885 [2024-07-15 16:10:19.603960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.885 qpair failed and we were unable to recover it. 00:28:50.885 [2024-07-15 16:10:19.604218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.885 [2024-07-15 16:10:19.604260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.885 qpair failed and we were unable to recover it. 00:28:50.885 [2024-07-15 16:10:19.604440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.885 [2024-07-15 16:10:19.604475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.885 qpair failed and we were unable to recover it. 00:28:50.885 [2024-07-15 16:10:19.604612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.885 [2024-07-15 16:10:19.604644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.885 qpair failed and we were unable to recover it. 
00:28:50.885 [2024-07-15 16:10:19.604861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.885 [2024-07-15 16:10:19.604893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.885 qpair failed and we were unable to recover it. 00:28:50.885 [2024-07-15 16:10:19.605110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.885 [2024-07-15 16:10:19.605142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.885 qpair failed and we were unable to recover it. 00:28:50.885 [2024-07-15 16:10:19.605324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.885 [2024-07-15 16:10:19.605357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.885 qpair failed and we were unable to recover it. 00:28:50.885 [2024-07-15 16:10:19.605473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.885 [2024-07-15 16:10:19.605504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.885 qpair failed and we were unable to recover it. 00:28:50.885 [2024-07-15 16:10:19.605657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.885 [2024-07-15 16:10:19.605689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.885 qpair failed and we were unable to recover it. 00:28:50.885 [2024-07-15 16:10:19.605900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.885 [2024-07-15 16:10:19.605917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.885 qpair failed and we were unable to recover it. 00:28:50.885 [2024-07-15 16:10:19.606047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.885 [2024-07-15 16:10:19.606079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.885 qpair failed and we were unable to recover it. 00:28:50.885 [2024-07-15 16:10:19.606215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.885 [2024-07-15 16:10:19.606260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.885 qpair failed and we were unable to recover it. 00:28:50.885 [2024-07-15 16:10:19.606466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.885 [2024-07-15 16:10:19.606499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.885 qpair failed and we were unable to recover it. 00:28:50.885 [2024-07-15 16:10:19.606709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.885 [2024-07-15 16:10:19.606725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.885 qpair failed and we were unable to recover it. 
00:28:50.885 [2024-07-15 16:10:19.606891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.885 [2024-07-15 16:10:19.606932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.885 qpair failed and we were unable to recover it. 00:28:50.885 [2024-07-15 16:10:19.607200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.885 [2024-07-15 16:10:19.607242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.885 qpair failed and we were unable to recover it. 00:28:50.885 [2024-07-15 16:10:19.607394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.885 [2024-07-15 16:10:19.607426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.885 qpair failed and we were unable to recover it. 00:28:50.885 [2024-07-15 16:10:19.607637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.885 [2024-07-15 16:10:19.607652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.885 qpair failed and we were unable to recover it. 00:28:50.885 [2024-07-15 16:10:19.607831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.885 [2024-07-15 16:10:19.607846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.885 qpair failed and we were unable to recover it. 00:28:50.885 [2024-07-15 16:10:19.608010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.885 [2024-07-15 16:10:19.608043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.885 qpair failed and we were unable to recover it. 00:28:50.885 [2024-07-15 16:10:19.608245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.885 [2024-07-15 16:10:19.608277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.885 qpair failed and we were unable to recover it. 00:28:50.885 [2024-07-15 16:10:19.608492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.885 [2024-07-15 16:10:19.608529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.885 qpair failed and we were unable to recover it. 00:28:50.885 [2024-07-15 16:10:19.608694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.885 [2024-07-15 16:10:19.608710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.885 qpair failed and we were unable to recover it. 00:28:50.885 [2024-07-15 16:10:19.608801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.885 [2024-07-15 16:10:19.608815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.885 qpair failed and we were unable to recover it. 
00:28:50.885 [2024-07-15 16:10:19.608939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.885 [2024-07-15 16:10:19.608968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:50.885 qpair failed and we were unable to recover it. 00:28:50.885 [2024-07-15 16:10:19.609204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.885 [2024-07-15 16:10:19.609250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.885 qpair failed and we were unable to recover it. 00:28:50.885 [2024-07-15 16:10:19.609411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.885 [2024-07-15 16:10:19.609443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.885 qpair failed and we were unable to recover it. 00:28:50.885 [2024-07-15 16:10:19.609600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.885 [2024-07-15 16:10:19.609631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.885 qpair failed and we were unable to recover it. 00:28:50.885 [2024-07-15 16:10:19.609802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.885 [2024-07-15 16:10:19.609818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.885 qpair failed and we were unable to recover it. 00:28:50.885 [2024-07-15 16:10:19.609923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.885 [2024-07-15 16:10:19.609938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.885 qpair failed and we were unable to recover it. 00:28:50.885 [2024-07-15 16:10:19.610108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.885 [2024-07-15 16:10:19.610124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.885 qpair failed and we were unable to recover it. 00:28:50.885 [2024-07-15 16:10:19.610366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.885 [2024-07-15 16:10:19.610399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.885 qpair failed and we were unable to recover it. 00:28:50.885 [2024-07-15 16:10:19.610683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.885 [2024-07-15 16:10:19.610715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.886 qpair failed and we were unable to recover it. 00:28:50.886 [2024-07-15 16:10:19.610861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.886 [2024-07-15 16:10:19.610891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.886 qpair failed and we were unable to recover it. 
00:28:50.886 [2024-07-15 16:10:19.611072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.886 [2024-07-15 16:10:19.611101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.886 qpair failed and we were unable to recover it. 00:28:50.886 [2024-07-15 16:10:19.611379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.886 [2024-07-15 16:10:19.611411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.886 qpair failed and we were unable to recover it. 00:28:50.886 [2024-07-15 16:10:19.611636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.886 [2024-07-15 16:10:19.611667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.886 qpair failed and we were unable to recover it. 00:28:50.886 [2024-07-15 16:10:19.611950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.886 [2024-07-15 16:10:19.611981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.886 qpair failed and we were unable to recover it. 00:28:50.886 [2024-07-15 16:10:19.612247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.886 [2024-07-15 16:10:19.612279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.886 qpair failed and we were unable to recover it. 00:28:50.886 [2024-07-15 16:10:19.612499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.886 [2024-07-15 16:10:19.612530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.886 qpair failed and we were unable to recover it. 00:28:50.886 [2024-07-15 16:10:19.612792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.886 [2024-07-15 16:10:19.612807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.886 qpair failed and we were unable to recover it. 00:28:50.886 [2024-07-15 16:10:19.613039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.886 [2024-07-15 16:10:19.613057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.886 qpair failed and we were unable to recover it. 00:28:50.886 [2024-07-15 16:10:19.613239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.886 [2024-07-15 16:10:19.613255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.886 qpair failed and we were unable to recover it. 00:28:50.886 [2024-07-15 16:10:19.613370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.886 [2024-07-15 16:10:19.613385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.886 qpair failed and we were unable to recover it. 
00:28:50.886 [2024-07-15 16:10:19.613547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.886 [2024-07-15 16:10:19.613563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.886 qpair failed and we were unable to recover it. 00:28:50.886 [2024-07-15 16:10:19.613719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.886 [2024-07-15 16:10:19.613735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.886 qpair failed and we were unable to recover it. 00:28:50.886 [2024-07-15 16:10:19.613914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.886 [2024-07-15 16:10:19.613943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.886 qpair failed and we were unable to recover it. 00:28:50.886 [2024-07-15 16:10:19.614159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.886 [2024-07-15 16:10:19.614190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.886 qpair failed and we were unable to recover it. 00:28:50.886 [2024-07-15 16:10:19.614340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.886 [2024-07-15 16:10:19.614372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.886 qpair failed and we were unable to recover it. 00:28:50.886 [2024-07-15 16:10:19.614703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.886 [2024-07-15 16:10:19.614735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.886 qpair failed and we were unable to recover it. 00:28:50.886 [2024-07-15 16:10:19.615001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.886 [2024-07-15 16:10:19.615032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.886 qpair failed and we were unable to recover it. 00:28:50.886 [2024-07-15 16:10:19.615257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.886 [2024-07-15 16:10:19.615291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.886 qpair failed and we were unable to recover it. 00:28:50.886 [2024-07-15 16:10:19.615492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.886 [2024-07-15 16:10:19.615523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.886 qpair failed and we were unable to recover it. 00:28:50.886 [2024-07-15 16:10:19.615731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.886 [2024-07-15 16:10:19.615746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.886 qpair failed and we were unable to recover it. 
00:28:50.886 [2024-07-15 16:10:19.615980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.886 [2024-07-15 16:10:19.616011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.886 qpair failed and we were unable to recover it. 00:28:50.886 [2024-07-15 16:10:19.616196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.886 [2024-07-15 16:10:19.616238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.886 qpair failed and we were unable to recover it. 00:28:50.886 [2024-07-15 16:10:19.616374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.886 [2024-07-15 16:10:19.616412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.886 qpair failed and we were unable to recover it. 00:28:50.886 [2024-07-15 16:10:19.616701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.886 [2024-07-15 16:10:19.616733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.886 qpair failed and we were unable to recover it. 00:28:50.886 [2024-07-15 16:10:19.616878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.886 [2024-07-15 16:10:19.616908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.886 qpair failed and we were unable to recover it. 00:28:50.886 [2024-07-15 16:10:19.617060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.886 [2024-07-15 16:10:19.617090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.886 qpair failed and we were unable to recover it. 00:28:50.886 [2024-07-15 16:10:19.617294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.886 [2024-07-15 16:10:19.617327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.886 qpair failed and we were unable to recover it. 00:28:50.886 [2024-07-15 16:10:19.617484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.886 [2024-07-15 16:10:19.617515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.886 qpair failed and we were unable to recover it. 00:28:50.886 [2024-07-15 16:10:19.617741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.886 [2024-07-15 16:10:19.617772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.886 qpair failed and we were unable to recover it. 00:28:50.886 [2024-07-15 16:10:19.617994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.886 [2024-07-15 16:10:19.618025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.886 qpair failed and we were unable to recover it. 
00:28:50.886 [2024-07-15 16:10:19.618211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.886 [2024-07-15 16:10:19.618257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.886 qpair failed and we were unable to recover it. 00:28:50.886 [2024-07-15 16:10:19.618459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.886 [2024-07-15 16:10:19.618474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.886 qpair failed and we were unable to recover it. 00:28:50.886 [2024-07-15 16:10:19.618723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.886 [2024-07-15 16:10:19.618754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.886 qpair failed and we were unable to recover it. 00:28:50.886 [2024-07-15 16:10:19.618899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.886 [2024-07-15 16:10:19.618930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.886 qpair failed and we were unable to recover it. 00:28:50.886 [2024-07-15 16:10:19.619138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.886 [2024-07-15 16:10:19.619174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.886 qpair failed and we were unable to recover it. 00:28:50.886 [2024-07-15 16:10:19.619381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.886 [2024-07-15 16:10:19.619412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.886 qpair failed and we were unable to recover it. 00:28:50.886 [2024-07-15 16:10:19.619633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.886 [2024-07-15 16:10:19.619663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.886 qpair failed and we were unable to recover it. 00:28:50.886 [2024-07-15 16:10:19.619784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.886 [2024-07-15 16:10:19.619799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.886 qpair failed and we were unable to recover it. 00:28:50.886 [2024-07-15 16:10:19.619960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.886 [2024-07-15 16:10:19.619976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.886 qpair failed and we were unable to recover it. 00:28:50.886 [2024-07-15 16:10:19.620239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.886 [2024-07-15 16:10:19.620271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.886 qpair failed and we were unable to recover it. 
00:28:50.887 [2024-07-15 16:10:19.620539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.887 [2024-07-15 16:10:19.620570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.887 qpair failed and we were unable to recover it. 00:28:50.887 [2024-07-15 16:10:19.620716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.887 [2024-07-15 16:10:19.620747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.887 qpair failed and we were unable to recover it. 00:28:50.887 [2024-07-15 16:10:19.620892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.887 [2024-07-15 16:10:19.620908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.887 qpair failed and we were unable to recover it. 00:28:50.887 [2024-07-15 16:10:19.621079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.887 [2024-07-15 16:10:19.621110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.887 qpair failed and we were unable to recover it. 00:28:50.887 [2024-07-15 16:10:19.621273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.887 [2024-07-15 16:10:19.621305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.887 qpair failed and we were unable to recover it. 00:28:50.887 [2024-07-15 16:10:19.621459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.887 [2024-07-15 16:10:19.621491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.887 qpair failed and we were unable to recover it. 00:28:50.887 [2024-07-15 16:10:19.621640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.887 [2024-07-15 16:10:19.621672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.887 qpair failed and we were unable to recover it. 00:28:50.887 [2024-07-15 16:10:19.621930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.887 [2024-07-15 16:10:19.621946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.887 qpair failed and we were unable to recover it. 00:28:50.887 [2024-07-15 16:10:19.622106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.887 [2024-07-15 16:10:19.622121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.887 qpair failed and we were unable to recover it. 00:28:50.887 [2024-07-15 16:10:19.622296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.887 [2024-07-15 16:10:19.622328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.887 qpair failed and we were unable to recover it. 
00:28:50.887 [2024-07-15 16:10:19.622594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.887 [2024-07-15 16:10:19.622625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.887 qpair failed and we were unable to recover it. 00:28:50.887 [2024-07-15 16:10:19.622773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.887 [2024-07-15 16:10:19.622804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.887 qpair failed and we were unable to recover it. 00:28:50.887 [2024-07-15 16:10:19.623019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.887 [2024-07-15 16:10:19.623050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.887 qpair failed and we were unable to recover it. 00:28:50.887 [2024-07-15 16:10:19.623283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.887 [2024-07-15 16:10:19.623315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.887 qpair failed and we were unable to recover it. 00:28:50.887 [2024-07-15 16:10:19.623480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.887 [2024-07-15 16:10:19.623512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.887 qpair failed and we were unable to recover it. 00:28:50.887 [2024-07-15 16:10:19.623618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.887 [2024-07-15 16:10:19.623634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.887 qpair failed and we were unable to recover it. 00:28:50.887 [2024-07-15 16:10:19.623834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.887 [2024-07-15 16:10:19.623850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.887 qpair failed and we were unable to recover it. 00:28:50.887 [2024-07-15 16:10:19.623980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.887 [2024-07-15 16:10:19.623996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.887 qpair failed and we were unable to recover it. 00:28:50.887 [2024-07-15 16:10:19.624258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.887 [2024-07-15 16:10:19.624290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.887 qpair failed and we were unable to recover it. 00:28:50.887 [2024-07-15 16:10:19.624440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.887 [2024-07-15 16:10:19.624471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.887 qpair failed and we were unable to recover it. 
00:28:50.887 [2024-07-15 16:10:19.624610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.887 [2024-07-15 16:10:19.624641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.887 qpair failed and we were unable to recover it. 00:28:50.887 [2024-07-15 16:10:19.624845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.887 [2024-07-15 16:10:19.624860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.887 qpair failed and we were unable to recover it. 00:28:50.887 [2024-07-15 16:10:19.625032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.887 [2024-07-15 16:10:19.625062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.887 qpair failed and we were unable to recover it. 00:28:50.887 [2024-07-15 16:10:19.625256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.887 [2024-07-15 16:10:19.625290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.887 qpair failed and we were unable to recover it. 00:28:50.887 [2024-07-15 16:10:19.625553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.887 [2024-07-15 16:10:19.625583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.887 qpair failed and we were unable to recover it. 00:28:50.887 [2024-07-15 16:10:19.625730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.887 [2024-07-15 16:10:19.625761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.887 qpair failed and we were unable to recover it. 00:28:50.887 [2024-07-15 16:10:19.625901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.887 [2024-07-15 16:10:19.625931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.887 qpair failed and we were unable to recover it. 00:28:50.887 [2024-07-15 16:10:19.626128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.887 [2024-07-15 16:10:19.626159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.887 qpair failed and we were unable to recover it. 00:28:50.887 [2024-07-15 16:10:19.626441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.887 [2024-07-15 16:10:19.626473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.887 qpair failed and we were unable to recover it. 00:28:50.887 [2024-07-15 16:10:19.626624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.887 [2024-07-15 16:10:19.626662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.887 qpair failed and we were unable to recover it. 
00:28:50.887 [2024-07-15 16:10:19.626840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.887 [2024-07-15 16:10:19.626855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.887 qpair failed and we were unable to recover it. 00:28:50.887 [2024-07-15 16:10:19.627066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.887 [2024-07-15 16:10:19.627082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.887 qpair failed and we were unable to recover it. 00:28:50.887 [2024-07-15 16:10:19.627266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.887 [2024-07-15 16:10:19.627298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.887 qpair failed and we were unable to recover it. 00:28:50.887 [2024-07-15 16:10:19.627444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.887 [2024-07-15 16:10:19.627475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.887 qpair failed and we were unable to recover it. 00:28:50.887 [2024-07-15 16:10:19.627624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.887 [2024-07-15 16:10:19.627655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.887 qpair failed and we were unable to recover it. 00:28:50.887 [2024-07-15 16:10:19.627865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.887 [2024-07-15 16:10:19.627933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:50.887 qpair failed and we were unable to recover it. 00:28:50.887 [2024-07-15 16:10:19.628246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.887 [2024-07-15 16:10:19.628283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:50.887 qpair failed and we were unable to recover it. 00:28:50.887 [2024-07-15 16:10:19.628437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.887 [2024-07-15 16:10:19.628469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:50.887 qpair failed and we were unable to recover it. 00:28:50.887 [2024-07-15 16:10:19.628692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.887 [2024-07-15 16:10:19.628724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:50.887 qpair failed and we were unable to recover it. 00:28:50.887 [2024-07-15 16:10:19.628863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.887 [2024-07-15 16:10:19.628875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:50.887 qpair failed and we were unable to recover it. 
00:28:50.892 [2024-07-15 16:10:19.665165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.892 [2024-07-15 16:10:19.665195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420
00:28:50.892 qpair failed and we were unable to recover it.
00:28:50.892 [2024-07-15 16:10:19.665500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.892 [2024-07-15 16:10:19.665570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420
00:28:50.892 qpair failed and we were unable to recover it.
00:28:50.892 [2024-07-15 16:10:19.665887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.892 [2024-07-15 16:10:19.665922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420
00:28:50.892 qpair failed and we were unable to recover it.
00:28:50.892 [2024-07-15 16:10:19.666071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.892 [2024-07-15 16:10:19.666108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420
00:28:50.892 qpair failed and we were unable to recover it.
00:28:50.892 [2024-07-15 16:10:19.666347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.892 [2024-07-15 16:10:19.666381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420
00:28:50.892 qpair failed and we were unable to recover it.
00:28:50.892 [2024-07-15 16:10:19.666587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.892 [2024-07-15 16:10:19.666618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420
00:28:50.892 qpair failed and we were unable to recover it.
00:28:50.892 [2024-07-15 16:10:19.666852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.892 [2024-07-15 16:10:19.666884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420
00:28:50.892 qpair failed and we were unable to recover it.
00:28:50.892 [2024-07-15 16:10:19.667192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.892 [2024-07-15 16:10:19.667223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420
00:28:50.892 qpair failed and we were unable to recover it.
00:28:50.892 [2024-07-15 16:10:19.667381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.892 [2024-07-15 16:10:19.667412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420
00:28:50.892 qpair failed and we were unable to recover it.
00:28:50.892 [2024-07-15 16:10:19.667699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.892 [2024-07-15 16:10:19.667714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420
00:28:50.892 qpair failed and we were unable to recover it.
00:28:50.892 [2024-07-15 16:10:19.667819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.892 [2024-07-15 16:10:19.667835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.892 qpair failed and we were unable to recover it. 00:28:50.892 [2024-07-15 16:10:19.667945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.892 [2024-07-15 16:10:19.667958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.892 qpair failed and we were unable to recover it. 00:28:50.892 [2024-07-15 16:10:19.668076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.892 [2024-07-15 16:10:19.668092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.892 qpair failed and we were unable to recover it. 00:28:50.892 [2024-07-15 16:10:19.668200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.892 [2024-07-15 16:10:19.668216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.892 qpair failed and we were unable to recover it. 00:28:50.892 [2024-07-15 16:10:19.668341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.892 [2024-07-15 16:10:19.668357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.892 qpair failed and we were unable to recover it. 00:28:50.892 [2024-07-15 16:10:19.668599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.892 [2024-07-15 16:10:19.668630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.892 qpair failed and we were unable to recover it. 00:28:50.892 [2024-07-15 16:10:19.668776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.892 [2024-07-15 16:10:19.668807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.892 qpair failed and we were unable to recover it. 00:28:50.892 [2024-07-15 16:10:19.668963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.892 [2024-07-15 16:10:19.668993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.892 qpair failed and we were unable to recover it. 00:28:50.892 [2024-07-15 16:10:19.669197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.892 [2024-07-15 16:10:19.669237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.892 qpair failed and we were unable to recover it. 00:28:50.892 [2024-07-15 16:10:19.669369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.892 [2024-07-15 16:10:19.669400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.892 qpair failed and we were unable to recover it. 
00:28:50.892 [2024-07-15 16:10:19.669625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.892 [2024-07-15 16:10:19.669656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.892 qpair failed and we were unable to recover it. 00:28:50.892 [2024-07-15 16:10:19.669864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.892 [2024-07-15 16:10:19.669897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.892 qpair failed and we were unable to recover it. 00:28:50.892 [2024-07-15 16:10:19.670040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.892 [2024-07-15 16:10:19.670069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.892 qpair failed and we were unable to recover it. 00:28:50.892 [2024-07-15 16:10:19.670339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.892 [2024-07-15 16:10:19.670381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.892 qpair failed and we were unable to recover it. 00:28:50.892 [2024-07-15 16:10:19.670597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.892 [2024-07-15 16:10:19.670613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.892 qpair failed and we were unable to recover it. 00:28:50.892 [2024-07-15 16:10:19.670806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.892 [2024-07-15 16:10:19.670824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.892 qpair failed and we were unable to recover it. 00:28:50.892 [2024-07-15 16:10:19.670924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.892 [2024-07-15 16:10:19.670940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.892 qpair failed and we were unable to recover it. 00:28:50.892 [2024-07-15 16:10:19.671039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.892 [2024-07-15 16:10:19.671055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.892 qpair failed and we were unable to recover it. 00:28:50.892 [2024-07-15 16:10:19.671154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.892 [2024-07-15 16:10:19.671168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:50.892 qpair failed and we were unable to recover it. 00:28:50.892 [2024-07-15 16:10:19.671271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.892 [2024-07-15 16:10:19.671282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:50.892 qpair failed and we were unable to recover it. 
00:28:50.893 [2024-07-15 16:10:19.671465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.893 [2024-07-15 16:10:19.671496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:50.893 qpair failed and we were unable to recover it. 00:28:50.893 [2024-07-15 16:10:19.671634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.893 [2024-07-15 16:10:19.671665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:50.893 qpair failed and we were unable to recover it. 00:28:50.893 [2024-07-15 16:10:19.671812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.893 [2024-07-15 16:10:19.671842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:50.893 qpair failed and we were unable to recover it. 00:28:50.893 [2024-07-15 16:10:19.672057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.893 [2024-07-15 16:10:19.672087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:50.893 qpair failed and we were unable to recover it. 00:28:50.893 [2024-07-15 16:10:19.672286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.893 [2024-07-15 16:10:19.672318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:50.893 qpair failed and we were unable to recover it. 00:28:50.893 [2024-07-15 16:10:19.672519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.893 [2024-07-15 16:10:19.672549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:50.893 qpair failed and we were unable to recover it. 00:28:50.893 [2024-07-15 16:10:19.672739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.893 [2024-07-15 16:10:19.672750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:50.893 qpair failed and we were unable to recover it. 00:28:50.893 [2024-07-15 16:10:19.672846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.893 [2024-07-15 16:10:19.672856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:50.893 qpair failed and we were unable to recover it. 00:28:50.893 [2024-07-15 16:10:19.673012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.893 [2024-07-15 16:10:19.673023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:50.893 qpair failed and we were unable to recover it. 00:28:50.893 [2024-07-15 16:10:19.673143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.893 [2024-07-15 16:10:19.673175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:50.893 qpair failed and we were unable to recover it. 
00:28:50.893 [2024-07-15 16:10:19.673339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.893 [2024-07-15 16:10:19.673370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:50.893 qpair failed and we were unable to recover it. 00:28:50.893 [2024-07-15 16:10:19.673576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.893 [2024-07-15 16:10:19.673608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:50.893 qpair failed and we were unable to recover it. 00:28:50.893 [2024-07-15 16:10:19.673830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.893 [2024-07-15 16:10:19.673841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:50.893 qpair failed and we were unable to recover it. 00:28:50.893 [2024-07-15 16:10:19.674002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.893 [2024-07-15 16:10:19.674033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:50.893 qpair failed and we were unable to recover it. 00:28:50.893 [2024-07-15 16:10:19.674257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.893 [2024-07-15 16:10:19.674289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:50.893 qpair failed and we were unable to recover it. 00:28:50.893 [2024-07-15 16:10:19.674428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.893 [2024-07-15 16:10:19.674440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:50.893 qpair failed and we were unable to recover it. 00:28:50.893 [2024-07-15 16:10:19.674534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.893 [2024-07-15 16:10:19.674545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:50.893 qpair failed and we were unable to recover it. 00:28:50.893 [2024-07-15 16:10:19.674672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.893 [2024-07-15 16:10:19.674703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:50.893 qpair failed and we were unable to recover it. 00:28:50.893 [2024-07-15 16:10:19.674860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.893 [2024-07-15 16:10:19.674891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:50.893 qpair failed and we were unable to recover it. 00:28:50.893 [2024-07-15 16:10:19.675121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.893 [2024-07-15 16:10:19.675152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:50.893 qpair failed and we were unable to recover it. 
00:28:50.893 [2024-07-15 16:10:19.675301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.893 [2024-07-15 16:10:19.675333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:50.893 qpair failed and we were unable to recover it. 00:28:50.893 [2024-07-15 16:10:19.675473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.893 [2024-07-15 16:10:19.675503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:50.893 qpair failed and we were unable to recover it. 00:28:50.893 [2024-07-15 16:10:19.675714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.893 [2024-07-15 16:10:19.675746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:50.893 qpair failed and we were unable to recover it. 00:28:50.893 [2024-07-15 16:10:19.675952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.893 [2024-07-15 16:10:19.675983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:50.893 qpair failed and we were unable to recover it. 00:28:50.893 [2024-07-15 16:10:19.676188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.893 [2024-07-15 16:10:19.676219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:50.893 qpair failed and we were unable to recover it. 00:28:50.893 [2024-07-15 16:10:19.676457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.893 [2024-07-15 16:10:19.676487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:50.893 qpair failed and we were unable to recover it. 00:28:50.893 [2024-07-15 16:10:19.676637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.893 [2024-07-15 16:10:19.676668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:50.893 qpair failed and we were unable to recover it. 00:28:50.893 [2024-07-15 16:10:19.676944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.893 [2024-07-15 16:10:19.676955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:50.893 qpair failed and we were unable to recover it. 00:28:50.893 [2024-07-15 16:10:19.677176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.893 [2024-07-15 16:10:19.677188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:50.893 qpair failed and we were unable to recover it. 00:28:50.893 [2024-07-15 16:10:19.677291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.893 [2024-07-15 16:10:19.677302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:50.893 qpair failed and we were unable to recover it. 
00:28:50.893 [2024-07-15 16:10:19.677386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.893 [2024-07-15 16:10:19.677396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:50.893 qpair failed and we were unable to recover it. 00:28:50.893 [2024-07-15 16:10:19.677501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.893 [2024-07-15 16:10:19.677512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:50.893 qpair failed and we were unable to recover it. 00:28:50.893 [2024-07-15 16:10:19.677684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.893 [2024-07-15 16:10:19.677714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:50.893 qpair failed and we were unable to recover it. 00:28:50.893 [2024-07-15 16:10:19.677851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.893 [2024-07-15 16:10:19.677882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:50.893 qpair failed and we were unable to recover it. 00:28:50.893 [2024-07-15 16:10:19.678085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.893 [2024-07-15 16:10:19.678116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:50.893 qpair failed and we were unable to recover it. 00:28:50.893 [2024-07-15 16:10:19.678273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.894 [2024-07-15 16:10:19.678310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:50.894 qpair failed and we were unable to recover it. 00:28:50.894 [2024-07-15 16:10:19.678453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.894 [2024-07-15 16:10:19.678483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:50.894 qpair failed and we were unable to recover it. 00:28:50.894 [2024-07-15 16:10:19.678685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.894 [2024-07-15 16:10:19.678715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:50.894 qpair failed and we were unable to recover it. 00:28:50.894 [2024-07-15 16:10:19.678922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.894 [2024-07-15 16:10:19.678953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:50.894 qpair failed and we were unable to recover it. 00:28:50.894 [2024-07-15 16:10:19.679070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.894 [2024-07-15 16:10:19.679100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:50.894 qpair failed and we were unable to recover it. 
00:28:50.894 [2024-07-15 16:10:19.679359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.894 [2024-07-15 16:10:19.679392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:50.894 qpair failed and we were unable to recover it. 00:28:50.894 [2024-07-15 16:10:19.679596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.894 [2024-07-15 16:10:19.679608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:50.894 qpair failed and we were unable to recover it. 00:28:50.894 [2024-07-15 16:10:19.679741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.894 [2024-07-15 16:10:19.679772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:50.894 qpair failed and we were unable to recover it. 00:28:50.894 [2024-07-15 16:10:19.679979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.894 [2024-07-15 16:10:19.680010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:50.894 qpair failed and we were unable to recover it. 00:28:50.894 [2024-07-15 16:10:19.680189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.894 [2024-07-15 16:10:19.680220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:50.894 qpair failed and we were unable to recover it. 00:28:50.894 [2024-07-15 16:10:19.680521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.894 [2024-07-15 16:10:19.680558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:50.894 qpair failed and we were unable to recover it. 00:28:50.894 [2024-07-15 16:10:19.680739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.894 [2024-07-15 16:10:19.680752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:50.894 qpair failed and we were unable to recover it. 00:28:50.894 [2024-07-15 16:10:19.680943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.894 [2024-07-15 16:10:19.680974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:50.894 qpair failed and we were unable to recover it. 00:28:50.894 [2024-07-15 16:10:19.681137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.894 [2024-07-15 16:10:19.681167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:50.894 qpair failed and we were unable to recover it. 00:28:50.894 [2024-07-15 16:10:19.681401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.894 [2024-07-15 16:10:19.681433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:50.894 qpair failed and we were unable to recover it. 
00:28:50.894 [2024-07-15 16:10:19.681574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.894 [2024-07-15 16:10:19.681604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:50.894 qpair failed and we were unable to recover it. 00:28:50.894 [2024-07-15 16:10:19.681804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.894 [2024-07-15 16:10:19.681834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:50.894 qpair failed and we were unable to recover it. 00:28:50.894 [2024-07-15 16:10:19.682033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.894 [2024-07-15 16:10:19.682064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:50.894 qpair failed and we were unable to recover it. 00:28:50.894 [2024-07-15 16:10:19.682264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.894 [2024-07-15 16:10:19.682296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:50.894 qpair failed and we were unable to recover it. 00:28:50.894 [2024-07-15 16:10:19.682452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.894 [2024-07-15 16:10:19.682484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:50.894 qpair failed and we were unable to recover it. 00:28:50.894 [2024-07-15 16:10:19.682671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.894 [2024-07-15 16:10:19.682701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:50.894 qpair failed and we were unable to recover it. 00:28:50.894 [2024-07-15 16:10:19.683014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.894 [2024-07-15 16:10:19.683044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:50.894 qpair failed and we were unable to recover it. 00:28:50.894 [2024-07-15 16:10:19.683254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.894 [2024-07-15 16:10:19.683285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:50.894 qpair failed and we were unable to recover it. 00:28:50.894 [2024-07-15 16:10:19.683438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.894 [2024-07-15 16:10:19.683469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:50.894 qpair failed and we were unable to recover it. 00:28:50.894 [2024-07-15 16:10:19.683604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.894 [2024-07-15 16:10:19.683616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:50.894 qpair failed and we were unable to recover it. 
00:28:50.894 [2024-07-15 16:10:19.683823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.894 [2024-07-15 16:10:19.683853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:50.894 qpair failed and we were unable to recover it. 00:28:50.894 [2024-07-15 16:10:19.684074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.894 [2024-07-15 16:10:19.684106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:50.894 qpair failed and we were unable to recover it. 00:28:50.894 [2024-07-15 16:10:19.684405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.894 [2024-07-15 16:10:19.684437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:50.894 qpair failed and we were unable to recover it. 00:28:50.894 [2024-07-15 16:10:19.684576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.894 [2024-07-15 16:10:19.684606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:50.894 qpair failed and we were unable to recover it. 00:28:50.894 [2024-07-15 16:10:19.684841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.894 [2024-07-15 16:10:19.684872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:50.894 qpair failed and we were unable to recover it. 00:28:50.894 [2024-07-15 16:10:19.685019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.894 [2024-07-15 16:10:19.685049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:50.894 qpair failed and we were unable to recover it. 00:28:50.894 [2024-07-15 16:10:19.685290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.894 [2024-07-15 16:10:19.685322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:50.894 qpair failed and we were unable to recover it. 00:28:50.894 [2024-07-15 16:10:19.685524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.894 [2024-07-15 16:10:19.685555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:50.894 qpair failed and we were unable to recover it. 00:28:50.894 [2024-07-15 16:10:19.685699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.894 [2024-07-15 16:10:19.685729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:50.894 qpair failed and we were unable to recover it. 00:28:50.894 [2024-07-15 16:10:19.685866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.894 [2024-07-15 16:10:19.685904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:50.894 qpair failed and we were unable to recover it. 
00:28:50.894 [2024-07-15 16:10:19.686127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.894 [2024-07-15 16:10:19.686139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:50.894 qpair failed and we were unable to recover it. 00:28:50.894 [2024-07-15 16:10:19.686256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.894 [2024-07-15 16:10:19.686269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:50.894 qpair failed and we were unable to recover it. 00:28:50.894 [2024-07-15 16:10:19.686369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.894 [2024-07-15 16:10:19.686379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:50.894 qpair failed and we were unable to recover it. 00:28:50.894 [2024-07-15 16:10:19.686532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.894 [2024-07-15 16:10:19.686543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:50.894 qpair failed and we were unable to recover it. 00:28:50.894 [2024-07-15 16:10:19.686690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.894 [2024-07-15 16:10:19.686703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:50.894 qpair failed and we were unable to recover it. 00:28:50.894 [2024-07-15 16:10:19.686870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.894 [2024-07-15 16:10:19.686883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:50.894 qpair failed and we were unable to recover it. 00:28:50.894 [2024-07-15 16:10:19.686985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.895 [2024-07-15 16:10:19.686995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:50.895 qpair failed and we were unable to recover it. 00:28:50.895 [2024-07-15 16:10:19.687111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.895 [2024-07-15 16:10:19.687124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:50.895 qpair failed and we were unable to recover it. 00:28:50.895 [2024-07-15 16:10:19.687216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.895 [2024-07-15 16:10:19.687231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:50.895 qpair failed and we were unable to recover it. 00:28:50.895 [2024-07-15 16:10:19.687391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.895 [2024-07-15 16:10:19.687403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:50.895 qpair failed and we were unable to recover it. 
00:28:50.895 [2024-07-15 16:10:19.687612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.895 [2024-07-15 16:10:19.687623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:50.895 qpair failed and we were unable to recover it. 00:28:50.895 [2024-07-15 16:10:19.687736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.895 [2024-07-15 16:10:19.687748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:50.895 qpair failed and we were unable to recover it. 00:28:50.895 [2024-07-15 16:10:19.687944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.895 [2024-07-15 16:10:19.687974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:50.895 qpair failed and we were unable to recover it. 00:28:50.895 [2024-07-15 16:10:19.688197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.895 [2024-07-15 16:10:19.688238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:50.895 qpair failed and we were unable to recover it. 00:28:50.895 [2024-07-15 16:10:19.688438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.895 [2024-07-15 16:10:19.688469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:50.895 qpair failed and we were unable to recover it. 00:28:50.895 [2024-07-15 16:10:19.688686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.895 [2024-07-15 16:10:19.688716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:50.895 qpair failed and we were unable to recover it. 00:28:50.895 [2024-07-15 16:10:19.688959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.895 [2024-07-15 16:10:19.688991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:50.895 qpair failed and we were unable to recover it. 00:28:50.895 [2024-07-15 16:10:19.689192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.895 [2024-07-15 16:10:19.689222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:50.895 qpair failed and we were unable to recover it. 00:28:50.895 [2024-07-15 16:10:19.689441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.895 [2024-07-15 16:10:19.689471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:50.895 qpair failed and we were unable to recover it. 00:28:50.895 [2024-07-15 16:10:19.689683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.895 [2024-07-15 16:10:19.689714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:50.895 qpair failed and we were unable to recover it. 
00:28:50.895 [2024-07-15 16:10:19.689984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.895 [2024-07-15 16:10:19.690015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:50.895 qpair failed and we were unable to recover it. 00:28:50.895 [2024-07-15 16:10:19.690178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.895 [2024-07-15 16:10:19.690210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:50.895 qpair failed and we were unable to recover it. 00:28:50.895 [2024-07-15 16:10:19.690445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.895 [2024-07-15 16:10:19.690477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:50.895 qpair failed and we were unable to recover it. 00:28:50.895 [2024-07-15 16:10:19.690744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.895 [2024-07-15 16:10:19.690756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:50.895 qpair failed and we were unable to recover it. 00:28:50.895 [2024-07-15 16:10:19.690899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.895 [2024-07-15 16:10:19.690930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:50.895 qpair failed and we were unable to recover it. 00:28:50.895 [2024-07-15 16:10:19.691074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.895 [2024-07-15 16:10:19.691104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:50.895 qpair failed and we were unable to recover it. 00:28:50.895 [2024-07-15 16:10:19.691348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.895 [2024-07-15 16:10:19.691380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:50.895 qpair failed and we were unable to recover it. 00:28:50.895 [2024-07-15 16:10:19.691592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.895 [2024-07-15 16:10:19.691623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:50.895 qpair failed and we were unable to recover it. 00:28:50.895 [2024-07-15 16:10:19.691832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.895 [2024-07-15 16:10:19.691863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:50.895 qpair failed and we were unable to recover it. 00:28:50.895 [2024-07-15 16:10:19.691960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.895 [2024-07-15 16:10:19.691970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:50.895 qpair failed and we were unable to recover it. 
00:28:50.895 [2024-07-15 16:10:19.692134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.895 [2024-07-15 16:10:19.692164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:50.895 qpair failed and we were unable to recover it. 00:28:50.895 [2024-07-15 16:10:19.692365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.895 [2024-07-15 16:10:19.692396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:50.895 qpair failed and we were unable to recover it. 00:28:50.895 [2024-07-15 16:10:19.692551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.895 [2024-07-15 16:10:19.692582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:50.895 qpair failed and we were unable to recover it. 00:28:50.895 [2024-07-15 16:10:19.692854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.895 [2024-07-15 16:10:19.692884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:50.895 qpair failed and we were unable to recover it. 00:28:50.895 [2024-07-15 16:10:19.693083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.895 [2024-07-15 16:10:19.693114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:50.895 qpair failed and we were unable to recover it. 00:28:50.895 [2024-07-15 16:10:19.693359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.895 [2024-07-15 16:10:19.693391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:50.895 qpair failed and we were unable to recover it. 00:28:50.895 [2024-07-15 16:10:19.693654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.895 [2024-07-15 16:10:19.693698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:50.895 qpair failed and we were unable to recover it. 00:28:50.895 [2024-07-15 16:10:19.693840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.895 [2024-07-15 16:10:19.693851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:50.895 qpair failed and we were unable to recover it. 00:28:50.895 [2024-07-15 16:10:19.694066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.895 [2024-07-15 16:10:19.694096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:50.895 qpair failed and we were unable to recover it. 00:28:50.895 [2024-07-15 16:10:19.694307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.895 [2024-07-15 16:10:19.694339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:50.895 qpair failed and we were unable to recover it. 
00:28:50.895 [2024-07-15 16:10:19.694472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.895 [2024-07-15 16:10:19.694501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:50.895 qpair failed and we were unable to recover it. 00:28:50.895 [2024-07-15 16:10:19.694714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.895 [2024-07-15 16:10:19.694725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:50.895 qpair failed and we were unable to recover it. 00:28:50.895 [2024-07-15 16:10:19.694886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.895 [2024-07-15 16:10:19.694917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:50.895 qpair failed and we were unable to recover it. 00:28:50.895 [2024-07-15 16:10:19.695164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.895 [2024-07-15 16:10:19.695194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:50.895 qpair failed and we were unable to recover it. 00:28:50.895 [2024-07-15 16:10:19.695518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.895 [2024-07-15 16:10:19.695589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420 00:28:50.895 qpair failed and we were unable to recover it. 00:28:50.895 [2024-07-15 16:10:19.695817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.895 [2024-07-15 16:10:19.695862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420 00:28:50.895 qpair failed and we were unable to recover it. 00:28:50.895 [2024-07-15 16:10:19.696063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.895 [2024-07-15 16:10:19.696079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420 00:28:50.895 qpair failed and we were unable to recover it. 00:28:50.896 [2024-07-15 16:10:19.696285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.896 [2024-07-15 16:10:19.696316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420 00:28:50.896 qpair failed and we were unable to recover it. 00:28:50.896 [2024-07-15 16:10:19.696604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.896 [2024-07-15 16:10:19.696634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420 00:28:50.896 qpair failed and we were unable to recover it. 00:28:50.896 [2024-07-15 16:10:19.696849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.896 [2024-07-15 16:10:19.696880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420 00:28:50.896 qpair failed and we were unable to recover it. 
00:28:50.896 [2024-07-15 16:10:19.697028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.896 [2024-07-15 16:10:19.697060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420 00:28:50.896 qpair failed and we were unable to recover it. 00:28:50.896 [2024-07-15 16:10:19.697278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.896 [2024-07-15 16:10:19.697311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420 00:28:50.896 qpair failed and we were unable to recover it. 00:28:50.896 [2024-07-15 16:10:19.697519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.896 [2024-07-15 16:10:19.697549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420 00:28:50.896 qpair failed and we were unable to recover it. 00:28:50.896 [2024-07-15 16:10:19.697691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.896 [2024-07-15 16:10:19.697720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420 00:28:50.896 qpair failed and we were unable to recover it. 00:28:50.896 [2024-07-15 16:10:19.697925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.896 [2024-07-15 16:10:19.697954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420 00:28:50.896 qpair failed and we were unable to recover it. 00:28:50.896 [2024-07-15 16:10:19.698160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.896 [2024-07-15 16:10:19.698175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420 00:28:50.896 qpair failed and we were unable to recover it. 00:28:50.896 [2024-07-15 16:10:19.698285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.896 [2024-07-15 16:10:19.698300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420 00:28:50.896 qpair failed and we were unable to recover it. 00:28:50.896 [2024-07-15 16:10:19.698415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.896 [2024-07-15 16:10:19.698430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420 00:28:50.896 qpair failed and we were unable to recover it. 00:28:50.896 [2024-07-15 16:10:19.698590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.896 [2024-07-15 16:10:19.698605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420 00:28:50.896 qpair failed and we were unable to recover it. 00:28:50.896 [2024-07-15 16:10:19.698845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.896 [2024-07-15 16:10:19.698879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:50.896 qpair failed and we were unable to recover it. 
00:28:50.896 [2024-07-15 16:10:19.699079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.896 [2024-07-15 16:10:19.699111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:50.896 qpair failed and we were unable to recover it. 00:28:50.896 [2024-07-15 16:10:19.699320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.896 [2024-07-15 16:10:19.699352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:50.896 qpair failed and we were unable to recover it. 00:28:50.896 [2024-07-15 16:10:19.699652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.896 [2024-07-15 16:10:19.699682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:50.896 qpair failed and we were unable to recover it. 00:28:50.896 [2024-07-15 16:10:19.699991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.896 [2024-07-15 16:10:19.700002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:50.896 qpair failed and we were unable to recover it. 00:28:50.896 [2024-07-15 16:10:19.700188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.896 [2024-07-15 16:10:19.700219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:50.896 qpair failed and we were unable to recover it. 00:28:50.896 [2024-07-15 16:10:19.700439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.896 [2024-07-15 16:10:19.700470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:50.896 qpair failed and we were unable to recover it. 00:28:50.896 [2024-07-15 16:10:19.700615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.896 [2024-07-15 16:10:19.700646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:50.896 qpair failed and we were unable to recover it. 00:28:50.896 [2024-07-15 16:10:19.700924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.896 [2024-07-15 16:10:19.700955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:50.896 qpair failed and we were unable to recover it. 00:28:50.896 [2024-07-15 16:10:19.701103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.896 [2024-07-15 16:10:19.701133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:50.896 qpair failed and we were unable to recover it. 00:28:50.896 [2024-07-15 16:10:19.701340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.896 [2024-07-15 16:10:19.701372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:50.896 qpair failed and we were unable to recover it. 
00:28:50.896 [2024-07-15 16:10:19.701634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.896 [2024-07-15 16:10:19.701666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:50.896 qpair failed and we were unable to recover it. 00:28:50.896 [2024-07-15 16:10:19.701807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.896 [2024-07-15 16:10:19.701836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:50.896 qpair failed and we were unable to recover it. 00:28:50.896 [2024-07-15 16:10:19.702058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.896 [2024-07-15 16:10:19.702088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:50.896 qpair failed and we were unable to recover it. 00:28:50.896 [2024-07-15 16:10:19.702242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.896 [2024-07-15 16:10:19.702273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:50.896 qpair failed and we were unable to recover it. 00:28:50.896 [2024-07-15 16:10:19.702487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.896 [2024-07-15 16:10:19.702518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:50.896 qpair failed and we were unable to recover it. 00:28:50.896 [2024-07-15 16:10:19.702795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.896 [2024-07-15 16:10:19.702825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:50.896 qpair failed and we were unable to recover it. 00:28:50.896 [2024-07-15 16:10:19.703083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.896 [2024-07-15 16:10:19.703094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:50.896 qpair failed and we were unable to recover it. 00:28:50.896 [2024-07-15 16:10:19.703299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.896 [2024-07-15 16:10:19.703311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:50.896 qpair failed and we were unable to recover it. 00:28:50.896 [2024-07-15 16:10:19.703411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.896 [2024-07-15 16:10:19.703422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:50.896 qpair failed and we were unable to recover it. 00:28:50.896 [2024-07-15 16:10:19.703680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.896 [2024-07-15 16:10:19.703710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:50.896 qpair failed and we were unable to recover it. 
00:28:50.896 [2024-07-15 16:10:19.703941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.896 [2024-07-15 16:10:19.703972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:50.896 qpair failed and we were unable to recover it. 00:28:50.896 [2024-07-15 16:10:19.704258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.896 [2024-07-15 16:10:19.704289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:50.896 qpair failed and we were unable to recover it. 00:28:50.896 [2024-07-15 16:10:19.704554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.896 [2024-07-15 16:10:19.704585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:50.896 qpair failed and we were unable to recover it. 00:28:50.896 [2024-07-15 16:10:19.704811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.896 [2024-07-15 16:10:19.704842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:50.896 qpair failed and we were unable to recover it. 00:28:50.896 [2024-07-15 16:10:19.704968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.896 [2024-07-15 16:10:19.704979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:50.896 qpair failed and we were unable to recover it. 00:28:50.896 [2024-07-15 16:10:19.705160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.896 [2024-07-15 16:10:19.705195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:50.896 qpair failed and we were unable to recover it. 00:28:50.896 [2024-07-15 16:10:19.705444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.896 [2024-07-15 16:10:19.705480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420 00:28:50.897 qpair failed and we were unable to recover it. 00:28:50.897 [2024-07-15 16:10:19.705698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.897 [2024-07-15 16:10:19.705729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420 00:28:50.897 qpair failed and we were unable to recover it. 00:28:50.897 [2024-07-15 16:10:19.705970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.897 [2024-07-15 16:10:19.705985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420 00:28:50.897 qpair failed and we were unable to recover it. 00:28:50.897 [2024-07-15 16:10:19.706214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.897 [2024-07-15 16:10:19.706235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420 00:28:50.897 qpair failed and we were unable to recover it. 
00:28:50.897 [2024-07-15 16:10:19.706425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.897 [2024-07-15 16:10:19.706441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420 00:28:50.897 qpair failed and we were unable to recover it. 00:28:50.897 [2024-07-15 16:10:19.706667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.897 [2024-07-15 16:10:19.706682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420 00:28:50.897 qpair failed and we were unable to recover it. 00:28:50.897 [2024-07-15 16:10:19.706980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.897 [2024-07-15 16:10:19.706995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420 00:28:50.897 qpair failed and we were unable to recover it. 00:28:50.897 [2024-07-15 16:10:19.707194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.897 [2024-07-15 16:10:19.707208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420 00:28:50.897 qpair failed and we were unable to recover it. 00:28:50.897 [2024-07-15 16:10:19.707392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.897 [2024-07-15 16:10:19.707424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420 00:28:50.897 qpair failed and we were unable to recover it. 00:28:50.897 [2024-07-15 16:10:19.707662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.897 [2024-07-15 16:10:19.707692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420 00:28:50.897 qpair failed and we were unable to recover it. 00:28:50.897 [2024-07-15 16:10:19.707976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.897 [2024-07-15 16:10:19.708015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420 00:28:50.897 qpair failed and we were unable to recover it. 00:28:50.897 [2024-07-15 16:10:19.708138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.897 [2024-07-15 16:10:19.708152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420 00:28:50.897 qpair failed and we were unable to recover it. 00:28:50.897 [2024-07-15 16:10:19.708329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.897 [2024-07-15 16:10:19.708345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420 00:28:50.897 qpair failed and we were unable to recover it. 00:28:50.897 [2024-07-15 16:10:19.708458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.897 [2024-07-15 16:10:19.708472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420 00:28:50.897 qpair failed and we were unable to recover it. 
00:28:50.897 [2024-07-15 16:10:19.708578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.897 [2024-07-15 16:10:19.708593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420 00:28:50.897 qpair failed and we were unable to recover it. 00:28:50.897 [2024-07-15 16:10:19.708689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.897 [2024-07-15 16:10:19.708702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420 00:28:50.897 qpair failed and we were unable to recover it. 00:28:50.897 [2024-07-15 16:10:19.708803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.897 [2024-07-15 16:10:19.708819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420 00:28:50.897 qpair failed and we were unable to recover it. 00:28:50.897 [2024-07-15 16:10:19.709009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.897 [2024-07-15 16:10:19.709024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420 00:28:50.897 qpair failed and we were unable to recover it. 00:28:50.897 [2024-07-15 16:10:19.709147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.897 [2024-07-15 16:10:19.709161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420 00:28:50.897 qpair failed and we were unable to recover it. 00:28:50.897 [2024-07-15 16:10:19.709349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.897 [2024-07-15 16:10:19.709381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420 00:28:50.897 qpair failed and we were unable to recover it. 00:28:50.897 [2024-07-15 16:10:19.709531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.897 [2024-07-15 16:10:19.709562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420 00:28:50.897 qpair failed and we were unable to recover it. 00:28:50.897 [2024-07-15 16:10:19.709699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.897 [2024-07-15 16:10:19.709728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420 00:28:50.897 qpair failed and we were unable to recover it. 00:28:50.897 [2024-07-15 16:10:19.709999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.897 [2024-07-15 16:10:19.710031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420 00:28:50.897 qpair failed and we were unable to recover it. 00:28:50.897 [2024-07-15 16:10:19.710171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.897 [2024-07-15 16:10:19.710201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420 00:28:50.897 qpair failed and we were unable to recover it. 
00:28:50.897 [2024-07-15 16:10:19.710425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.897 [2024-07-15 16:10:19.710457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420 00:28:50.897 qpair failed and we were unable to recover it. 00:28:50.897 [2024-07-15 16:10:19.710584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.897 [2024-07-15 16:10:19.710599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420 00:28:50.897 qpair failed and we were unable to recover it. 00:28:50.897 [2024-07-15 16:10:19.710801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.897 [2024-07-15 16:10:19.710832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420 00:28:50.897 qpair failed and we were unable to recover it. 00:28:50.897 [2024-07-15 16:10:19.710970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.897 [2024-07-15 16:10:19.711000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420 00:28:50.897 qpair failed and we were unable to recover it. 00:28:50.897 [2024-07-15 16:10:19.711238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.897 [2024-07-15 16:10:19.711269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420 00:28:50.897 qpair failed and we were unable to recover it. 00:28:50.897 [2024-07-15 16:10:19.711429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.897 [2024-07-15 16:10:19.711458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420 00:28:50.897 qpair failed and we were unable to recover it. 00:28:50.897 [2024-07-15 16:10:19.711615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.897 [2024-07-15 16:10:19.711645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420 00:28:50.897 qpair failed and we were unable to recover it. 00:28:50.897 [2024-07-15 16:10:19.711849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.897 [2024-07-15 16:10:19.711865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420 00:28:50.897 qpair failed and we were unable to recover it. 00:28:50.897 [2024-07-15 16:10:19.712053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.897 [2024-07-15 16:10:19.712082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420 00:28:50.897 qpair failed and we were unable to recover it. 00:28:50.897 [2024-07-15 16:10:19.712315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.897 [2024-07-15 16:10:19.712347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420 00:28:50.897 qpair failed and we were unable to recover it. 
00:28:50.897 [2024-07-15 16:10:19.712561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.897 [2024-07-15 16:10:19.712591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420 00:28:50.897 qpair failed and we were unable to recover it. 00:28:50.897 [2024-07-15 16:10:19.712851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.897 [2024-07-15 16:10:19.712867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420 00:28:50.898 qpair failed and we were unable to recover it. 00:28:50.898 [2024-07-15 16:10:19.712979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.898 [2024-07-15 16:10:19.712993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420 00:28:50.898 qpair failed and we were unable to recover it. 00:28:50.898 [2024-07-15 16:10:19.713167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.898 [2024-07-15 16:10:19.713197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420 00:28:50.898 qpair failed and we were unable to recover it. 00:28:50.898 [2024-07-15 16:10:19.713344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.898 [2024-07-15 16:10:19.713375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420 00:28:50.898 qpair failed and we were unable to recover it. 00:28:50.898 [2024-07-15 16:10:19.713587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.898 [2024-07-15 16:10:19.713622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420 00:28:50.898 qpair failed and we were unable to recover it. 00:28:50.898 [2024-07-15 16:10:19.713762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.898 [2024-07-15 16:10:19.713792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420 00:28:50.898 qpair failed and we were unable to recover it. 00:28:50.898 [2024-07-15 16:10:19.714075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.898 [2024-07-15 16:10:19.714106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420 00:28:50.898 qpair failed and we were unable to recover it. 00:28:50.898 [2024-07-15 16:10:19.714262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.898 [2024-07-15 16:10:19.714295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420 00:28:50.898 qpair failed and we were unable to recover it. 00:28:50.898 [2024-07-15 16:10:19.714455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.898 [2024-07-15 16:10:19.714486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420 00:28:50.898 qpair failed and we were unable to recover it. 
00:28:50.898 [2024-07-15 16:10:19.714693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.898 [2024-07-15 16:10:19.714724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420 00:28:50.898 qpair failed and we were unable to recover it. 00:28:50.898 [2024-07-15 16:10:19.714868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.898 [2024-07-15 16:10:19.714897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420 00:28:50.898 qpair failed and we were unable to recover it. 00:28:50.898 [2024-07-15 16:10:19.715097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.898 [2024-07-15 16:10:19.715128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420 00:28:50.898 qpair failed and we were unable to recover it. 00:28:50.898 [2024-07-15 16:10:19.715280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.898 [2024-07-15 16:10:19.715311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420 00:28:50.898 qpair failed and we were unable to recover it. 00:28:50.898 [2024-07-15 16:10:19.715447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.898 [2024-07-15 16:10:19.715479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420 00:28:50.898 qpair failed and we were unable to recover it. 00:28:50.898 [2024-07-15 16:10:19.715690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.898 [2024-07-15 16:10:19.715720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420 00:28:50.898 qpair failed and we were unable to recover it. 00:28:50.898 [2024-07-15 16:10:19.715862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.898 [2024-07-15 16:10:19.715892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420 00:28:50.898 qpair failed and we were unable to recover it. 00:28:50.898 [2024-07-15 16:10:19.716101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.898 [2024-07-15 16:10:19.716132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420 00:28:50.898 qpair failed and we were unable to recover it. 00:28:50.898 [2024-07-15 16:10:19.716283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.898 [2024-07-15 16:10:19.716315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420 00:28:50.898 qpair failed and we were unable to recover it. 00:28:50.898 [2024-07-15 16:10:19.716480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.898 [2024-07-15 16:10:19.716511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420 00:28:50.898 qpair failed and we were unable to recover it. 
00:28:50.898 [2024-07-15 16:10:19.716774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.898 [2024-07-15 16:10:19.716805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420 00:28:50.898 qpair failed and we were unable to recover it. 00:28:50.898 [2024-07-15 16:10:19.717024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.898 [2024-07-15 16:10:19.717054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420 00:28:50.898 qpair failed and we were unable to recover it. 00:28:50.898 [2024-07-15 16:10:19.717354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.898 [2024-07-15 16:10:19.717385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420 00:28:50.898 qpair failed and we were unable to recover it. 00:28:50.898 [2024-07-15 16:10:19.717593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.898 [2024-07-15 16:10:19.717623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420 00:28:50.898 qpair failed and we were unable to recover it. 00:28:50.898 [2024-07-15 16:10:19.717830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.898 [2024-07-15 16:10:19.717845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420 00:28:50.898 qpair failed and we were unable to recover it. 00:28:50.898 [2024-07-15 16:10:19.717960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.898 [2024-07-15 16:10:19.717974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420 00:28:50.898 qpair failed and we were unable to recover it. 00:28:50.898 [2024-07-15 16:10:19.718157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.898 [2024-07-15 16:10:19.718172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420 00:28:50.898 qpair failed and we were unable to recover it. 00:28:50.898 [2024-07-15 16:10:19.718287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.898 [2024-07-15 16:10:19.718303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420 00:28:50.898 qpair failed and we were unable to recover it. 00:28:50.898 [2024-07-15 16:10:19.718405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.898 [2024-07-15 16:10:19.718419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420 00:28:50.898 qpair failed and we were unable to recover it. 00:28:50.898 [2024-07-15 16:10:19.718621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.898 [2024-07-15 16:10:19.718651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420 00:28:50.898 qpair failed and we were unable to recover it. 
00:28:50.898 [2024-07-15 16:10:19.718801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.898 [2024-07-15 16:10:19.718832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420 00:28:50.898 qpair failed and we were unable to recover it. 00:28:50.898 [2024-07-15 16:10:19.718987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.898 [2024-07-15 16:10:19.719017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420 00:28:50.898 qpair failed and we were unable to recover it. 00:28:50.898 [2024-07-15 16:10:19.719207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.898 [2024-07-15 16:10:19.719288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:50.898 qpair failed and we were unable to recover it. 00:28:50.898 [2024-07-15 16:10:19.719511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.898 [2024-07-15 16:10:19.719546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:50.898 qpair failed and we were unable to recover it. 00:28:50.898 [2024-07-15 16:10:19.719696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.898 [2024-07-15 16:10:19.719728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:50.898 qpair failed and we were unable to recover it. 00:28:50.898 [2024-07-15 16:10:19.719937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.898 [2024-07-15 16:10:19.719968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:50.898 qpair failed and we were unable to recover it. 00:28:50.898 [2024-07-15 16:10:19.720151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.898 [2024-07-15 16:10:19.720163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:50.898 qpair failed and we were unable to recover it. 00:28:50.898 [2024-07-15 16:10:19.720338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.898 [2024-07-15 16:10:19.720370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:50.898 qpair failed and we were unable to recover it. 00:28:50.898 [2024-07-15 16:10:19.720680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.898 [2024-07-15 16:10:19.720711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:50.898 qpair failed and we were unable to recover it. 00:28:50.898 [2024-07-15 16:10:19.720997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.898 [2024-07-15 16:10:19.721028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:50.898 qpair failed and we were unable to recover it. 
00:28:50.898 [2024-07-15 16:10:19.721267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.898 [2024-07-15 16:10:19.721299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:50.898 qpair failed and we were unable to recover it. 00:28:50.898 [2024-07-15 16:10:19.721448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.898 [2024-07-15 16:10:19.721480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:50.898 qpair failed and we were unable to recover it. 00:28:50.898 [2024-07-15 16:10:19.721683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.898 [2024-07-15 16:10:19.721713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:50.899 qpair failed and we were unable to recover it. 00:28:50.899 [2024-07-15 16:10:19.721949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.899 [2024-07-15 16:10:19.721980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:50.899 qpair failed and we were unable to recover it. 00:28:50.899 [2024-07-15 16:10:19.722193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.899 [2024-07-15 16:10:19.722247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:50.899 qpair failed and we were unable to recover it. 00:28:50.899 [2024-07-15 16:10:19.722455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.899 [2024-07-15 16:10:19.722495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:50.899 qpair failed and we were unable to recover it. 00:28:50.899 [2024-07-15 16:10:19.722706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.899 [2024-07-15 16:10:19.722717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:50.899 qpair failed and we were unable to recover it. 00:28:50.899 [2024-07-15 16:10:19.722891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.899 [2024-07-15 16:10:19.722921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:50.899 qpair failed and we were unable to recover it. 00:28:50.899 [2024-07-15 16:10:19.723085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.899 [2024-07-15 16:10:19.723117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:50.899 qpair failed and we were unable to recover it. 00:28:50.899 [2024-07-15 16:10:19.723255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.899 [2024-07-15 16:10:19.723288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:50.899 qpair failed and we were unable to recover it. 
00:28:50.899 [2024-07-15 16:10:19.723584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.899 [2024-07-15 16:10:19.723615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:50.899 qpair failed and we were unable to recover it. 00:28:50.899 [2024-07-15 16:10:19.723830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.899 [2024-07-15 16:10:19.723860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:50.899 qpair failed and we were unable to recover it. 00:28:50.899 [2024-07-15 16:10:19.724000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.899 [2024-07-15 16:10:19.724031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:50.899 qpair failed and we were unable to recover it. 00:28:50.899 [2024-07-15 16:10:19.724240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.899 [2024-07-15 16:10:19.724272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:50.899 qpair failed and we were unable to recover it. 00:28:50.899 [2024-07-15 16:10:19.724409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.899 [2024-07-15 16:10:19.724440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:50.899 qpair failed and we were unable to recover it. 00:28:50.899 [2024-07-15 16:10:19.724597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.899 [2024-07-15 16:10:19.724629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:50.899 qpair failed and we were unable to recover it. 00:28:50.899 [2024-07-15 16:10:19.724890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.899 [2024-07-15 16:10:19.724901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:50.899 qpair failed and we were unable to recover it. 00:28:50.899 [2024-07-15 16:10:19.725072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.899 [2024-07-15 16:10:19.725083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:50.899 qpair failed and we were unable to recover it. 00:28:50.899 [2024-07-15 16:10:19.725193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.899 [2024-07-15 16:10:19.725222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:50.899 qpair failed and we were unable to recover it. 00:28:50.899 [2024-07-15 16:10:19.725369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.899 [2024-07-15 16:10:19.725401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:50.899 qpair failed and we were unable to recover it. 
00:28:50.899 [2024-07-15 16:10:19.725610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.899 [2024-07-15 16:10:19.725641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:50.899 qpair failed and we were unable to recover it. 00:28:50.899 [2024-07-15 16:10:19.725832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.899 [2024-07-15 16:10:19.725843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:50.899 qpair failed and we were unable to recover it. 00:28:50.899 [2024-07-15 16:10:19.726029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.899 [2024-07-15 16:10:19.726059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:50.899 qpair failed and we were unable to recover it. 00:28:50.899 [2024-07-15 16:10:19.726347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.899 [2024-07-15 16:10:19.726379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:50.899 qpair failed and we were unable to recover it. 00:28:50.899 [2024-07-15 16:10:19.726517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.899 [2024-07-15 16:10:19.726549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:50.899 qpair failed and we were unable to recover it. 00:28:50.899 [2024-07-15 16:10:19.726815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.899 [2024-07-15 16:10:19.726845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:50.899 qpair failed and we were unable to recover it. 00:28:50.899 [2024-07-15 16:10:19.726992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.899 [2024-07-15 16:10:19.727023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:50.899 qpair failed and we were unable to recover it. 00:28:50.899 [2024-07-15 16:10:19.727278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.899 [2024-07-15 16:10:19.727290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:50.899 qpair failed and we were unable to recover it. 00:28:50.899 [2024-07-15 16:10:19.727517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.899 [2024-07-15 16:10:19.727529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:50.899 qpair failed and we were unable to recover it. 00:28:50.899 [2024-07-15 16:10:19.727609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.899 [2024-07-15 16:10:19.727619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:50.899 qpair failed and we were unable to recover it. 
00:28:50.899 [2024-07-15 16:10:19.727808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.899 [2024-07-15 16:10:19.727838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:50.899 qpair failed and we were unable to recover it. 00:28:50.899 [2024-07-15 16:10:19.728040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.899 [2024-07-15 16:10:19.728071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:50.899 qpair failed and we were unable to recover it. 00:28:50.899 [2024-07-15 16:10:19.728346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.899 [2024-07-15 16:10:19.728415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.899 qpair failed and we were unable to recover it. 00:28:50.899 [2024-07-15 16:10:19.728631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.899 [2024-07-15 16:10:19.728665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.899 qpair failed and we were unable to recover it. 00:28:50.899 [2024-07-15 16:10:19.728952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.899 [2024-07-15 16:10:19.728982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.899 qpair failed and we were unable to recover it. 00:28:50.899 [2024-07-15 16:10:19.729238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.899 [2024-07-15 16:10:19.729271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.899 qpair failed and we were unable to recover it. 00:28:50.899 [2024-07-15 16:10:19.729425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.899 [2024-07-15 16:10:19.729456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.899 qpair failed and we were unable to recover it. 00:28:50.899 [2024-07-15 16:10:19.729656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.899 [2024-07-15 16:10:19.729672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.899 qpair failed and we were unable to recover it. 00:28:50.899 [2024-07-15 16:10:19.729838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.899 [2024-07-15 16:10:19.729869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.899 qpair failed and we were unable to recover it. 00:28:50.899 [2024-07-15 16:10:19.730029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.899 [2024-07-15 16:10:19.730061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.899 qpair failed and we were unable to recover it. 
00:28:50.899 [2024-07-15 16:10:19.730279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.899 [2024-07-15 16:10:19.730312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.899 qpair failed and we were unable to recover it. 00:28:50.899 [2024-07-15 16:10:19.730519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.899 [2024-07-15 16:10:19.730550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.899 qpair failed and we were unable to recover it. 00:28:50.899 [2024-07-15 16:10:19.730758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.899 [2024-07-15 16:10:19.730773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.899 qpair failed and we were unable to recover it. 00:28:50.899 [2024-07-15 16:10:19.730928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.900 [2024-07-15 16:10:19.730958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.900 qpair failed and we were unable to recover it. 00:28:50.900 [2024-07-15 16:10:19.731106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.900 [2024-07-15 16:10:19.731138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.900 qpair failed and we were unable to recover it. 00:28:50.900 [2024-07-15 16:10:19.731342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.900 [2024-07-15 16:10:19.731382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.900 qpair failed and we were unable to recover it. 00:28:50.900 [2024-07-15 16:10:19.731537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.900 [2024-07-15 16:10:19.731568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.900 qpair failed and we were unable to recover it. 00:28:50.900 [2024-07-15 16:10:19.731859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.900 [2024-07-15 16:10:19.731889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.900 qpair failed and we were unable to recover it. 00:28:50.900 [2024-07-15 16:10:19.732026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.900 [2024-07-15 16:10:19.732057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.900 qpair failed and we were unable to recover it. 00:28:50.900 [2024-07-15 16:10:19.732275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.900 [2024-07-15 16:10:19.732307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.900 qpair failed and we were unable to recover it. 
00:28:50.900 [2024-07-15 16:10:19.732510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.900 [2024-07-15 16:10:19.732541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.900 qpair failed and we were unable to recover it. 00:28:50.900 [2024-07-15 16:10:19.732690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.900 [2024-07-15 16:10:19.732705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.900 qpair failed and we were unable to recover it. 00:28:50.900 [2024-07-15 16:10:19.732929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.900 [2024-07-15 16:10:19.732945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.900 qpair failed and we were unable to recover it. 00:28:50.900 [2024-07-15 16:10:19.733117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.900 [2024-07-15 16:10:19.733149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.900 qpair failed and we were unable to recover it. 00:28:50.900 [2024-07-15 16:10:19.733422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.900 [2024-07-15 16:10:19.733454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.900 qpair failed and we were unable to recover it. 00:28:50.900 [2024-07-15 16:10:19.733595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.900 [2024-07-15 16:10:19.733626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.900 qpair failed and we were unable to recover it. 00:28:50.900 [2024-07-15 16:10:19.733828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.900 [2024-07-15 16:10:19.733843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.900 qpair failed and we were unable to recover it. 00:28:50.900 [2024-07-15 16:10:19.733990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.900 [2024-07-15 16:10:19.734006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.900 qpair failed and we were unable to recover it. 00:28:50.900 [2024-07-15 16:10:19.734107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.900 [2024-07-15 16:10:19.734122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.900 qpair failed and we were unable to recover it. 00:28:50.900 [2024-07-15 16:10:19.734231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.900 [2024-07-15 16:10:19.734245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.900 qpair failed and we were unable to recover it. 
00:28:50.900 [2024-07-15 16:10:19.734414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.900 [2024-07-15 16:10:19.734445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.900 qpair failed and we were unable to recover it. 00:28:50.900 [2024-07-15 16:10:19.734639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.900 [2024-07-15 16:10:19.734669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.900 qpair failed and we were unable to recover it. 00:28:50.900 [2024-07-15 16:10:19.734885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.900 [2024-07-15 16:10:19.734917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.900 qpair failed and we were unable to recover it. 00:28:50.900 [2024-07-15 16:10:19.735165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.900 [2024-07-15 16:10:19.735180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.900 qpair failed and we were unable to recover it. 00:28:50.900 [2024-07-15 16:10:19.735298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.900 [2024-07-15 16:10:19.735314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.900 qpair failed and we were unable to recover it. 00:28:50.900 [2024-07-15 16:10:19.735493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.900 [2024-07-15 16:10:19.735509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.900 qpair failed and we were unable to recover it. 00:28:50.900 [2024-07-15 16:10:19.735671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.900 [2024-07-15 16:10:19.735686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.900 qpair failed and we were unable to recover it. 00:28:50.900 [2024-07-15 16:10:19.735850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.900 [2024-07-15 16:10:19.735865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.900 qpair failed and we were unable to recover it. 00:28:50.900 [2024-07-15 16:10:19.735960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.900 [2024-07-15 16:10:19.735973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.900 qpair failed and we were unable to recover it. 00:28:50.900 [2024-07-15 16:10:19.736216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.900 [2024-07-15 16:10:19.736254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.900 qpair failed and we were unable to recover it. 
00:28:50.900 [2024-07-15 16:10:19.736469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.900 [2024-07-15 16:10:19.736500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.900 qpair failed and we were unable to recover it. 00:28:50.900 [2024-07-15 16:10:19.736647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.900 [2024-07-15 16:10:19.736678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:50.900 qpair failed and we were unable to recover it. 00:28:50.900 [2024-07-15 16:10:19.736877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.900 [2024-07-15 16:10:19.736947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.900 qpair failed and we were unable to recover it. 00:28:50.900 [2024-07-15 16:10:19.737267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.900 [2024-07-15 16:10:19.737286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.900 qpair failed and we were unable to recover it. 00:28:50.900 [2024-07-15 16:10:19.737468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.900 [2024-07-15 16:10:19.737484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.900 qpair failed and we were unable to recover it. 00:28:50.900 [2024-07-15 16:10:19.737680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.900 [2024-07-15 16:10:19.737711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.900 qpair failed and we were unable to recover it. 00:28:50.900 [2024-07-15 16:10:19.737980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.900 [2024-07-15 16:10:19.738012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.900 qpair failed and we were unable to recover it. 00:28:50.900 [2024-07-15 16:10:19.738296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.900 [2024-07-15 16:10:19.738328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.900 qpair failed and we were unable to recover it. 00:28:50.900 [2024-07-15 16:10:19.738480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.900 [2024-07-15 16:10:19.738512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.900 qpair failed and we were unable to recover it. 00:28:50.900 [2024-07-15 16:10:19.738672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.900 [2024-07-15 16:10:19.738703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.900 qpair failed and we were unable to recover it. 
00:28:50.900 [2024-07-15 16:10:19.738908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.900 [2024-07-15 16:10:19.738947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.900 qpair failed and we were unable to recover it. 00:28:50.900 [2024-07-15 16:10:19.739075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.900 [2024-07-15 16:10:19.739091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.900 qpair failed and we were unable to recover it. 00:28:50.900 [2024-07-15 16:10:19.739277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.900 [2024-07-15 16:10:19.739310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.900 qpair failed and we were unable to recover it. 00:28:50.900 [2024-07-15 16:10:19.739534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.900 [2024-07-15 16:10:19.739565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.900 qpair failed and we were unable to recover it. 00:28:50.900 [2024-07-15 16:10:19.739769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.901 [2024-07-15 16:10:19.739800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.901 qpair failed and we were unable to recover it. 00:28:50.901 [2024-07-15 16:10:19.740005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.901 [2024-07-15 16:10:19.740020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.901 qpair failed and we were unable to recover it. 00:28:50.901 [2024-07-15 16:10:19.740205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.901 [2024-07-15 16:10:19.740220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.901 qpair failed and we were unable to recover it. 00:28:50.901 [2024-07-15 16:10:19.740481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.901 [2024-07-15 16:10:19.740496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.901 qpair failed and we were unable to recover it. 00:28:50.901 [2024-07-15 16:10:19.740604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.901 [2024-07-15 16:10:19.740635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.901 qpair failed and we were unable to recover it. 00:28:50.901 [2024-07-15 16:10:19.740835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.901 [2024-07-15 16:10:19.740865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:50.901 qpair failed and we were unable to recover it. 
00:28:50.901 [2024-07-15 16:10:19.741111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.901 [2024-07-15 16:10:19.741141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420
00:28:50.901 qpair failed and we were unable to recover it.
00:28:50.901 [2024-07-15 16:10:19.741407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.901 [2024-07-15 16:10:19.741440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420
00:28:50.901 qpair failed and we were unable to recover it.
00:28:50.901 [2024-07-15 16:10:19.741646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.901 [2024-07-15 16:10:19.741678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420
00:28:50.901 qpair failed and we were unable to recover it.
00:28:50.901 [2024-07-15 16:10:19.741890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.901 [2024-07-15 16:10:19.741921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420
00:28:50.901 qpair failed and we were unable to recover it.
00:28:50.901 [2024-07-15 16:10:19.742128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.901 [2024-07-15 16:10:19.742143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420
00:28:50.901 qpair failed and we were unable to recover it.
00:28:50.901 [2024-07-15 16:10:19.742319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.901 [2024-07-15 16:10:19.742334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420
00:28:50.901 qpair failed and we were unable to recover it.
00:28:50.901 [2024-07-15 16:10:19.742453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.901 [2024-07-15 16:10:19.742468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420
00:28:50.901 qpair failed and we were unable to recover it.
00:28:50.901 [2024-07-15 16:10:19.742645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.901 [2024-07-15 16:10:19.742661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420
00:28:50.901 qpair failed and we were unable to recover it.
00:28:50.901 [2024-07-15 16:10:19.742833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.901 [2024-07-15 16:10:19.742847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420
00:28:50.901 qpair failed and we were unable to recover it.
00:28:50.901 [2024-07-15 16:10:19.742964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.901 [2024-07-15 16:10:19.742981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420
00:28:50.901 qpair failed and we were unable to recover it.
00:28:50.901 [2024-07-15 16:10:19.743256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.901 [2024-07-15 16:10:19.743272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420
00:28:50.901 qpair failed and we were unable to recover it.
00:28:50.901 [2024-07-15 16:10:19.743425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.901 [2024-07-15 16:10:19.743456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420
00:28:50.901 qpair failed and we were unable to recover it.
00:28:50.901 [2024-07-15 16:10:19.743684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.901 [2024-07-15 16:10:19.743715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420
00:28:50.901 qpair failed and we were unable to recover it.
00:28:50.901 [2024-07-15 16:10:19.743897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.901 [2024-07-15 16:10:19.743928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420
00:28:50.901 qpair failed and we were unable to recover it.
00:28:50.901 [2024-07-15 16:10:19.744045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.901 [2024-07-15 16:10:19.744075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420
00:28:50.901 qpair failed and we were unable to recover it.
00:28:50.901 [2024-07-15 16:10:19.744205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.901 [2024-07-15 16:10:19.744244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420
00:28:50.901 qpair failed and we were unable to recover it.
00:28:50.901 [2024-07-15 16:10:19.744430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.901 [2024-07-15 16:10:19.744445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420
00:28:50.901 qpair failed and we were unable to recover it.
00:28:50.901 [2024-07-15 16:10:19.744662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.901 [2024-07-15 16:10:19.744692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420
00:28:50.901 qpair failed and we were unable to recover it.
00:28:50.901 [2024-07-15 16:10:19.744856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.901 [2024-07-15 16:10:19.744886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420
00:28:50.901 qpair failed and we were unable to recover it.
00:28:50.901 [2024-07-15 16:10:19.745033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.901 [2024-07-15 16:10:19.745063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420
00:28:50.901 qpair failed and we were unable to recover it.
00:28:50.901 [2024-07-15 16:10:19.745245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.901 [2024-07-15 16:10:19.745261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420
00:28:50.901 qpair failed and we were unable to recover it.
00:28:50.901 [2024-07-15 16:10:19.745448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.901 [2024-07-15 16:10:19.745464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420
00:28:50.901 qpair failed and we were unable to recover it.
00:28:50.901 [2024-07-15 16:10:19.745643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.901 [2024-07-15 16:10:19.745657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420
00:28:50.901 qpair failed and we were unable to recover it.
00:28:50.901 [2024-07-15 16:10:19.745918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.901 [2024-07-15 16:10:19.745933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420
00:28:50.901 qpair failed and we were unable to recover it.
00:28:50.901 [2024-07-15 16:10:19.746135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.901 [2024-07-15 16:10:19.746165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420
00:28:50.901 qpair failed and we were unable to recover it.
00:28:50.901 [2024-07-15 16:10:19.746371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.901 [2024-07-15 16:10:19.746403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420
00:28:50.901 qpair failed and we were unable to recover it.
00:28:50.901 [2024-07-15 16:10:19.746613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.901 [2024-07-15 16:10:19.746643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420
00:28:50.901 qpair failed and we were unable to recover it.
00:28:50.901 [2024-07-15 16:10:19.746798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.901 [2024-07-15 16:10:19.746828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420
00:28:50.901 qpair failed and we were unable to recover it.
00:28:50.901 [2024-07-15 16:10:19.747065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.901 [2024-07-15 16:10:19.747080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420
00:28:50.901 qpair failed and we were unable to recover it.
00:28:50.901 [2024-07-15 16:10:19.747220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.901 [2024-07-15 16:10:19.747240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420
00:28:50.901 qpair failed and we were unable to recover it.
00:28:50.901 [2024-07-15 16:10:19.747367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.901 [2024-07-15 16:10:19.747398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420
00:28:50.901 qpair failed and we were unable to recover it.
00:28:50.902 [2024-07-15 16:10:19.747610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.902 [2024-07-15 16:10:19.747640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420
00:28:50.902 qpair failed and we were unable to recover it.
00:28:50.902 [2024-07-15 16:10:19.747841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.902 [2024-07-15 16:10:19.747871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420
00:28:50.902 qpair failed and we were unable to recover it.
00:28:50.902 [2024-07-15 16:10:19.748129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.902 [2024-07-15 16:10:19.748144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420
00:28:50.902 qpair failed and we were unable to recover it.
00:28:50.902 [2024-07-15 16:10:19.748310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.902 [2024-07-15 16:10:19.748342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420
00:28:50.902 qpair failed and we were unable to recover it.
00:28:50.902 [2024-07-15 16:10:19.748480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.902 [2024-07-15 16:10:19.748510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420
00:28:50.902 qpair failed and we were unable to recover it.
00:28:50.902 [2024-07-15 16:10:19.748713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.902 [2024-07-15 16:10:19.748730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420
00:28:50.902 qpair failed and we were unable to recover it.
00:28:50.902 [2024-07-15 16:10:19.748907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.902 [2024-07-15 16:10:19.748942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420
00:28:50.902 qpair failed and we were unable to recover it.
00:28:50.902 [2024-07-15 16:10:19.749144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.902 [2024-07-15 16:10:19.749175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420
00:28:50.902 qpair failed and we were unable to recover it.
00:28:50.902 [2024-07-15 16:10:19.749432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.902 [2024-07-15 16:10:19.749464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420
00:28:50.902 qpair failed and we were unable to recover it.
00:28:50.902 [2024-07-15 16:10:19.749619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.902 [2024-07-15 16:10:19.749650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420
00:28:50.902 qpair failed and we were unable to recover it.
00:28:50.902 [2024-07-15 16:10:19.749876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.902 [2024-07-15 16:10:19.749906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420
00:28:50.902 qpair failed and we were unable to recover it.
00:28:50.902 [2024-07-15 16:10:19.750050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.902 [2024-07-15 16:10:19.750080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420
00:28:50.902 qpair failed and we were unable to recover it.
00:28:50.902 [2024-07-15 16:10:19.750200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.902 [2024-07-15 16:10:19.750241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420
00:28:50.902 qpair failed and we were unable to recover it.
00:28:50.902 [2024-07-15 16:10:19.750448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.902 [2024-07-15 16:10:19.750479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420
00:28:50.902 qpair failed and we were unable to recover it.
00:28:50.902 [2024-07-15 16:10:19.750619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.902 [2024-07-15 16:10:19.750650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420
00:28:50.902 qpair failed and we were unable to recover it.
00:28:50.902 [2024-07-15 16:10:19.750796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.902 [2024-07-15 16:10:19.750811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420
00:28:50.902 qpair failed and we were unable to recover it.
00:28:50.902 [2024-07-15 16:10:19.750981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.902 [2024-07-15 16:10:19.751012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420
00:28:50.902 qpair failed and we were unable to recover it.
00:28:50.902 [2024-07-15 16:10:19.751222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.902 [2024-07-15 16:10:19.751274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420
00:28:50.902 qpair failed and we were unable to recover it.
00:28:50.902 [2024-07-15 16:10:19.751487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.902 [2024-07-15 16:10:19.751518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420
00:28:50.902 qpair failed and we were unable to recover it.
00:28:50.902 [2024-07-15 16:10:19.751737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.902 [2024-07-15 16:10:19.751769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420
00:28:50.902 qpair failed and we were unable to recover it.
00:28:50.902 [2024-07-15 16:10:19.751905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.902 [2024-07-15 16:10:19.751921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420
00:28:50.902 qpair failed and we were unable to recover it.
00:28:50.902 [2024-07-15 16:10:19.752086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.902 [2024-07-15 16:10:19.752101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420
00:28:50.902 qpair failed and we were unable to recover it.
00:28:50.902 [2024-07-15 16:10:19.752184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.902 [2024-07-15 16:10:19.752198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420
00:28:50.902 qpair failed and we were unable to recover it.
00:28:50.902 [2024-07-15 16:10:19.752316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.902 [2024-07-15 16:10:19.752339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420
00:28:50.902 qpair failed and we were unable to recover it.
00:28:50.902 [2024-07-15 16:10:19.752459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.902 [2024-07-15 16:10:19.752475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420
00:28:50.902 qpair failed and we were unable to recover it.
00:28:50.902 [2024-07-15 16:10:19.752644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.902 [2024-07-15 16:10:19.752682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420
00:28:50.902 qpair failed and we were unable to recover it.
00:28:50.902 [2024-07-15 16:10:19.752888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.902 [2024-07-15 16:10:19.752918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420
00:28:50.902 qpair failed and we were unable to recover it.
00:28:50.902 [2024-07-15 16:10:19.753050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.902 [2024-07-15 16:10:19.753080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420
00:28:50.902 qpair failed and we were unable to recover it.
00:28:50.902 [2024-07-15 16:10:19.753282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.902 [2024-07-15 16:10:19.753314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420
00:28:50.902 qpair failed and we were unable to recover it.
00:28:50.902 [2024-07-15 16:10:19.753520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.902 [2024-07-15 16:10:19.753550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420
00:28:50.902 qpair failed and we were unable to recover it.
00:28:50.902 [2024-07-15 16:10:19.753867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.902 [2024-07-15 16:10:19.753898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420
00:28:50.902 qpair failed and we were unable to recover it.
00:28:50.902 [2024-07-15 16:10:19.754051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.902 [2024-07-15 16:10:19.754066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420
00:28:50.902 qpair failed and we were unable to recover it.
00:28:50.902 [2024-07-15 16:10:19.754164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.902 [2024-07-15 16:10:19.754180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420
00:28:50.902 qpair failed and we were unable to recover it.
00:28:50.903 [2024-07-15 16:10:19.754331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.903 [2024-07-15 16:10:19.754348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420
00:28:50.903 qpair failed and we were unable to recover it.
00:28:50.903 [2024-07-15 16:10:19.754545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.903 [2024-07-15 16:10:19.754577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420
00:28:50.903 qpair failed and we were unable to recover it.
00:28:50.903 [2024-07-15 16:10:19.754842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.903 [2024-07-15 16:10:19.754873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420
00:28:50.903 qpair failed and we were unable to recover it.
00:28:50.903 [2024-07-15 16:10:19.755021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.903 [2024-07-15 16:10:19.755052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420
00:28:50.903 qpair failed and we were unable to recover it.
00:28:50.903 [2024-07-15 16:10:19.755318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.903 [2024-07-15 16:10:19.755333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420
00:28:50.903 qpair failed and we were unable to recover it.
00:28:50.903 [2024-07-15 16:10:19.755444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.903 [2024-07-15 16:10:19.755458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420
00:28:50.903 qpair failed and we were unable to recover it.
00:28:50.903 [2024-07-15 16:10:19.755694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.903 [2024-07-15 16:10:19.755724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420
00:28:50.903 qpair failed and we were unable to recover it.
00:28:50.903 [2024-07-15 16:10:19.755922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.903 [2024-07-15 16:10:19.755952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420
00:28:50.903 qpair failed and we were unable to recover it.
00:28:50.903 [2024-07-15 16:10:19.756152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.903 [2024-07-15 16:10:19.756182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420
00:28:50.903 qpair failed and we were unable to recover it.
00:28:50.903 [2024-07-15 16:10:19.756381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.903 [2024-07-15 16:10:19.756413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420
00:28:50.903 qpair failed and we were unable to recover it.
00:28:50.903 [2024-07-15 16:10:19.756622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.903 [2024-07-15 16:10:19.756653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420
00:28:50.903 qpair failed and we were unable to recover it.
00:28:50.903 [2024-07-15 16:10:19.756800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.903 [2024-07-15 16:10:19.756831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420
00:28:50.903 qpair failed and we were unable to recover it.
00:28:50.903 [2024-07-15 16:10:19.756976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.903 [2024-07-15 16:10:19.757005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420
00:28:50.903 qpair failed and we were unable to recover it.
00:28:50.903 [2024-07-15 16:10:19.757303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.903 [2024-07-15 16:10:19.757319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420
00:28:50.903 qpair failed and we were unable to recover it.
00:28:50.903 [2024-07-15 16:10:19.757548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.903 [2024-07-15 16:10:19.757563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420
00:28:50.903 qpair failed and we were unable to recover it.
00:28:50.903 [2024-07-15 16:10:19.757797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.903 [2024-07-15 16:10:19.757813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420
00:28:50.903 qpair failed and we were unable to recover it.
00:28:50.903 [2024-07-15 16:10:19.758065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.903 [2024-07-15 16:10:19.758081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420
00:28:50.903 qpair failed and we were unable to recover it.
00:28:50.903 [2024-07-15 16:10:19.758257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.903 [2024-07-15 16:10:19.758273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420
00:28:50.903 qpair failed and we were unable to recover it.
00:28:50.903 [2024-07-15 16:10:19.758537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.903 [2024-07-15 16:10:19.758569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420
00:28:50.903 qpair failed and we were unable to recover it.
00:28:50.903 [2024-07-15 16:10:19.758721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.903 [2024-07-15 16:10:19.758752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420
00:28:50.903 qpair failed and we were unable to recover it.
00:28:50.903 [2024-07-15 16:10:19.758977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.903 [2024-07-15 16:10:19.759009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420
00:28:50.903 qpair failed and we were unable to recover it.
00:28:50.903 [2024-07-15 16:10:19.759288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.903 [2024-07-15 16:10:19.759320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420
00:28:50.903 qpair failed and we were unable to recover it.
00:28:50.903 [2024-07-15 16:10:19.759546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.903 [2024-07-15 16:10:19.759577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420
00:28:50.903 qpair failed and we were unable to recover it.
00:28:50.903 [2024-07-15 16:10:19.759713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.903 [2024-07-15 16:10:19.759744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420
00:28:50.903 qpair failed and we were unable to recover it.
00:28:50.903 [2024-07-15 16:10:19.759904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.903 [2024-07-15 16:10:19.759935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420
00:28:50.903 qpair failed and we were unable to recover it.
00:28:50.903 [2024-07-15 16:10:19.760083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.903 [2024-07-15 16:10:19.760113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420
00:28:50.903 qpair failed and we were unable to recover it.
00:28:50.903 [2024-07-15 16:10:19.760357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.903 [2024-07-15 16:10:19.760389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420
00:28:50.903 qpair failed and we were unable to recover it.
00:28:50.903 [2024-07-15 16:10:19.760532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.903 [2024-07-15 16:10:19.760563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420
00:28:50.903 qpair failed and we were unable to recover it.
00:28:50.903 [2024-07-15 16:10:19.760851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.903 [2024-07-15 16:10:19.760866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420
00:28:50.903 qpair failed and we were unable to recover it.
00:28:50.903 [2024-07-15 16:10:19.761043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.903 [2024-07-15 16:10:19.761057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420
00:28:50.903 qpair failed and we were unable to recover it.
00:28:50.903 [2024-07-15 16:10:19.761172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.903 [2024-07-15 16:10:19.761202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420
00:28:50.903 qpair failed and we were unable to recover it.
00:28:50.903 [2024-07-15 16:10:19.761429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.903 [2024-07-15 16:10:19.761459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420
00:28:50.903 qpair failed and we were unable to recover it.
00:28:50.903 [2024-07-15 16:10:19.761670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.903 [2024-07-15 16:10:19.761701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420
00:28:50.903 qpair failed and we were unable to recover it.
00:28:50.903 [2024-07-15 16:10:19.761934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.903 [2024-07-15 16:10:19.761964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420
00:28:50.903 qpair failed and we were unable to recover it.
00:28:50.903 [2024-07-15 16:10:19.762235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.903 [2024-07-15 16:10:19.762266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420
00:28:50.903 qpair failed and we were unable to recover it.
00:28:50.903 [2024-07-15 16:10:19.762481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.903 [2024-07-15 16:10:19.762511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420
00:28:50.903 qpair failed and we were unable to recover it.
00:28:50.903 [2024-07-15 16:10:19.762799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.903 [2024-07-15 16:10:19.762830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420
00:28:50.903 qpair failed and we were unable to recover it.
00:28:50.903 [2024-07-15 16:10:19.763094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.903 [2024-07-15 16:10:19.763109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420
00:28:50.903 qpair failed and we were unable to recover it.
00:28:50.903 [2024-07-15 16:10:19.763207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.903 [2024-07-15 16:10:19.763220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420
00:28:50.903 qpair failed and we were unable to recover it.
00:28:50.903 [2024-07-15 16:10:19.763400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.904 [2024-07-15 16:10:19.763432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420
00:28:50.904 qpair failed and we were unable to recover it.
00:28:50.904 [2024-07-15 16:10:19.763642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.904 [2024-07-15 16:10:19.763678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420
00:28:50.904 qpair failed and we were unable to recover it.
00:28:50.904 [2024-07-15 16:10:19.763902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.904 [2024-07-15 16:10:19.763933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420
00:28:50.904 qpair failed and we were unable to recover it.
00:28:50.904 [2024-07-15 16:10:19.764131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.904 [2024-07-15 16:10:19.764161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420
00:28:50.904 qpair failed and we were unable to recover it.
00:28:50.904 [2024-07-15 16:10:19.764377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.904 [2024-07-15 16:10:19.764410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420
00:28:50.904 qpair failed and we were unable to recover it.
00:28:50.904 [2024-07-15 16:10:19.764542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.904 [2024-07-15 16:10:19.764573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420
00:28:50.904 qpair failed and we were unable to recover it.
00:28:50.904 [2024-07-15 16:10:19.764842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.904 [2024-07-15 16:10:19.764857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420
00:28:50.904 qpair failed and we were unable to recover it.
00:28:50.904 [2024-07-15 16:10:19.764965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.904 [2024-07-15 16:10:19.764980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420
00:28:50.904 qpair failed and we were unable to recover it.
00:28:50.904 [2024-07-15 16:10:19.765160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.904 [2024-07-15 16:10:19.765190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420
00:28:50.904 qpair failed and we were unable to recover it.
00:28:50.904 [2024-07-15 16:10:19.765423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.904 [2024-07-15 16:10:19.765454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420
00:28:50.904 qpair failed and we were unable to recover it.
00:28:50.904 [2024-07-15 16:10:19.765692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.904 [2024-07-15 16:10:19.765724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420
00:28:50.904 qpair failed and we were unable to recover it.
00:28:50.904 [2024-07-15 16:10:19.765934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.904 [2024-07-15 16:10:19.765949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420
00:28:50.904 qpair failed and we were unable to recover it.
00:28:50.904 [2024-07-15 16:10:19.766130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.904 [2024-07-15 16:10:19.766162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420
00:28:50.904 qpair failed and we were unable to recover it.
00:28:50.904 [2024-07-15 16:10:19.766430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.904 [2024-07-15 16:10:19.766462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420
00:28:50.904 qpair failed and we were unable to recover it.
00:28:50.904 [2024-07-15 16:10:19.766615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.904 [2024-07-15 16:10:19.766646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420
00:28:50.904 qpair failed and we were unable to recover it.
00:28:50.904 [2024-07-15 16:10:19.766774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.904 [2024-07-15 16:10:19.766804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420
00:28:50.904 qpair failed and we were unable to recover it.
00:28:50.904 [2024-07-15 16:10:19.767090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.904 [2024-07-15 16:10:19.767121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420
00:28:50.904 qpair failed and we were unable to recover it.
00:28:50.904 [2024-07-15 16:10:19.767316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.904 [2024-07-15 16:10:19.767348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420
00:28:50.904 qpair failed and we were unable to recover it.
00:28:50.904 [2024-07-15 16:10:19.767512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.904 [2024-07-15 16:10:19.767543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420
00:28:50.904 qpair failed and we were unable to recover it.
00:28:50.904 [2024-07-15 16:10:19.767670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.904 [2024-07-15 16:10:19.767700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420
00:28:50.904 qpair failed and we were unable to recover it.
00:28:50.904 [2024-07-15 16:10:19.767963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.904 [2024-07-15 16:10:19.767994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420
00:28:50.904 qpair failed and we were unable to recover it.
00:28:50.904 [2024-07-15 16:10:19.768238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.904 [2024-07-15 16:10:19.768270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420
00:28:50.904 qpair failed and we were unable to recover it.
00:28:50.904 [2024-07-15 16:10:19.768490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.904 [2024-07-15 16:10:19.768522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420
00:28:50.904 qpair failed and we were unable to recover it.
00:28:50.904 [2024-07-15 16:10:19.768746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.904 [2024-07-15 16:10:19.768777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420
00:28:50.904 qpair failed and we were unable to recover it.
00:28:50.904 [2024-07-15 16:10:19.768973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.904 [2024-07-15 16:10:19.769004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420
00:28:50.904 qpair failed and we were unable to recover it.
00:28:50.904 [2024-07-15 16:10:19.769292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.904 [2024-07-15 16:10:19.769324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420
00:28:50.904 qpair failed and we were unable to recover it.
00:28:50.904 [2024-07-15 16:10:19.769592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.904 [2024-07-15 16:10:19.769623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420
00:28:50.904 qpair failed and we were unable to recover it.
00:28:50.904 [2024-07-15 16:10:19.769774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.904 [2024-07-15 16:10:19.769805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420
00:28:50.904 qpair failed and we were unable to recover it.
00:28:50.904 [2024-07-15 16:10:19.770066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.904 [2024-07-15 16:10:19.770101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420
00:28:50.904 qpair failed and we were unable to recover it.
00:28:50.904 [2024-07-15 16:10:19.770340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.904 [2024-07-15 16:10:19.770372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420
00:28:50.904 qpair failed and we were unable to recover it.
00:28:50.904 [2024-07-15 16:10:19.770569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.904 [2024-07-15 16:10:19.770599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420
00:28:50.904 qpair failed and we were unable to recover it.
00:28:50.904 [2024-07-15 16:10:19.770863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.904 [2024-07-15 16:10:19.770878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420
00:28:50.904 qpair failed and we were unable to recover it.
00:28:50.904 [2024-07-15 16:10:19.770974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.904 [2024-07-15 16:10:19.770988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420
00:28:50.904 qpair failed and we were unable to recover it.
00:28:50.904 [2024-07-15 16:10:19.771221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.904 [2024-07-15 16:10:19.771281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420
00:28:50.904 qpair failed and we were unable to recover it.
00:28:50.904 [2024-07-15 16:10:19.771601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.904 [2024-07-15 16:10:19.771632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420
00:28:50.904 qpair failed and we were unable to recover it.
00:28:50.904 [2024-07-15 16:10:19.771922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.904 [2024-07-15 16:10:19.771952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420
00:28:50.904 qpair failed and we were unable to recover it.
00:28:50.904 [2024-07-15 16:10:19.772160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.904 [2024-07-15 16:10:19.772176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420
00:28:50.904 qpair failed and we were unable to recover it.
00:28:50.904 [2024-07-15 16:10:19.772406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.904 [2024-07-15 16:10:19.772421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420
00:28:50.904 qpair failed and we were unable to recover it.
00:28:50.904 [2024-07-15 16:10:19.772605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.904 [2024-07-15 16:10:19.772621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420
00:28:50.904 qpair failed and we were unable to recover it.
00:28:50.904 [2024-07-15 16:10:19.772778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.904 [2024-07-15 16:10:19.772809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420
00:28:50.904 qpair failed and we were unable to recover it.
00:28:50.904 [2024-07-15 16:10:19.773098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.904 [2024-07-15 16:10:19.773129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420
00:28:50.904 qpair failed and we were unable to recover it.
00:28:50.905 [2024-07-15 16:10:19.773421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.905 [2024-07-15 16:10:19.773453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420
00:28:50.905 qpair failed and we were unable to recover it.
00:28:50.905 [2024-07-15 16:10:19.773677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.905 [2024-07-15 16:10:19.773709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420
00:28:50.905 qpair failed and we were unable to recover it.
00:28:50.905 [2024-07-15 16:10:19.773972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.905 [2024-07-15 16:10:19.774003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420
00:28:50.905 qpair failed and we were unable to recover it.
00:28:50.905 [2024-07-15 16:10:19.774211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.905 [2024-07-15 16:10:19.774230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420
00:28:50.905 qpair failed and we were unable to recover it.
00:28:50.905 [2024-07-15 16:10:19.774394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.905 [2024-07-15 16:10:19.774409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420
00:28:50.905 qpair failed and we were unable to recover it.
00:28:50.905 [2024-07-15 16:10:19.774509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.905 [2024-07-15 16:10:19.774524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420
00:28:50.905 qpair failed and we were unable to recover it.
00:28:50.905 [2024-07-15 16:10:19.774754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.905 [2024-07-15 16:10:19.774769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420
00:28:50.905 qpair failed and we were unable to recover it.
00:28:50.905 [2024-07-15 16:10:19.775041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.905 [2024-07-15 16:10:19.775056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420
00:28:50.905 qpair failed and we were unable to recover it.
00:28:50.905 [2024-07-15 16:10:19.775288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.905 [2024-07-15 16:10:19.775303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420
00:28:50.905 qpair failed and we were unable to recover it.
00:28:50.905 [2024-07-15 16:10:19.775504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.905 [2024-07-15 16:10:19.775520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420
00:28:50.905 qpair failed and we were unable to recover it.
00:28:50.905 [2024-07-15 16:10:19.775683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.905 [2024-07-15 16:10:19.775698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420
00:28:50.905 qpair failed and we were unable to recover it.
00:28:50.905 [2024-07-15 16:10:19.775793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.905 [2024-07-15 16:10:19.775807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420
00:28:50.905 qpair failed and we were unable to recover it.
00:28:50.905 [2024-07-15 16:10:19.775978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.905 [2024-07-15 16:10:19.775993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420
00:28:50.905 qpair failed and we were unable to recover it.
00:28:50.905 [2024-07-15 16:10:19.776089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.905 [2024-07-15 16:10:19.776119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420
00:28:50.905 qpair failed and we were unable to recover it.
00:28:50.905 [2024-07-15 16:10:19.776346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.905 [2024-07-15 16:10:19.776383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420
00:28:50.905 qpair failed and we were unable to recover it.
00:28:50.905 [2024-07-15 16:10:19.776674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.905 [2024-07-15 16:10:19.776704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420
00:28:50.905 qpair failed and we were unable to recover it.
00:28:50.905 [2024-07-15 16:10:19.776852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.905 [2024-07-15 16:10:19.776882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420
00:28:50.905 qpair failed and we were unable to recover it.
00:28:50.905 [2024-07-15 16:10:19.777153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.905 [2024-07-15 16:10:19.777168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420
00:28:50.905 qpair failed and we were unable to recover it.
00:28:50.905 [2024-07-15 16:10:19.777258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.905 [2024-07-15 16:10:19.777273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420
00:28:50.905 qpair failed and we were unable to recover it.
00:28:50.905 [2024-07-15 16:10:19.777447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.905 [2024-07-15 16:10:19.777462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420
00:28:50.905 qpair failed and we were unable to recover it.
00:28:50.905 [2024-07-15 16:10:19.777647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.905 [2024-07-15 16:10:19.777663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420
00:28:50.905 qpair failed and we were unable to recover it.
00:28:50.905 [2024-07-15 16:10:19.777843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.905 [2024-07-15 16:10:19.777858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420
00:28:50.905 qpair failed and we were unable to recover it.
00:28:50.905 [2024-07-15 16:10:19.778090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.905 [2024-07-15 16:10:19.778120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420
00:28:50.905 qpair failed and we were unable to recover it.
00:28:50.905 [2024-07-15 16:10:19.778330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.905 [2024-07-15 16:10:19.778362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420
00:28:50.905 qpair failed and we were unable to recover it.
00:28:50.905 [2024-07-15 16:10:19.778565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.905 [2024-07-15 16:10:19.778596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420
00:28:50.905 qpair failed and we were unable to recover it.
00:28:50.905 [2024-07-15 16:10:19.778792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.905 [2024-07-15 16:10:19.778823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420
00:28:50.905 qpair failed and we were unable to recover it.
00:28:50.905 [2024-07-15 16:10:19.779023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.905 [2024-07-15 16:10:19.779058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420
00:28:50.905 qpair failed and we were unable to recover it.
00:28:50.905 [2024-07-15 16:10:19.779279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.905 [2024-07-15 16:10:19.779295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420
00:28:50.905 qpair failed and we were unable to recover it.
00:28:50.905 [2024-07-15 16:10:19.779493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.905 [2024-07-15 16:10:19.779524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420
00:28:50.905 qpair failed and we were unable to recover it.
00:28:50.905 [2024-07-15 16:10:19.779747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.905 [2024-07-15 16:10:19.779778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420
00:28:50.905 qpair failed and we were unable to recover it.
00:28:50.905 [2024-07-15 16:10:19.779991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.905 [2024-07-15 16:10:19.780021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420
00:28:50.905 qpair failed and we were unable to recover it.
00:28:50.905 [2024-07-15 16:10:19.780186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.905 [2024-07-15 16:10:19.780217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420
00:28:50.905 qpair failed and we were unable to recover it.
00:28:50.905 [2024-07-15 16:10:19.780372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.905 [2024-07-15 16:10:19.780404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420
00:28:50.905 qpair failed and we were unable to recover it.
00:28:50.905 [2024-07-15 16:10:19.780567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.905 [2024-07-15 16:10:19.780598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420
00:28:50.906 qpair failed and we were unable to recover it.
00:28:50.906 [2024-07-15 16:10:19.780748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.906 [2024-07-15 16:10:19.780780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420
00:28:50.906 qpair failed and we were unable to recover it.
00:28:50.906 [2024-07-15 16:10:19.781016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.906 [2024-07-15 16:10:19.781047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420
00:28:50.906 qpair failed and we were unable to recover it.
00:28:50.906 [2024-07-15 16:10:19.781306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.906 [2024-07-15 16:10:19.781323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420
00:28:50.906 qpair failed and we were unable to recover it.
00:28:50.906 [2024-07-15 16:10:19.781500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.906 [2024-07-15 16:10:19.781531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420
00:28:50.906 qpair failed and we were unable to recover it.
00:28:50.906 [2024-07-15 16:10:19.781661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.906 [2024-07-15 16:10:19.781692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420
00:28:50.906 qpair failed and we were unable to recover it.
00:28:50.906 [2024-07-15 16:10:19.781847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.906 [2024-07-15 16:10:19.781878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420
00:28:50.906 qpair failed and we were unable to recover it.
00:28:50.906 [2024-07-15 16:10:19.782070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.906 [2024-07-15 16:10:19.782086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420
00:28:50.906 qpair failed and we were unable to recover it.
00:28:50.906 [2024-07-15 16:10:19.782256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.906 [2024-07-15 16:10:19.782288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420
00:28:50.906 qpair failed and we were unable to recover it.
00:28:50.906 [2024-07-15 16:10:19.782522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.906 [2024-07-15 16:10:19.782554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420
00:28:50.906 qpair failed and we were unable to recover it.
00:28:50.906 [2024-07-15 16:10:19.782759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.906 [2024-07-15 16:10:19.782790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420
00:28:50.906 qpair failed and we were unable to recover it.
00:28:50.906 [2024-07-15 16:10:19.782954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.906 [2024-07-15 16:10:19.782970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420
00:28:50.906 qpair failed and we were unable to recover it.
00:28:50.906 [2024-07-15 16:10:19.783203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.906 [2024-07-15 16:10:19.783259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420
00:28:50.906 qpair failed and we were unable to recover it.
00:28:50.906 [2024-07-15 16:10:19.783476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.906 [2024-07-15 16:10:19.783507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420
00:28:50.906 qpair failed and we were unable to recover it.
00:28:50.906 [2024-07-15 16:10:19.783708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.906 [2024-07-15 16:10:19.783738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420
00:28:50.906 qpair failed and we were unable to recover it.
00:28:50.906 [2024-07-15 16:10:19.783918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.906 [2024-07-15 16:10:19.783949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420
00:28:50.906 qpair failed and we were unable to recover it.
00:28:50.906 [2024-07-15 16:10:19.784160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.906 [2024-07-15 16:10:19.784191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420
00:28:50.906 qpair failed and we were unable to recover it.
00:28:50.906 [2024-07-15 16:10:19.784340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.906 [2024-07-15 16:10:19.784357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420
00:28:50.906 qpair failed and we were unable to recover it.
00:28:50.906 [2024-07-15 16:10:19.784529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.906 [2024-07-15 16:10:19.784544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420
00:28:50.906 qpair failed and we were unable to recover it.
00:28:50.906 [2024-07-15 16:10:19.784724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.906 [2024-07-15 16:10:19.784739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420
00:28:50.906 qpair failed and we were unable to recover it.
00:28:50.906 [2024-07-15 16:10:19.784915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.906 [2024-07-15 16:10:19.784931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420
00:28:50.906 qpair failed and we were unable to recover it.
00:28:50.906 [2024-07-15 16:10:19.785100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.906 [2024-07-15 16:10:19.785131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420
00:28:50.906 qpair failed and we were unable to recover it.
00:28:50.906 [2024-07-15 16:10:19.785280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.906 [2024-07-15 16:10:19.785313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420
00:28:50.906 qpair failed and we were unable to recover it.
00:28:50.906 [2024-07-15 16:10:19.785567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.906 [2024-07-15 16:10:19.785598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420
00:28:50.906 qpair failed and we were unable to recover it.
00:28:50.906 [2024-07-15 16:10:19.785812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.906 [2024-07-15 16:10:19.785827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420
00:28:50.906 qpair failed and we were unable to recover it.
00:28:50.906 [2024-07-15 16:10:19.785942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.906 [2024-07-15 16:10:19.785957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420
00:28:50.906 qpair failed and we were unable to recover it.
00:28:50.906 [2024-07-15 16:10:19.786168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.906 [2024-07-15 16:10:19.786184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420
00:28:50.906 qpair failed and we were unable to recover it.
00:28:50.906 [2024-07-15 16:10:19.786435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.906 [2024-07-15 16:10:19.786451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420
00:28:50.906 qpair failed and we were unable to recover it.
00:28:50.906 [2024-07-15 16:10:19.786622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.906 [2024-07-15 16:10:19.786637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420
00:28:50.906 qpair failed and we were unable to recover it.
00:28:50.906 [2024-07-15 16:10:19.786866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.906 [2024-07-15 16:10:19.786881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420
00:28:50.906 qpair failed and we were unable to recover it.
00:28:50.906 [2024-07-15 16:10:19.787009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.906 [2024-07-15 16:10:19.787024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420
00:28:50.906 qpair failed and we were unable to recover it.
00:28:50.906 [2024-07-15 16:10:19.787136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.906 [2024-07-15 16:10:19.787150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420
00:28:50.906 qpair failed and we were unable to recover it.
00:28:50.906 [2024-07-15 16:10:19.787350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.906 [2024-07-15 16:10:19.787381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420
00:28:50.906 qpair failed and we were unable to recover it.
00:28:50.907 [2024-07-15 16:10:19.787594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.907 [2024-07-15 16:10:19.787624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420
00:28:50.907 qpair failed and we were unable to recover it.
00:28:50.907 [2024-07-15 16:10:19.787837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.907 [2024-07-15 16:10:19.787873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420
00:28:50.907 qpair failed and we were unable to recover it.
00:28:50.907 [2024-07-15 16:10:19.788129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.907 [2024-07-15 16:10:19.788144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420
00:28:50.907 qpair failed and we were unable to recover it.
00:28:50.907 [2024-07-15 16:10:19.788277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.907 [2024-07-15 16:10:19.788293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420
00:28:50.907 qpair failed and we were unable to recover it.
00:28:50.907 [2024-07-15 16:10:19.788367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.907 [2024-07-15 16:10:19.788380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420
00:28:50.907 qpair failed and we were unable to recover it.
00:28:50.907 [2024-07-15 16:10:19.788552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.907 [2024-07-15 16:10:19.788568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420
00:28:50.907 qpair failed and we were unable to recover it.
00:28:51.183 [2024-07-15 16:10:19.788669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.183 [2024-07-15 16:10:19.788682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420
00:28:51.183 qpair failed and we were unable to recover it.
00:28:51.183 [2024-07-15 16:10:19.788858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.183 [2024-07-15 16:10:19.788875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420
00:28:51.183 qpair failed and we were unable to recover it.
00:28:51.183 [2024-07-15 16:10:19.789058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.183 [2024-07-15 16:10:19.789073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420
00:28:51.183 qpair failed and we were unable to recover it.
00:28:51.183 [2024-07-15 16:10:19.789228] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.183 [2024-07-15 16:10:19.789244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420
00:28:51.183 qpair failed and we were unable to recover it.
00:28:51.183 [2024-07-15 16:10:19.789428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.183 [2024-07-15 16:10:19.789443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420
00:28:51.183 qpair failed and we were unable to recover it.
00:28:51.183 [2024-07-15 16:10:19.789611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.183 [2024-07-15 16:10:19.789627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420
00:28:51.183 qpair failed and we were unable to recover it.
00:28:51.183 [2024-07-15 16:10:19.789805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.183 [2024-07-15 16:10:19.789821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420
00:28:51.183 qpair failed and we were unable to recover it.
00:28:51.183 [2024-07-15 16:10:19.789962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.183 [2024-07-15 16:10:19.789976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420
00:28:51.183 qpair failed and we were unable to recover it.
00:28:51.183 [2024-07-15 16:10:19.790252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.183 [2024-07-15 16:10:19.790268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420
00:28:51.183 qpair failed and we were unable to recover it.
00:28:51.183 [2024-07-15 16:10:19.790432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.183 [2024-07-15 16:10:19.790448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420
00:28:51.183 qpair failed and we were unable to recover it.
00:28:51.183 [2024-07-15 16:10:19.790556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.183 [2024-07-15 16:10:19.790574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420
00:28:51.183 qpair failed and we were unable to recover it.
00:28:51.183 [2024-07-15 16:10:19.790827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.183 [2024-07-15 16:10:19.790842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420
00:28:51.183 qpair failed and we were unable to recover it.
00:28:51.183 [2024-07-15 16:10:19.791040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.183 [2024-07-15 16:10:19.791055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420
00:28:51.183 qpair failed and we were unable to recover it.
00:28:51.183 [2024-07-15 16:10:19.791229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.183 [2024-07-15 16:10:19.791245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420
00:28:51.183 qpair failed and we were unable to recover it.
00:28:51.184 [2024-07-15 16:10:19.791430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.184 [2024-07-15 16:10:19.791445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420
00:28:51.184 qpair failed and we were unable to recover it.
00:28:51.184 [2024-07-15 16:10:19.791560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.184 [2024-07-15 16:10:19.791575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420
00:28:51.184 qpair failed and we were unable to recover it.
00:28:51.184 [2024-07-15 16:10:19.791687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.184 [2024-07-15 16:10:19.791703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420
00:28:51.184 qpair failed and we were unable to recover it.
00:28:51.184 [2024-07-15 16:10:19.791879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.184 [2024-07-15 16:10:19.791895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420
00:28:51.184 qpair failed and we were unable to recover it.
00:28:51.184 [2024-07-15 16:10:19.792010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.184 [2024-07-15 16:10:19.792026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420
00:28:51.184 qpair failed and we were unable to recover it.
00:28:51.184 [2024-07-15 16:10:19.792222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.184 [2024-07-15 16:10:19.792246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420
00:28:51.184 qpair failed and we were unable to recover it.
00:28:51.184 [2024-07-15 16:10:19.792496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.184 [2024-07-15 16:10:19.792512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420
00:28:51.184 qpair failed and we were unable to recover it.
00:28:51.184 [2024-07-15 16:10:19.792625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.184 [2024-07-15 16:10:19.792640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420
00:28:51.184 qpair failed and we were unable to recover it.
00:28:51.184 [2024-07-15 16:10:19.792845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.184 [2024-07-15 16:10:19.792860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420
00:28:51.184 qpair failed and we were unable to recover it.
00:28:51.184 [2024-07-15 16:10:19.792988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.184 [2024-07-15 16:10:19.793003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420
00:28:51.184 qpair failed and we were unable to recover it.
00:28:51.184 [2024-07-15 16:10:19.793204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.184 [2024-07-15 16:10:19.793220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420
00:28:51.184 qpair failed and we were unable to recover it.
00:28:51.184 [2024-07-15 16:10:19.793409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.184 [2024-07-15 16:10:19.793425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420
00:28:51.184 qpair failed and we were unable to recover it.
00:28:51.184 [2024-07-15 16:10:19.793616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.184 [2024-07-15 16:10:19.793631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420
00:28:51.184 qpair failed and we were unable to recover it.
00:28:51.184 [2024-07-15 16:10:19.793740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.184 [2024-07-15 16:10:19.793755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420
00:28:51.184 qpair failed and we were unable to recover it.
00:28:51.184 [2024-07-15 16:10:19.793946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.184 [2024-07-15 16:10:19.793978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420
00:28:51.184 qpair failed and we were unable to recover it.
00:28:51.184 [2024-07-15 16:10:19.794133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.184 [2024-07-15 16:10:19.794163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420
00:28:51.184 qpair failed and we were unable to recover it.
00:28:51.184 [2024-07-15 16:10:19.794398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.184 [2024-07-15 16:10:19.794429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420
00:28:51.184 qpair failed and we were unable to recover it.
00:28:51.184 [2024-07-15 16:10:19.794561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.184 [2024-07-15 16:10:19.794591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420
00:28:51.184 qpair failed and we were unable to recover it.
00:28:51.184 [2024-07-15 16:10:19.794804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.184 [2024-07-15 16:10:19.794835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420
00:28:51.184 qpair failed and we were unable to recover it.
00:28:51.184 [2024-07-15 16:10:19.795068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.184 [2024-07-15 16:10:19.795098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420
00:28:51.184 qpair failed and we were unable to recover it.
00:28:51.184 [2024-07-15 16:10:19.795265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.184 [2024-07-15 16:10:19.795297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420
00:28:51.184 qpair failed and we were unable to recover it.
00:28:51.184 [2024-07-15 16:10:19.795436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.184 [2024-07-15 16:10:19.795468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420
00:28:51.184 qpair failed and we were unable to recover it.
00:28:51.184 [2024-07-15 16:10:19.795643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.184 [2024-07-15 16:10:19.795673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420
00:28:51.184 qpair failed and we were unable to recover it.
00:28:51.184 [2024-07-15 16:10:19.795875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.184 [2024-07-15 16:10:19.795894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420
00:28:51.184 qpair failed and we were unable to recover it.
00:28:51.184 [2024-07-15 16:10:19.796069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.184 [2024-07-15 16:10:19.796101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420
00:28:51.184 qpair failed and we were unable to recover it.
00:28:51.184 [2024-07-15 16:10:19.796244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.184 [2024-07-15 16:10:19.796276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420
00:28:51.184 qpair failed and we were unable to recover it.
00:28:51.184 [2024-07-15 16:10:19.796540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.184 [2024-07-15 16:10:19.796570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420
00:28:51.184 qpair failed and we were unable to recover it.
00:28:51.184 [2024-07-15 16:10:19.796858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.184 [2024-07-15 16:10:19.796889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420
00:28:51.184 qpair failed and we were unable to recover it.
00:28:51.184 [2024-07-15 16:10:19.797091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.184 [2024-07-15 16:10:19.797122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420
00:28:51.184 qpair failed and we were unable to recover it.
00:28:51.184 [2024-07-15 16:10:19.797279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.184 [2024-07-15 16:10:19.797311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420
00:28:51.184 qpair failed and we were unable to recover it.
00:28:51.184 [2024-07-15 16:10:19.797571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.184 [2024-07-15 16:10:19.797602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420
00:28:51.184 qpair failed and we were unable to recover it.
00:28:51.184 [2024-07-15 16:10:19.797760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.184 [2024-07-15 16:10:19.797791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420
00:28:51.184 qpair failed and we were unable to recover it.
00:28:51.184 [2024-07-15 16:10:19.797952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.184 [2024-07-15 16:10:19.797981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420
00:28:51.184 qpair failed and we were unable to recover it.
00:28:51.184 [2024-07-15 16:10:19.798110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.184 [2024-07-15 16:10:19.798140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420
00:28:51.184 qpair failed and we were unable to recover it.
00:28:51.184 [2024-07-15 16:10:19.798340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.184 [2024-07-15 16:10:19.798356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420
00:28:51.184 qpair failed and we were unable to recover it.
00:28:51.184 [2024-07-15 16:10:19.798615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.184 [2024-07-15 16:10:19.798645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420
00:28:51.184 qpair failed and we were unable to recover it.
00:28:51.184 [2024-07-15 16:10:19.798931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.184 [2024-07-15 16:10:19.798962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420
00:28:51.184 qpair failed and we were unable to recover it.
00:28:51.184 [2024-07-15 16:10:19.799146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.184 [2024-07-15 16:10:19.799162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420
00:28:51.184 qpair failed and we were unable to recover it.
00:28:51.184 [2024-07-15 16:10:19.799294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.184 [2024-07-15 16:10:19.799325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420
00:28:51.184 qpair failed and we were unable to recover it.
00:28:51.184 [2024-07-15 16:10:19.799560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.184 [2024-07-15 16:10:19.799591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420
00:28:51.184 qpair failed and we were unable to recover it.
00:28:51.184 [2024-07-15 16:10:19.799734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.185 [2024-07-15 16:10:19.799764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420
00:28:51.185 qpair failed and we were unable to recover it.
00:28:51.185 [2024-07-15 16:10:19.799919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.185 [2024-07-15 16:10:19.799950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420
00:28:51.185 qpair failed and we were unable to recover it.
00:28:51.185 [2024-07-15 16:10:19.800152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.185 [2024-07-15 16:10:19.800183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420
00:28:51.185 qpair failed and we were unable to recover it.
00:28:51.185 [2024-07-15 16:10:19.800360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.185 [2024-07-15 16:10:19.800392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420
00:28:51.185 qpair failed and we were unable to recover it.
00:28:51.185 [2024-07-15 16:10:19.800590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.185 [2024-07-15 16:10:19.800622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420
00:28:51.185 qpair failed and we were unable to recover it.
00:28:51.185 [2024-07-15 16:10:19.800839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.185 [2024-07-15 16:10:19.800855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420
00:28:51.185 qpair failed and we were unable to recover it.
00:28:51.185 [2024-07-15 16:10:19.801033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.185 [2024-07-15 16:10:19.801063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420
00:28:51.185 qpair failed and we were unable to recover it.
00:28:51.185 [2024-07-15 16:10:19.801369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.185 [2024-07-15 16:10:19.801402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420
00:28:51.185 qpair failed and we were unable to recover it.
00:28:51.185 [2024-07-15 16:10:19.801606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.185 [2024-07-15 16:10:19.801637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420
00:28:51.185 qpair failed and we were unable to recover it.
00:28:51.185 [2024-07-15 16:10:19.801920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.185 [2024-07-15 16:10:19.801950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420
00:28:51.185 qpair failed and we were unable to recover it.
00:28:51.185 [2024-07-15 16:10:19.802180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.185 [2024-07-15 16:10:19.802210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420
00:28:51.185 qpair failed and we were unable to recover it.
00:28:51.185 [2024-07-15 16:10:19.802469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.185 [2024-07-15 16:10:19.802484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420
00:28:51.185 qpair failed and we were unable to recover it.
00:28:51.185 [2024-07-15 16:10:19.802676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.185 [2024-07-15 16:10:19.802706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420
00:28:51.185 qpair failed and we were unable to recover it.
00:28:51.185 [2024-07-15 16:10:19.802982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.185 [2024-07-15 16:10:19.803013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420
00:28:51.185 qpair failed and we were unable to recover it.
00:28:51.185 [2024-07-15 16:10:19.803165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.185 [2024-07-15 16:10:19.803196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420
00:28:51.185 qpair failed and we were unable to recover it.
00:28:51.185 [2024-07-15 16:10:19.803485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.185 [2024-07-15 16:10:19.803555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420
00:28:51.185 qpair failed and we were unable to recover it.
00:28:51.185 [2024-07-15 16:10:19.803820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.185 [2024-07-15 16:10:19.803888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420
00:28:51.185 qpair failed and we were unable to recover it.
00:28:51.185 [2024-07-15 16:10:19.804053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.185 [2024-07-15 16:10:19.804088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420
00:28:51.185 qpair failed and we were unable to recover it.
00:28:51.185 [2024-07-15 16:10:19.804239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.185 [2024-07-15 16:10:19.804256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420
00:28:51.185 qpair failed and we were unable to recover it.
00:28:51.185 [2024-07-15 16:10:19.804372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.185 [2024-07-15 16:10:19.804387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420
00:28:51.185 qpair failed and we were unable to recover it.
00:28:51.185 [2024-07-15 16:10:19.804618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.185 [2024-07-15 16:10:19.804634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420
00:28:51.185 qpair failed and we were unable to recover it.
00:28:51.185 [2024-07-15 16:10:19.804777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.185 [2024-07-15 16:10:19.804791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420
00:28:51.185 qpair failed and we were unable to recover it.
00:28:51.185 [2024-07-15 16:10:19.804892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.185 [2024-07-15 16:10:19.804907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420
00:28:51.185 qpair failed and we were unable to recover it.
00:28:51.185 [2024-07-15 16:10:19.805020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.185 [2024-07-15 16:10:19.805036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420
00:28:51.185 qpair failed and we were unable to recover it.
00:28:51.185 [2024-07-15 16:10:19.805266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.185 [2024-07-15 16:10:19.805300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420
00:28:51.185 qpair failed and we were unable to recover it.
00:28:51.185 [2024-07-15 16:10:19.805585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.185 [2024-07-15 16:10:19.805616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420
00:28:51.185 qpair failed and we were unable to recover it.
00:28:51.185 [2024-07-15 16:10:19.805777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.185 [2024-07-15 16:10:19.805807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420
00:28:51.185 qpair failed and we were unable to recover it.
00:28:51.185 [2024-07-15 16:10:19.806021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.185 [2024-07-15 16:10:19.806052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420
00:28:51.185 qpair failed and we were unable to recover it.
00:28:51.185 [2024-07-15 16:10:19.806268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.185 [2024-07-15 16:10:19.806284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420
00:28:51.185 qpair failed and we were unable to recover it.
00:28:51.185 [2024-07-15 16:10:19.806459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.185 [2024-07-15 16:10:19.806473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420
00:28:51.185 qpair failed and we were unable to recover it.
00:28:51.185 [2024-07-15 16:10:19.806668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.185 [2024-07-15 16:10:19.806699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420
00:28:51.185 qpair failed and we were unable to recover it.
00:28:51.185 [2024-07-15 16:10:19.806807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.185 [2024-07-15 16:10:19.806838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420
00:28:51.185 qpair failed and we were unable to recover it.
00:28:51.185 [2024-07-15 16:10:19.807069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.185 [2024-07-15 16:10:19.807101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420
00:28:51.185 qpair failed and we were unable to recover it.
00:28:51.185 [2024-07-15 16:10:19.807335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.185 [2024-07-15 16:10:19.807367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420
00:28:51.185 qpair failed and we were unable to recover it.
00:28:51.185 [2024-07-15 16:10:19.807514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.185 [2024-07-15 16:10:19.807544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420
00:28:51.185 qpair failed and we were unable to recover it.
00:28:51.185 [2024-07-15 16:10:19.807743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.185 [2024-07-15 16:10:19.807773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420
00:28:51.185 qpair failed and we were unable to recover it.
00:28:51.185 [2024-07-15 16:10:19.807907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.185 [2024-07-15 16:10:19.807937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420
00:28:51.185 qpair failed and we were unable to recover it.
00:28:51.185 [2024-07-15 16:10:19.808201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.185 [2024-07-15 16:10:19.808254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420
00:28:51.185 qpair failed and we were unable to recover it.
00:28:51.185 [2024-07-15 16:10:19.808371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.185 [2024-07-15 16:10:19.808403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420
00:28:51.185 qpair failed and we were unable to recover it.
00:28:51.185 [2024-07-15 16:10:19.808551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.185 [2024-07-15 16:10:19.808581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420
00:28:51.185 qpair failed and we were unable to recover it.
00:28:51.185 [2024-07-15 16:10:19.808820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.185 [2024-07-15 16:10:19.808850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420
00:28:51.185 qpair failed and we were unable to recover it.
00:28:51.185 [2024-07-15 16:10:19.808996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.185 [2024-07-15 16:10:19.809027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420
00:28:51.186 qpair failed and we were unable to recover it.
00:28:51.186 [2024-07-15 16:10:19.809239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.186 [2024-07-15 16:10:19.809272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420
00:28:51.186 qpair failed and we were unable to recover it.
00:28:51.186 [2024-07-15 16:10:19.809480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.186 [2024-07-15 16:10:19.809511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420
00:28:51.186 qpair failed and we were unable to recover it.
00:28:51.186 [2024-07-15 16:10:19.809777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.186 [2024-07-15 16:10:19.809808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420
00:28:51.186 qpair failed and we were unable to recover it.
00:28:51.186 [2024-07-15 16:10:19.809946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.186 [2024-07-15 16:10:19.809960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420
00:28:51.186 qpair failed and we were unable to recover it.
00:28:51.186 [2024-07-15 16:10:19.810078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.186 [2024-07-15 16:10:19.810092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420
00:28:51.186 qpair failed and we were unable to recover it.
00:28:51.186 [2024-07-15 16:10:19.810360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.186 [2024-07-15 16:10:19.810393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420
00:28:51.186 qpair failed and we were unable to recover it.
00:28:51.186 [2024-07-15 16:10:19.810597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.186 [2024-07-15 16:10:19.810628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420
00:28:51.186 qpair failed and we were unable to recover it.
00:28:51.186 [2024-07-15 16:10:19.810794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.186 [2024-07-15 16:10:19.810824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420
00:28:51.186 qpair failed and we were unable to recover it.
00:28:51.186 [2024-07-15 16:10:19.811055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.186 [2024-07-15 16:10:19.811070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420
00:28:51.186 qpair failed and we were unable to recover it.
00:28:51.186 [2024-07-15 16:10:19.811243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.186 [2024-07-15 16:10:19.811275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420
00:28:51.186 qpair failed and we were unable to recover it.
00:28:51.186 [2024-07-15 16:10:19.811476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.186 [2024-07-15 16:10:19.811506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420
00:28:51.186 qpair failed and we were unable to recover it.
00:28:51.186 [2024-07-15 16:10:19.811718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.186 [2024-07-15 16:10:19.811748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420
00:28:51.186 qpair failed and we were unable to recover it.
00:28:51.186 [2024-07-15 16:10:19.811878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.186 [2024-07-15 16:10:19.811893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420
00:28:51.186 qpair failed and we were unable to recover it.
00:28:51.186 [2024-07-15 16:10:19.812091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.186 [2024-07-15 16:10:19.812122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420
00:28:51.186 qpair failed and we were unable to recover it.
00:28:51.186 [2024-07-15 16:10:19.812332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.186 [2024-07-15 16:10:19.812363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420
00:28:51.186 qpair failed and we were unable to recover it.
00:28:51.186 [2024-07-15 16:10:19.812513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.186 [2024-07-15 16:10:19.812542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420
00:28:51.186 qpair failed and we were unable to recover it.
00:28:51.186 [2024-07-15 16:10:19.812830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.186 [2024-07-15 16:10:19.812861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420
00:28:51.186 qpair failed and we were unable to recover it.
00:28:51.186 [2024-07-15 16:10:19.813000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.186 [2024-07-15 16:10:19.813030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420
00:28:51.186 qpair failed and we were unable to recover it.
00:28:51.186 [2024-07-15 16:10:19.813197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.186 [2024-07-15 16:10:19.813244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420
00:28:51.186 qpair failed and we were unable to recover it.
00:28:51.186 [2024-07-15 16:10:19.813410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.186 [2024-07-15 16:10:19.813426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420
00:28:51.186 qpair failed and we were unable to recover it.
00:28:51.186 [2024-07-15 16:10:19.813583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.186 [2024-07-15 16:10:19.813614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420
00:28:51.186 qpair failed and we were unable to recover it.
00:28:51.186 [2024-07-15 16:10:19.813824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.186 [2024-07-15 16:10:19.813855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420
00:28:51.186 qpair failed and we were unable to recover it.
00:28:51.186 [2024-07-15 16:10:19.814061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.186 [2024-07-15 16:10:19.814093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420
00:28:51.186 qpair failed and we were unable to recover it.
00:28:51.186 [2024-07-15 16:10:19.814303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.186 [2024-07-15 16:10:19.814335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420
00:28:51.186 qpair failed and we were unable to recover it.
00:28:51.186 [2024-07-15 16:10:19.814480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.186 [2024-07-15 16:10:19.814510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420
00:28:51.186 qpair failed and we were unable to recover it.
00:28:51.186 [2024-07-15 16:10:19.814816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.186 [2024-07-15 16:10:19.814847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420
00:28:51.186 qpair failed and we were unable to recover it.
00:28:51.186 [2024-07-15 16:10:19.815002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.186 [2024-07-15 16:10:19.815017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420
00:28:51.186 qpair failed and we were unable to recover it.
00:28:51.186 [2024-07-15 16:10:19.815192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.186 [2024-07-15 16:10:19.815222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420
00:28:51.186 qpair failed and we were unable to recover it.
00:28:51.186 [2024-07-15 16:10:19.815509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.186 [2024-07-15 16:10:19.815539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420 00:28:51.186 qpair failed and we were unable to recover it. 00:28:51.186 [2024-07-15 16:10:19.815808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.186 [2024-07-15 16:10:19.815840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420 00:28:51.186 qpair failed and we were unable to recover it. 00:28:51.186 [2024-07-15 16:10:19.815994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.186 [2024-07-15 16:10:19.816025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420 00:28:51.186 qpair failed and we were unable to recover it. 00:28:51.186 [2024-07-15 16:10:19.816190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.186 [2024-07-15 16:10:19.816204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420 00:28:51.186 qpair failed and we were unable to recover it. 00:28:51.186 [2024-07-15 16:10:19.816382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.186 [2024-07-15 16:10:19.816415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420 00:28:51.186 qpair failed and we were unable to recover it. 00:28:51.186 [2024-07-15 16:10:19.816614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.186 [2024-07-15 16:10:19.816644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420 00:28:51.186 qpair failed and we were unable to recover it. 00:28:51.186 [2024-07-15 16:10:19.816874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.186 [2024-07-15 16:10:19.816904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420 00:28:51.186 qpair failed and we were unable to recover it. 00:28:51.186 [2024-07-15 16:10:19.817159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.186 [2024-07-15 16:10:19.817177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420 00:28:51.186 qpair failed and we were unable to recover it. 00:28:51.186 [2024-07-15 16:10:19.817346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.186 [2024-07-15 16:10:19.817362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420 00:28:51.186 qpair failed and we were unable to recover it. 00:28:51.186 [2024-07-15 16:10:19.817565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.186 [2024-07-15 16:10:19.817596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420 00:28:51.186 qpair failed and we were unable to recover it. 
00:28:51.186 [2024-07-15 16:10:19.817815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.186 [2024-07-15 16:10:19.817845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420 00:28:51.186 qpair failed and we were unable to recover it. 00:28:51.186 [2024-07-15 16:10:19.818001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.186 [2024-07-15 16:10:19.818031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420 00:28:51.186 qpair failed and we were unable to recover it. 00:28:51.186 [2024-07-15 16:10:19.818210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.187 [2024-07-15 16:10:19.818229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420 00:28:51.187 qpair failed and we were unable to recover it. 00:28:51.187 [2024-07-15 16:10:19.818421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.187 [2024-07-15 16:10:19.818436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420 00:28:51.187 qpair failed and we were unable to recover it. 00:28:51.187 [2024-07-15 16:10:19.818692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.187 [2024-07-15 16:10:19.818707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420 00:28:51.187 qpair failed and we were unable to recover it. 00:28:51.187 [2024-07-15 16:10:19.818902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.187 [2024-07-15 16:10:19.818934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420 00:28:51.187 qpair failed and we were unable to recover it. 00:28:51.187 [2024-07-15 16:10:19.819213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.187 [2024-07-15 16:10:19.819252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420 00:28:51.187 qpair failed and we were unable to recover it. 00:28:51.187 [2024-07-15 16:10:19.819537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.187 [2024-07-15 16:10:19.819569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420 00:28:51.187 qpair failed and we were unable to recover it. 00:28:51.187 [2024-07-15 16:10:19.819783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.187 [2024-07-15 16:10:19.819814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420 00:28:51.187 qpair failed and we were unable to recover it. 00:28:51.187 [2024-07-15 16:10:19.820004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.187 [2024-07-15 16:10:19.820035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420 00:28:51.187 qpair failed and we were unable to recover it. 
00:28:51.187 [2024-07-15 16:10:19.820263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.187 [2024-07-15 16:10:19.820279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420 00:28:51.187 qpair failed and we were unable to recover it. 00:28:51.187 [2024-07-15 16:10:19.820451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.187 [2024-07-15 16:10:19.820467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420 00:28:51.187 qpair failed and we were unable to recover it. 00:28:51.187 [2024-07-15 16:10:19.820612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.187 [2024-07-15 16:10:19.820642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420 00:28:51.187 qpair failed and we were unable to recover it. 00:28:51.187 [2024-07-15 16:10:19.820797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.187 [2024-07-15 16:10:19.820828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420 00:28:51.187 qpair failed and we were unable to recover it. 00:28:51.187 [2024-07-15 16:10:19.821033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.187 [2024-07-15 16:10:19.821064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420 00:28:51.187 qpair failed and we were unable to recover it. 00:28:51.187 [2024-07-15 16:10:19.821276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.187 [2024-07-15 16:10:19.821292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420 00:28:51.187 qpair failed and we were unable to recover it. 00:28:51.187 [2024-07-15 16:10:19.821496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.187 [2024-07-15 16:10:19.821527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420 00:28:51.187 qpair failed and we were unable to recover it. 00:28:51.187 [2024-07-15 16:10:19.821641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.187 [2024-07-15 16:10:19.821671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420 00:28:51.187 qpair failed and we were unable to recover it. 00:28:51.187 [2024-07-15 16:10:19.821893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.187 [2024-07-15 16:10:19.821924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420 00:28:51.187 qpair failed and we were unable to recover it. 00:28:51.187 [2024-07-15 16:10:19.822130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.187 [2024-07-15 16:10:19.822146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420 00:28:51.187 qpair failed and we were unable to recover it. 
00:28:51.187 [2024-07-15 16:10:19.822344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.187 [2024-07-15 16:10:19.822360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420 00:28:51.187 qpair failed and we were unable to recover it. 00:28:51.187 [2024-07-15 16:10:19.822469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.187 [2024-07-15 16:10:19.822484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420 00:28:51.187 qpair failed and we were unable to recover it. 00:28:51.187 [2024-07-15 16:10:19.822615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.187 [2024-07-15 16:10:19.822651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420 00:28:51.187 qpair failed and we were unable to recover it. 00:28:51.187 [2024-07-15 16:10:19.822804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.187 [2024-07-15 16:10:19.822834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420 00:28:51.187 qpair failed and we were unable to recover it. 00:28:51.187 [2024-07-15 16:10:19.822949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.187 [2024-07-15 16:10:19.822979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420 00:28:51.187 qpair failed and we were unable to recover it. 00:28:51.187 [2024-07-15 16:10:19.823177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.187 [2024-07-15 16:10:19.823208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420 00:28:51.187 qpair failed and we were unable to recover it. 00:28:51.187 [2024-07-15 16:10:19.823390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.187 [2024-07-15 16:10:19.823422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420 00:28:51.187 qpair failed and we were unable to recover it. 00:28:51.187 [2024-07-15 16:10:19.823687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.187 [2024-07-15 16:10:19.823718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420 00:28:51.187 qpair failed and we were unable to recover it. 00:28:51.187 [2024-07-15 16:10:19.823921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.187 [2024-07-15 16:10:19.823952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420 00:28:51.187 qpair failed and we were unable to recover it. 00:28:51.187 [2024-07-15 16:10:19.824184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.187 [2024-07-15 16:10:19.824216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420 00:28:51.187 qpair failed and we were unable to recover it. 
00:28:51.187 [2024-07-15 16:10:19.824501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.187 [2024-07-15 16:10:19.824517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420 00:28:51.187 qpair failed and we were unable to recover it. 00:28:51.187 [2024-07-15 16:10:19.824680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.187 [2024-07-15 16:10:19.824695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420 00:28:51.187 qpair failed and we were unable to recover it. 00:28:51.187 [2024-07-15 16:10:19.824826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.187 [2024-07-15 16:10:19.824841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420 00:28:51.187 qpair failed and we were unable to recover it. 00:28:51.187 [2024-07-15 16:10:19.825005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.187 [2024-07-15 16:10:19.825019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420 00:28:51.187 qpair failed and we were unable to recover it. 00:28:51.187 [2024-07-15 16:10:19.825300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.187 [2024-07-15 16:10:19.825333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420 00:28:51.187 qpair failed and we were unable to recover it. 00:28:51.187 [2024-07-15 16:10:19.825574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.187 [2024-07-15 16:10:19.825605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420 00:28:51.188 qpair failed and we were unable to recover it. 00:28:51.188 [2024-07-15 16:10:19.825809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.188 [2024-07-15 16:10:19.825839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420 00:28:51.188 qpair failed and we were unable to recover it. 00:28:51.188 [2024-07-15 16:10:19.826136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.188 [2024-07-15 16:10:19.826156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420 00:28:51.188 qpair failed and we were unable to recover it. 00:28:51.188 [2024-07-15 16:10:19.826267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.188 [2024-07-15 16:10:19.826283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420 00:28:51.188 qpair failed and we were unable to recover it. 00:28:51.188 [2024-07-15 16:10:19.826494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.188 [2024-07-15 16:10:19.826525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420 00:28:51.188 qpair failed and we were unable to recover it. 
00:28:51.188 [2024-07-15 16:10:19.826651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.188 [2024-07-15 16:10:19.826681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420 00:28:51.188 qpair failed and we were unable to recover it. 00:28:51.188 [2024-07-15 16:10:19.826891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.188 [2024-07-15 16:10:19.826921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420 00:28:51.188 qpair failed and we were unable to recover it. 00:28:51.188 [2024-07-15 16:10:19.827124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.188 [2024-07-15 16:10:19.827139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420 00:28:51.188 qpair failed and we were unable to recover it. 00:28:51.188 [2024-07-15 16:10:19.827335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.188 [2024-07-15 16:10:19.827352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420 00:28:51.188 qpair failed and we were unable to recover it. 00:28:51.188 [2024-07-15 16:10:19.827530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.188 [2024-07-15 16:10:19.827562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420 00:28:51.188 qpair failed and we were unable to recover it. 00:28:51.188 [2024-07-15 16:10:19.827770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.188 [2024-07-15 16:10:19.827801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420 00:28:51.188 qpair failed and we were unable to recover it. 00:28:51.188 [2024-07-15 16:10:19.828013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.188 [2024-07-15 16:10:19.828044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420 00:28:51.188 qpair failed and we were unable to recover it. 00:28:51.188 [2024-07-15 16:10:19.828247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.188 [2024-07-15 16:10:19.828263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420 00:28:51.188 qpair failed and we were unable to recover it. 00:28:51.188 [2024-07-15 16:10:19.828437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.188 [2024-07-15 16:10:19.828452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420 00:28:51.188 qpair failed and we were unable to recover it. 00:28:51.188 [2024-07-15 16:10:19.828565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.188 [2024-07-15 16:10:19.828596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420 00:28:51.188 qpair failed and we were unable to recover it. 
00:28:51.188 [2024-07-15 16:10:19.828795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.188 [2024-07-15 16:10:19.828826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420 00:28:51.188 qpair failed and we were unable to recover it. 00:28:51.188 [2024-07-15 16:10:19.829041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.188 [2024-07-15 16:10:19.829073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420 00:28:51.188 qpair failed and we were unable to recover it. 00:28:51.188 [2024-07-15 16:10:19.829205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.188 [2024-07-15 16:10:19.829220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420 00:28:51.188 qpair failed and we were unable to recover it. 00:28:51.188 [2024-07-15 16:10:19.829411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.188 [2024-07-15 16:10:19.829443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420 00:28:51.188 qpair failed and we were unable to recover it. 00:28:51.188 [2024-07-15 16:10:19.829653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.188 [2024-07-15 16:10:19.829685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420 00:28:51.188 qpair failed and we were unable to recover it. 00:28:51.188 [2024-07-15 16:10:19.829918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.188 [2024-07-15 16:10:19.829949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420 00:28:51.188 qpair failed and we were unable to recover it. 00:28:51.188 [2024-07-15 16:10:19.830103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.188 [2024-07-15 16:10:19.830118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420 00:28:51.188 qpair failed and we were unable to recover it. 00:28:51.188 [2024-07-15 16:10:19.830236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.188 [2024-07-15 16:10:19.830251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420 00:28:51.188 qpair failed and we were unable to recover it. 00:28:51.188 [2024-07-15 16:10:19.830430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.188 [2024-07-15 16:10:19.830446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420 00:28:51.188 qpair failed and we were unable to recover it. 00:28:51.188 [2024-07-15 16:10:19.830701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.188 [2024-07-15 16:10:19.830716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420 00:28:51.188 qpair failed and we were unable to recover it. 
00:28:51.188 [2024-07-15 16:10:19.830834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.188 [2024-07-15 16:10:19.830872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420 00:28:51.188 qpair failed and we were unable to recover it. 00:28:51.188 [2024-07-15 16:10:19.831071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.188 [2024-07-15 16:10:19.831102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420 00:28:51.188 qpair failed and we were unable to recover it. 00:28:51.188 [2024-07-15 16:10:19.831313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.188 [2024-07-15 16:10:19.831328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420 00:28:51.188 qpair failed and we were unable to recover it. 00:28:51.188 [2024-07-15 16:10:19.831506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.188 [2024-07-15 16:10:19.831536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420 00:28:51.188 qpair failed and we were unable to recover it. 00:28:51.188 [2024-07-15 16:10:19.831831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.188 [2024-07-15 16:10:19.831863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420 00:28:51.188 qpair failed and we were unable to recover it. 00:28:51.188 [2024-07-15 16:10:19.832034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.188 [2024-07-15 16:10:19.832049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420 00:28:51.188 qpair failed and we were unable to recover it. 00:28:51.188 [2024-07-15 16:10:19.832239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.188 [2024-07-15 16:10:19.832255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420 00:28:51.188 qpair failed and we were unable to recover it. 00:28:51.188 [2024-07-15 16:10:19.832425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.188 [2024-07-15 16:10:19.832440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420 00:28:51.188 qpair failed and we were unable to recover it. 00:28:51.188 [2024-07-15 16:10:19.832652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.188 [2024-07-15 16:10:19.832682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420 00:28:51.188 qpair failed and we were unable to recover it. 00:28:51.188 [2024-07-15 16:10:19.832825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.188 [2024-07-15 16:10:19.832867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420 00:28:51.188 qpair failed and we were unable to recover it. 
00:28:51.188 [2024-07-15 16:10:19.833069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.188 [2024-07-15 16:10:19.833085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420 00:28:51.188 qpair failed and we were unable to recover it. 00:28:51.188 [2024-07-15 16:10:19.833257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.188 [2024-07-15 16:10:19.833273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420 00:28:51.188 qpair failed and we were unable to recover it. 00:28:51.188 [2024-07-15 16:10:19.833464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.188 [2024-07-15 16:10:19.833496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420 00:28:51.188 qpair failed and we were unable to recover it. 00:28:51.188 [2024-07-15 16:10:19.833695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.188 [2024-07-15 16:10:19.833724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420 00:28:51.188 qpair failed and we were unable to recover it. 00:28:51.188 [2024-07-15 16:10:19.833865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.188 [2024-07-15 16:10:19.833895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420 00:28:51.188 qpair failed and we were unable to recover it. 00:28:51.189 [2024-07-15 16:10:19.834097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.189 [2024-07-15 16:10:19.834113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420 00:28:51.189 qpair failed and we were unable to recover it. 00:28:51.189 [2024-07-15 16:10:19.834285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.189 [2024-07-15 16:10:19.834316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420 00:28:51.189 qpair failed and we were unable to recover it. 00:28:51.189 [2024-07-15 16:10:19.834481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.189 [2024-07-15 16:10:19.834517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420 00:28:51.189 qpair failed and we were unable to recover it. 00:28:51.189 [2024-07-15 16:10:19.834781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.189 [2024-07-15 16:10:19.834812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420 00:28:51.189 qpair failed and we were unable to recover it. 00:28:51.189 [2024-07-15 16:10:19.835014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.189 [2024-07-15 16:10:19.835044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420 00:28:51.189 qpair failed and we were unable to recover it. 
00:28:51.189 [2024-07-15 16:10:19.835268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.189 [2024-07-15 16:10:19.835301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420 00:28:51.189 qpair failed and we were unable to recover it. 00:28:51.189 [2024-07-15 16:10:19.835505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.189 [2024-07-15 16:10:19.835520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420 00:28:51.189 qpair failed and we were unable to recover it. 00:28:51.189 [2024-07-15 16:10:19.835775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.189 [2024-07-15 16:10:19.835790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420 00:28:51.189 qpair failed and we were unable to recover it. 00:28:51.189 [2024-07-15 16:10:19.835983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.189 [2024-07-15 16:10:19.835998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420 00:28:51.189 qpair failed and we were unable to recover it. 00:28:51.189 [2024-07-15 16:10:19.836186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.189 [2024-07-15 16:10:19.836217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420 00:28:51.189 qpair failed and we were unable to recover it. 00:28:51.189 [2024-07-15 16:10:19.836379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.189 [2024-07-15 16:10:19.836411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420 00:28:51.189 qpair failed and we were unable to recover it. 00:28:51.189 [2024-07-15 16:10:19.836654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.189 [2024-07-15 16:10:19.836686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420 00:28:51.189 qpair failed and we were unable to recover it. 00:28:51.189 [2024-07-15 16:10:19.836831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.189 [2024-07-15 16:10:19.836862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420 00:28:51.189 qpair failed and we were unable to recover it. 00:28:51.189 [2024-07-15 16:10:19.837000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.189 [2024-07-15 16:10:19.837030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420 00:28:51.189 qpair failed and we were unable to recover it. 00:28:51.189 [2024-07-15 16:10:19.837171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.189 [2024-07-15 16:10:19.837185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420 00:28:51.189 qpair failed and we were unable to recover it. 
00:28:51.189 [2024-07-15 16:10:19.837307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.189 [2024-07-15 16:10:19.837323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420 00:28:51.189 qpair failed and we were unable to recover it. 00:28:51.189 [2024-07-15 16:10:19.837525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.189 [2024-07-15 16:10:19.837540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420 00:28:51.189 qpair failed and we were unable to recover it. 00:28:51.189 [2024-07-15 16:10:19.837719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.189 [2024-07-15 16:10:19.837735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420 00:28:51.189 qpair failed and we were unable to recover it. 00:28:51.189 [2024-07-15 16:10:19.837829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.189 [2024-07-15 16:10:19.837843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420 00:28:51.189 qpair failed and we were unable to recover it. 00:28:51.189 [2024-07-15 16:10:19.838124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.189 [2024-07-15 16:10:19.838141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420 00:28:51.189 qpair failed and we were unable to recover it. 00:28:51.189 [2024-07-15 16:10:19.838317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.189 [2024-07-15 16:10:19.838332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420 00:28:51.189 qpair failed and we were unable to recover it. 00:28:51.189 [2024-07-15 16:10:19.838520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.189 [2024-07-15 16:10:19.838549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.189 qpair failed and we were unable to recover it. 00:28:51.189 [2024-07-15 16:10:19.838680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.189 [2024-07-15 16:10:19.838716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.189 qpair failed and we were unable to recover it. 00:28:51.189 [2024-07-15 16:10:19.838981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.189 [2024-07-15 16:10:19.839013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.189 qpair failed and we were unable to recover it. 00:28:51.189 [2024-07-15 16:10:19.839261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.189 [2024-07-15 16:10:19.839294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.189 qpair failed and we were unable to recover it. 
00:28:51.189 [2024-07-15 16:10:19.839452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.189 [2024-07-15 16:10:19.839484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.189 qpair failed and we were unable to recover it. 00:28:51.189 [2024-07-15 16:10:19.839684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.189 [2024-07-15 16:10:19.839715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.189 qpair failed and we were unable to recover it. 00:28:51.189 [2024-07-15 16:10:19.839922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.189 [2024-07-15 16:10:19.839953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.189 qpair failed and we were unable to recover it. 00:28:51.189 [2024-07-15 16:10:19.840197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.189 [2024-07-15 16:10:19.840237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.189 qpair failed and we were unable to recover it. 00:28:51.189 [2024-07-15 16:10:19.840510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.189 [2024-07-15 16:10:19.840543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.189 qpair failed and we were unable to recover it. 00:28:51.189 [2024-07-15 16:10:19.840696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.189 [2024-07-15 16:10:19.840727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.189 qpair failed and we were unable to recover it. 00:28:51.189 [2024-07-15 16:10:19.840959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.189 [2024-07-15 16:10:19.840991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.189 qpair failed and we were unable to recover it. 00:28:51.189 [2024-07-15 16:10:19.841194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.189 [2024-07-15 16:10:19.841206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.189 qpair failed and we were unable to recover it. 00:28:51.189 [2024-07-15 16:10:19.841371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.189 [2024-07-15 16:10:19.841383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.189 qpair failed and we were unable to recover it. 00:28:51.189 [2024-07-15 16:10:19.841559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.189 [2024-07-15 16:10:19.841571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.189 qpair failed and we were unable to recover it. 
00:28:51.189 [2024-07-15 16:10:19.841727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.189 [2024-07-15 16:10:19.841758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.189 qpair failed and we were unable to recover it. 00:28:51.189 [2024-07-15 16:10:19.841987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.189 [2024-07-15 16:10:19.842017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.189 qpair failed and we were unable to recover it. 00:28:51.189 [2024-07-15 16:10:19.842161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.189 [2024-07-15 16:10:19.842193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.189 qpair failed and we were unable to recover it. 00:28:51.189 [2024-07-15 16:10:19.842490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.189 [2024-07-15 16:10:19.842521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.189 qpair failed and we were unable to recover it. 00:28:51.189 [2024-07-15 16:10:19.842666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.189 [2024-07-15 16:10:19.842697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.189 qpair failed and we were unable to recover it. 00:28:51.189 [2024-07-15 16:10:19.842930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.189 [2024-07-15 16:10:19.842961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.190 qpair failed and we were unable to recover it. 00:28:51.190 [2024-07-15 16:10:19.843174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.190 [2024-07-15 16:10:19.843205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.190 qpair failed and we were unable to recover it. 00:28:51.190 [2024-07-15 16:10:19.843315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.190 [2024-07-15 16:10:19.843332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.190 qpair failed and we were unable to recover it. 00:28:51.190 [2024-07-15 16:10:19.843490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.190 [2024-07-15 16:10:19.843502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.190 qpair failed and we were unable to recover it. 00:28:51.190 [2024-07-15 16:10:19.843662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.190 [2024-07-15 16:10:19.843674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.190 qpair failed and we were unable to recover it. 
00:28:51.190 [2024-07-15 16:10:19.843842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.190 [2024-07-15 16:10:19.843854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.190 qpair failed and we were unable to recover it. 00:28:51.190 [2024-07-15 16:10:19.843976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.190 [2024-07-15 16:10:19.843987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.190 qpair failed and we were unable to recover it. 00:28:51.190 [2024-07-15 16:10:19.844083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.190 [2024-07-15 16:10:19.844095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.190 qpair failed and we were unable to recover it. 00:28:51.190 [2024-07-15 16:10:19.844271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.190 [2024-07-15 16:10:19.844285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.190 qpair failed and we were unable to recover it. 00:28:51.190 [2024-07-15 16:10:19.845047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.190 [2024-07-15 16:10:19.845088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.190 qpair failed and we were unable to recover it. 00:28:51.190 [2024-07-15 16:10:19.845401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.190 [2024-07-15 16:10:19.845414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.190 qpair failed and we were unable to recover it. 00:28:51.190 [2024-07-15 16:10:19.845566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.190 [2024-07-15 16:10:19.845579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.190 qpair failed and we were unable to recover it. 00:28:51.190 [2024-07-15 16:10:19.845677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.190 [2024-07-15 16:10:19.845688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.190 qpair failed and we were unable to recover it. 00:28:51.190 [2024-07-15 16:10:19.845863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.190 [2024-07-15 16:10:19.845875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.190 qpair failed and we were unable to recover it. 00:28:51.190 [2024-07-15 16:10:19.846039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.190 [2024-07-15 16:10:19.846052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.190 qpair failed and we were unable to recover it. 
00:28:51.190 [2024-07-15 16:10:19.846276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.190 [2024-07-15 16:10:19.846289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.190 qpair failed and we were unable to recover it. 00:28:51.190 [2024-07-15 16:10:19.846396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.190 [2024-07-15 16:10:19.846409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.190 qpair failed and we were unable to recover it. 00:28:51.190 [2024-07-15 16:10:19.846519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.190 [2024-07-15 16:10:19.846532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.190 qpair failed and we were unable to recover it. 00:28:51.190 [2024-07-15 16:10:19.846642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.190 [2024-07-15 16:10:19.846655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.190 qpair failed and we were unable to recover it. 00:28:51.190 [2024-07-15 16:10:19.846824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.190 [2024-07-15 16:10:19.846836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.190 qpair failed and we were unable to recover it. 00:28:51.190 [2024-07-15 16:10:19.847008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.190 [2024-07-15 16:10:19.847020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.190 qpair failed and we were unable to recover it. 00:28:51.190 [2024-07-15 16:10:19.847094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.190 [2024-07-15 16:10:19.847105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.190 qpair failed and we were unable to recover it. 00:28:51.190 [2024-07-15 16:10:19.847208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.190 [2024-07-15 16:10:19.847219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.190 qpair failed and we were unable to recover it. 00:28:51.190 [2024-07-15 16:10:19.847377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.190 [2024-07-15 16:10:19.847389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.190 qpair failed and we were unable to recover it. 00:28:51.190 [2024-07-15 16:10:19.847457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.190 [2024-07-15 16:10:19.847468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.190 qpair failed and we were unable to recover it. 
00:28:51.190 [2024-07-15 16:10:19.847684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.190 [2024-07-15 16:10:19.847696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420
00:28:51.190 qpair failed and we were unable to recover it.
00:28:51.190 [... the same three-message sequence (posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 / nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error / qpair failed and we were unable to recover it.) repeats continuously from 16:10:19.847862 through 16:10:19.890411 (elapsed 00:28:51.190-00:28:51.195), cycling over tqpair=0x7ffa54000b90, 0x7ffa5c000b90 and 0x7ffa4c000b90, always with addr=10.0.0.2, port=4420 ...]
00:28:51.195 [2024-07-15 16:10:19.890579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.195 [2024-07-15 16:10:19.890610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.195 qpair failed and we were unable to recover it. 00:28:51.195 [2024-07-15 16:10:19.890812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.195 [2024-07-15 16:10:19.890841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.195 qpair failed and we were unable to recover it. 00:28:51.195 [2024-07-15 16:10:19.891052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.195 [2024-07-15 16:10:19.891064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.195 qpair failed and we were unable to recover it. 00:28:51.195 [2024-07-15 16:10:19.891253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.195 [2024-07-15 16:10:19.891265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.195 qpair failed and we were unable to recover it. 00:28:51.195 [2024-07-15 16:10:19.891457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.196 [2024-07-15 16:10:19.891489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.196 qpair failed and we were unable to recover it. 00:28:51.196 [2024-07-15 16:10:19.891685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.196 [2024-07-15 16:10:19.891715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.196 qpair failed and we were unable to recover it. 00:28:51.196 [2024-07-15 16:10:19.891873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.196 [2024-07-15 16:10:19.891902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.196 qpair failed and we were unable to recover it. 00:28:51.196 [2024-07-15 16:10:19.892034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.196 [2024-07-15 16:10:19.892062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.196 qpair failed and we were unable to recover it. 00:28:51.196 [2024-07-15 16:10:19.892265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.196 [2024-07-15 16:10:19.892297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.196 qpair failed and we were unable to recover it. 00:28:51.196 [2024-07-15 16:10:19.892520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.196 [2024-07-15 16:10:19.892550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.196 qpair failed and we were unable to recover it. 
00:28:51.196 [2024-07-15 16:10:19.892751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.196 [2024-07-15 16:10:19.892782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.196 qpair failed and we were unable to recover it. 00:28:51.196 [2024-07-15 16:10:19.893071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.196 [2024-07-15 16:10:19.893102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.196 qpair failed and we were unable to recover it. 00:28:51.196 [2024-07-15 16:10:19.893289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.196 [2024-07-15 16:10:19.893302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.196 qpair failed and we were unable to recover it. 00:28:51.196 [2024-07-15 16:10:19.893431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.196 [2024-07-15 16:10:19.893461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.196 qpair failed and we were unable to recover it. 00:28:51.196 [2024-07-15 16:10:19.893611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.196 [2024-07-15 16:10:19.893642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.196 qpair failed and we were unable to recover it. 00:28:51.196 [2024-07-15 16:10:19.893807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.196 [2024-07-15 16:10:19.893838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.196 qpair failed and we were unable to recover it. 00:28:51.196 [2024-07-15 16:10:19.894049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.196 [2024-07-15 16:10:19.894080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.196 qpair failed and we were unable to recover it. 00:28:51.196 [2024-07-15 16:10:19.894250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.196 [2024-07-15 16:10:19.894292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.196 qpair failed and we were unable to recover it. 00:28:51.196 [2024-07-15 16:10:19.894418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.196 [2024-07-15 16:10:19.894449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.196 qpair failed and we were unable to recover it. 00:28:51.196 [2024-07-15 16:10:19.894713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.196 [2024-07-15 16:10:19.894744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.196 qpair failed and we were unable to recover it. 
00:28:51.196 [2024-07-15 16:10:19.894897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.196 [2024-07-15 16:10:19.894927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.196 qpair failed and we were unable to recover it. 00:28:51.196 [2024-07-15 16:10:19.895076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.196 [2024-07-15 16:10:19.895115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.196 qpair failed and we were unable to recover it. 00:28:51.196 [2024-07-15 16:10:19.895321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.196 [2024-07-15 16:10:19.895334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.196 qpair failed and we were unable to recover it. 00:28:51.196 [2024-07-15 16:10:19.895490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.196 [2024-07-15 16:10:19.895502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.196 qpair failed and we were unable to recover it. 00:28:51.196 [2024-07-15 16:10:19.895678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.196 [2024-07-15 16:10:19.895690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.196 qpair failed and we were unable to recover it. 00:28:51.196 [2024-07-15 16:10:19.895962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.196 [2024-07-15 16:10:19.895993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.196 qpair failed and we were unable to recover it. 00:28:51.196 [2024-07-15 16:10:19.896223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.196 [2024-07-15 16:10:19.896238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.196 qpair failed and we were unable to recover it. 00:28:51.196 [2024-07-15 16:10:19.896400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.196 [2024-07-15 16:10:19.896412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.196 qpair failed and we were unable to recover it. 00:28:51.196 [2024-07-15 16:10:19.896585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.196 [2024-07-15 16:10:19.896596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.196 qpair failed and we were unable to recover it. 00:28:51.196 [2024-07-15 16:10:19.896817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.196 [2024-07-15 16:10:19.896828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.196 qpair failed and we were unable to recover it. 
00:28:51.196 [2024-07-15 16:10:19.897002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.196 [2024-07-15 16:10:19.897014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.196 qpair failed and we were unable to recover it. 00:28:51.196 [2024-07-15 16:10:19.897243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.196 [2024-07-15 16:10:19.897276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.196 qpair failed and we were unable to recover it. 00:28:51.196 [2024-07-15 16:10:19.897487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.196 [2024-07-15 16:10:19.897518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.196 qpair failed and we were unable to recover it. 00:28:51.196 [2024-07-15 16:10:19.897651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.196 [2024-07-15 16:10:19.897682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.196 qpair failed and we were unable to recover it. 00:28:51.196 [2024-07-15 16:10:19.897898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.196 [2024-07-15 16:10:19.897933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.196 qpair failed and we were unable to recover it. 00:28:51.196 [2024-07-15 16:10:19.898195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.196 [2024-07-15 16:10:19.898252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.196 qpair failed and we were unable to recover it. 00:28:51.196 [2024-07-15 16:10:19.898389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.196 [2024-07-15 16:10:19.898421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.196 qpair failed and we were unable to recover it. 00:28:51.196 [2024-07-15 16:10:19.898710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.196 [2024-07-15 16:10:19.898740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.196 qpair failed and we were unable to recover it. 00:28:51.196 [2024-07-15 16:10:19.898890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.196 [2024-07-15 16:10:19.898920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.196 qpair failed and we were unable to recover it. 00:28:51.196 [2024-07-15 16:10:19.899119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.196 [2024-07-15 16:10:19.899131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.196 qpair failed and we were unable to recover it. 
00:28:51.197 [2024-07-15 16:10:19.899299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.197 [2024-07-15 16:10:19.899312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.197 qpair failed and we were unable to recover it. 00:28:51.197 [2024-07-15 16:10:19.899508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.197 [2024-07-15 16:10:19.899520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.197 qpair failed and we were unable to recover it. 00:28:51.197 [2024-07-15 16:10:19.899700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.197 [2024-07-15 16:10:19.899712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.197 qpair failed and we were unable to recover it. 00:28:51.197 [2024-07-15 16:10:19.899823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.197 [2024-07-15 16:10:19.899835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.197 qpair failed and we were unable to recover it. 00:28:51.197 [2024-07-15 16:10:19.899947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.197 [2024-07-15 16:10:19.899959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.197 qpair failed and we were unable to recover it. 00:28:51.197 [2024-07-15 16:10:19.900113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.197 [2024-07-15 16:10:19.900126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.197 qpair failed and we were unable to recover it. 00:28:51.197 [2024-07-15 16:10:19.900309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.197 [2024-07-15 16:10:19.900320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.197 qpair failed and we were unable to recover it. 00:28:51.197 [2024-07-15 16:10:19.900485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.197 [2024-07-15 16:10:19.900496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.197 qpair failed and we were unable to recover it. 00:28:51.197 [2024-07-15 16:10:19.900664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.197 [2024-07-15 16:10:19.900695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.197 qpair failed and we were unable to recover it. 00:28:51.197 [2024-07-15 16:10:19.900854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.197 [2024-07-15 16:10:19.900885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.197 qpair failed and we were unable to recover it. 
00:28:51.197 [2024-07-15 16:10:19.901084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.197 [2024-07-15 16:10:19.901127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.197 qpair failed and we were unable to recover it. 00:28:51.197 [2024-07-15 16:10:19.901240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.197 [2024-07-15 16:10:19.901253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.197 qpair failed and we were unable to recover it. 00:28:51.197 [2024-07-15 16:10:19.901416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.197 [2024-07-15 16:10:19.901447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.197 qpair failed and we were unable to recover it. 00:28:51.197 [2024-07-15 16:10:19.901643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.197 [2024-07-15 16:10:19.901674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.197 qpair failed and we were unable to recover it. 00:28:51.197 [2024-07-15 16:10:19.901892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.197 [2024-07-15 16:10:19.901922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.197 qpair failed and we were unable to recover it. 00:28:51.197 [2024-07-15 16:10:19.902061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.197 [2024-07-15 16:10:19.902073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.197 qpair failed and we were unable to recover it. 00:28:51.197 [2024-07-15 16:10:19.902161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.197 [2024-07-15 16:10:19.902171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.197 qpair failed and we were unable to recover it. 00:28:51.197 [2024-07-15 16:10:19.902345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.197 [2024-07-15 16:10:19.902357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.197 qpair failed and we were unable to recover it. 00:28:51.197 [2024-07-15 16:10:19.902547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.197 [2024-07-15 16:10:19.902578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.197 qpair failed and we were unable to recover it. 00:28:51.197 [2024-07-15 16:10:19.902866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.197 [2024-07-15 16:10:19.902896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.197 qpair failed and we were unable to recover it. 
00:28:51.197 [2024-07-15 16:10:19.903044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.197 [2024-07-15 16:10:19.903075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.197 qpair failed and we were unable to recover it. 00:28:51.197 [2024-07-15 16:10:19.903293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.197 [2024-07-15 16:10:19.903325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.197 qpair failed and we were unable to recover it. 00:28:51.197 [2024-07-15 16:10:19.903554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.197 [2024-07-15 16:10:19.903584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.197 qpair failed and we were unable to recover it. 00:28:51.197 [2024-07-15 16:10:19.903742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.197 [2024-07-15 16:10:19.903773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.197 qpair failed and we were unable to recover it. 00:28:51.197 [2024-07-15 16:10:19.903977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.197 [2024-07-15 16:10:19.904008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.197 qpair failed and we were unable to recover it. 00:28:51.197 [2024-07-15 16:10:19.904210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.197 [2024-07-15 16:10:19.904222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.197 qpair failed and we were unable to recover it. 00:28:51.197 [2024-07-15 16:10:19.904344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.197 [2024-07-15 16:10:19.904356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.197 qpair failed and we were unable to recover it. 00:28:51.197 [2024-07-15 16:10:19.904587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.197 [2024-07-15 16:10:19.904617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.197 qpair failed and we were unable to recover it. 00:28:51.197 [2024-07-15 16:10:19.904853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.197 [2024-07-15 16:10:19.904883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.197 qpair failed and we were unable to recover it. 00:28:51.197 [2024-07-15 16:10:19.905026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.197 [2024-07-15 16:10:19.905057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.197 qpair failed and we were unable to recover it. 
00:28:51.197 [2024-07-15 16:10:19.905201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.197 [2024-07-15 16:10:19.905239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.197 qpair failed and we were unable to recover it. 00:28:51.197 [2024-07-15 16:10:19.905445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.197 [2024-07-15 16:10:19.905484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.197 qpair failed and we were unable to recover it. 00:28:51.197 [2024-07-15 16:10:19.905571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.197 [2024-07-15 16:10:19.905583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.197 qpair failed and we were unable to recover it. 00:28:51.197 [2024-07-15 16:10:19.905756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.197 [2024-07-15 16:10:19.905786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.197 qpair failed and we were unable to recover it. 00:28:51.197 [2024-07-15 16:10:19.905992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.197 [2024-07-15 16:10:19.906028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.197 qpair failed and we were unable to recover it. 00:28:51.197 [2024-07-15 16:10:19.906164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.197 [2024-07-15 16:10:19.906195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.197 qpair failed and we were unable to recover it. 00:28:51.197 [2024-07-15 16:10:19.906413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.197 [2024-07-15 16:10:19.906426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.197 qpair failed and we were unable to recover it. 00:28:51.197 [2024-07-15 16:10:19.906648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.197 [2024-07-15 16:10:19.906660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.197 qpair failed and we were unable to recover it. 00:28:51.197 [2024-07-15 16:10:19.906835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.197 [2024-07-15 16:10:19.906847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.197 qpair failed and we were unable to recover it. 00:28:51.197 [2024-07-15 16:10:19.906955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.197 [2024-07-15 16:10:19.906986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.197 qpair failed and we were unable to recover it. 
00:28:51.198 [2024-07-15 16:10:19.907138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.198 [2024-07-15 16:10:19.907169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.198 qpair failed and we were unable to recover it. 00:28:51.198 [2024-07-15 16:10:19.907400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.198 [2024-07-15 16:10:19.907432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.198 qpair failed and we were unable to recover it. 00:28:51.198 [2024-07-15 16:10:19.907613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.198 [2024-07-15 16:10:19.907625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.198 qpair failed and we were unable to recover it. 00:28:51.198 [2024-07-15 16:10:19.907883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.198 [2024-07-15 16:10:19.907913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.198 qpair failed and we were unable to recover it. 00:28:51.198 [2024-07-15 16:10:19.908176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.198 [2024-07-15 16:10:19.908188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.198 qpair failed and we were unable to recover it. 00:28:51.198 [2024-07-15 16:10:19.908347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.198 [2024-07-15 16:10:19.908360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.198 qpair failed and we were unable to recover it. 00:28:51.198 [2024-07-15 16:10:19.908532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.198 [2024-07-15 16:10:19.908543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.198 qpair failed and we were unable to recover it. 00:28:51.198 [2024-07-15 16:10:19.908770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.198 [2024-07-15 16:10:19.908782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.198 qpair failed and we were unable to recover it. 00:28:51.198 [2024-07-15 16:10:19.908952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.198 [2024-07-15 16:10:19.908964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.198 qpair failed and we were unable to recover it. 00:28:51.198 [2024-07-15 16:10:19.909163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.198 [2024-07-15 16:10:19.909194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.198 qpair failed and we were unable to recover it. 
00:28:51.198 [2024-07-15 16:10:19.909482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.198 [2024-07-15 16:10:19.909513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.198 qpair failed and we were unable to recover it. 00:28:51.198 [2024-07-15 16:10:19.909727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.198 [2024-07-15 16:10:19.909757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.198 qpair failed and we were unable to recover it. 00:28:51.198 [2024-07-15 16:10:19.909997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.198 [2024-07-15 16:10:19.910027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.198 qpair failed and we were unable to recover it. 00:28:51.198 [2024-07-15 16:10:19.910227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.198 [2024-07-15 16:10:19.910239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.198 qpair failed and we were unable to recover it. 00:28:51.198 [2024-07-15 16:10:19.910445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.198 [2024-07-15 16:10:19.910457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.198 qpair failed and we were unable to recover it. 00:28:51.198 [2024-07-15 16:10:19.910634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.198 [2024-07-15 16:10:19.910664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.198 qpair failed and we were unable to recover it. 00:28:51.198 [2024-07-15 16:10:19.910882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.198 [2024-07-15 16:10:19.910912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.198 qpair failed and we were unable to recover it. 00:28:51.198 [2024-07-15 16:10:19.911080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.198 [2024-07-15 16:10:19.911118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.198 qpair failed and we were unable to recover it. 00:28:51.198 [2024-07-15 16:10:19.911290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.198 [2024-07-15 16:10:19.911302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.198 qpair failed and we were unable to recover it. 00:28:51.198 [2024-07-15 16:10:19.911472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.198 [2024-07-15 16:10:19.911484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.198 qpair failed and we were unable to recover it. 
00:28:51.198 [2024-07-15 16:10:19.911555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.198 [2024-07-15 16:10:19.911565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.198 qpair failed and we were unable to recover it. 00:28:51.198 [2024-07-15 16:10:19.911721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.198 [2024-07-15 16:10:19.911733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.198 qpair failed and we were unable to recover it. 00:28:51.198 [2024-07-15 16:10:19.911924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.198 [2024-07-15 16:10:19.911955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.198 qpair failed and we were unable to recover it. 00:28:51.198 [2024-07-15 16:10:19.912105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.198 [2024-07-15 16:10:19.912135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.198 qpair failed and we were unable to recover it. 00:28:51.198 [2024-07-15 16:10:19.912451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.198 [2024-07-15 16:10:19.912484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.198 qpair failed and we were unable to recover it. 00:28:51.198 [2024-07-15 16:10:19.912689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.198 [2024-07-15 16:10:19.912700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.198 qpair failed and we were unable to recover it. 00:28:51.198 [2024-07-15 16:10:19.912826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.198 [2024-07-15 16:10:19.912857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.198 qpair failed and we were unable to recover it. 00:28:51.198 [2024-07-15 16:10:19.913065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.198 [2024-07-15 16:10:19.913096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.198 qpair failed and we were unable to recover it. 00:28:51.198 [2024-07-15 16:10:19.913247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.198 [2024-07-15 16:10:19.913277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.198 qpair failed and we were unable to recover it. 00:28:51.198 [2024-07-15 16:10:19.913428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.198 [2024-07-15 16:10:19.913440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.198 qpair failed and we were unable to recover it. 
00:28:51.198 [2024-07-15 16:10:19.913542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.198 [2024-07-15 16:10:19.913554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.198 qpair failed and we were unable to recover it. 00:28:51.198 [2024-07-15 16:10:19.913725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.198 [2024-07-15 16:10:19.913754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.198 qpair failed and we were unable to recover it. 00:28:51.198 [2024-07-15 16:10:19.913957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.198 [2024-07-15 16:10:19.913987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.198 qpair failed and we were unable to recover it. 00:28:51.198 [2024-07-15 16:10:19.914136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.198 [2024-07-15 16:10:19.914167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.198 qpair failed and we were unable to recover it. 00:28:51.198 [2024-07-15 16:10:19.914358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.198 [2024-07-15 16:10:19.914371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.198 qpair failed and we were unable to recover it. 00:28:51.198 [2024-07-15 16:10:19.914528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.198 [2024-07-15 16:10:19.914540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.198 qpair failed and we were unable to recover it. 00:28:51.198 [2024-07-15 16:10:19.914715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.198 [2024-07-15 16:10:19.914727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.198 qpair failed and we were unable to recover it. 00:28:51.198 [2024-07-15 16:10:19.914892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.198 [2024-07-15 16:10:19.914922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.198 qpair failed and we were unable to recover it. 00:28:51.198 [2024-07-15 16:10:19.915211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.198 [2024-07-15 16:10:19.915253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.198 qpair failed and we were unable to recover it. 00:28:51.198 [2024-07-15 16:10:19.915546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.198 [2024-07-15 16:10:19.915577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.198 qpair failed and we were unable to recover it. 
00:28:51.198 [2024-07-15 16:10:19.915835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.198 [2024-07-15 16:10:19.915865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.198 qpair failed and we were unable to recover it. 00:28:51.199 [2024-07-15 16:10:19.916068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.199 [2024-07-15 16:10:19.916099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.199 qpair failed and we were unable to recover it. 00:28:51.199 [2024-07-15 16:10:19.916303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.199 [2024-07-15 16:10:19.916334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.199 qpair failed and we were unable to recover it. 00:28:51.199 [2024-07-15 16:10:19.916537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.199 [2024-07-15 16:10:19.916564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.199 qpair failed and we were unable to recover it. 00:28:51.199 [2024-07-15 16:10:19.916822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.199 [2024-07-15 16:10:19.916853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.199 qpair failed and we were unable to recover it. 00:28:51.199 [2024-07-15 16:10:19.917060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.199 [2024-07-15 16:10:19.917090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.199 qpair failed and we were unable to recover it. 00:28:51.199 [2024-07-15 16:10:19.917363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.199 [2024-07-15 16:10:19.917375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.199 qpair failed and we were unable to recover it. 00:28:51.199 [2024-07-15 16:10:19.917538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.199 [2024-07-15 16:10:19.917570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.199 qpair failed and we were unable to recover it. 00:28:51.199 [2024-07-15 16:10:19.917788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.199 [2024-07-15 16:10:19.917819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.199 qpair failed and we were unable to recover it. 00:28:51.199 [2024-07-15 16:10:19.918020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.199 [2024-07-15 16:10:19.918050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.199 qpair failed and we were unable to recover it. 
00:28:51.199 [2024-07-15 16:10:19.918250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.199 [2024-07-15 16:10:19.918278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420
00:28:51.199 qpair failed and we were unable to recover it.
[... the same three-line failure (connect() errno = 111; sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it) repeats for every reconnect attempt from 16:10:19.918 through 16:10:19.964; only the timestamps differ ...]
00:28:51.204 [2024-07-15 16:10:19.964951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.204 [2024-07-15 16:10:19.964982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420
00:28:51.204 qpair failed and we were unable to recover it.
00:28:51.204 [2024-07-15 16:10:19.965251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.204 [2024-07-15 16:10:19.965281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.204 qpair failed and we were unable to recover it. 00:28:51.204 [2024-07-15 16:10:19.965547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.204 [2024-07-15 16:10:19.965617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:51.204 qpair failed and we were unable to recover it. 00:28:51.204 [2024-07-15 16:10:19.965785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.204 [2024-07-15 16:10:19.965819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:51.204 qpair failed and we were unable to recover it. 00:28:51.204 [2024-07-15 16:10:19.965940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.204 [2024-07-15 16:10:19.965971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:51.204 qpair failed and we were unable to recover it. 00:28:51.204 [2024-07-15 16:10:19.966209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.204 [2024-07-15 16:10:19.966262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:51.204 qpair failed and we were unable to recover it. 00:28:51.204 [2024-07-15 16:10:19.966394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.204 [2024-07-15 16:10:19.966409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:51.204 qpair failed and we were unable to recover it. 00:28:51.204 [2024-07-15 16:10:19.966592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.204 [2024-07-15 16:10:19.966607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:51.204 qpair failed and we were unable to recover it. 00:28:51.204 [2024-07-15 16:10:19.966856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.204 [2024-07-15 16:10:19.966887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:51.204 qpair failed and we were unable to recover it. 00:28:51.204 [2024-07-15 16:10:19.967098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.204 [2024-07-15 16:10:19.967129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:51.204 qpair failed and we were unable to recover it. 00:28:51.205 [2024-07-15 16:10:19.967469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.205 [2024-07-15 16:10:19.967501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:51.205 qpair failed and we were unable to recover it. 
00:28:51.205 [2024-07-15 16:10:19.967790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.205 [2024-07-15 16:10:19.967821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:51.205 qpair failed and we were unable to recover it. 00:28:51.205 [2024-07-15 16:10:19.968054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.205 [2024-07-15 16:10:19.968085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:51.205 qpair failed and we were unable to recover it. 00:28:51.205 [2024-07-15 16:10:19.968235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.205 [2024-07-15 16:10:19.968250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:51.205 qpair failed and we were unable to recover it. 00:28:51.205 [2024-07-15 16:10:19.968422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.205 [2024-07-15 16:10:19.968453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:51.205 qpair failed and we were unable to recover it. 00:28:51.205 [2024-07-15 16:10:19.968654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.205 [2024-07-15 16:10:19.968684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:51.205 qpair failed and we were unable to recover it. 00:28:51.205 [2024-07-15 16:10:19.968911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.205 [2024-07-15 16:10:19.968943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:51.205 qpair failed and we were unable to recover it. 00:28:51.205 [2024-07-15 16:10:19.969083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.205 [2024-07-15 16:10:19.969113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:51.205 qpair failed and we were unable to recover it. 00:28:51.205 [2024-07-15 16:10:19.969338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.205 [2024-07-15 16:10:19.969370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:51.205 qpair failed and we were unable to recover it. 00:28:51.205 [2024-07-15 16:10:19.969497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.205 [2024-07-15 16:10:19.969512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:51.205 qpair failed and we were unable to recover it. 00:28:51.205 [2024-07-15 16:10:19.969746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.205 [2024-07-15 16:10:19.969761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:51.205 qpair failed and we were unable to recover it. 
00:28:51.205 [2024-07-15 16:10:19.969950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.205 [2024-07-15 16:10:19.969965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:51.205 qpair failed and we were unable to recover it. 00:28:51.205 [2024-07-15 16:10:19.970218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.205 [2024-07-15 16:10:19.970237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:51.205 qpair failed and we were unable to recover it. 00:28:51.205 [2024-07-15 16:10:19.970336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.205 [2024-07-15 16:10:19.970351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:51.205 qpair failed and we were unable to recover it. 00:28:51.205 [2024-07-15 16:10:19.970557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.205 [2024-07-15 16:10:19.970588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:51.205 qpair failed and we were unable to recover it. 00:28:51.205 [2024-07-15 16:10:19.970750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.205 [2024-07-15 16:10:19.970781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:51.205 qpair failed and we were unable to recover it. 00:28:51.205 [2024-07-15 16:10:19.971009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.205 [2024-07-15 16:10:19.971040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:51.205 qpair failed and we were unable to recover it. 00:28:51.205 [2024-07-15 16:10:19.971250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.205 [2024-07-15 16:10:19.971282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:51.205 qpair failed and we were unable to recover it. 00:28:51.205 [2024-07-15 16:10:19.971442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.205 [2024-07-15 16:10:19.971472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:51.205 qpair failed and we were unable to recover it. 00:28:51.205 [2024-07-15 16:10:19.971608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.205 [2024-07-15 16:10:19.971645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:51.205 qpair failed and we were unable to recover it. 00:28:51.205 [2024-07-15 16:10:19.971863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.205 [2024-07-15 16:10:19.971894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:51.205 qpair failed and we were unable to recover it. 
00:28:51.205 [2024-07-15 16:10:19.972127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.205 [2024-07-15 16:10:19.972158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:51.205 qpair failed and we were unable to recover it. 00:28:51.205 [2024-07-15 16:10:19.972309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.205 [2024-07-15 16:10:19.972340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:51.205 qpair failed and we were unable to recover it. 00:28:51.205 [2024-07-15 16:10:19.972563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.205 [2024-07-15 16:10:19.972594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:51.205 qpair failed and we were unable to recover it. 00:28:51.205 [2024-07-15 16:10:19.972737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.206 [2024-07-15 16:10:19.972752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:51.206 qpair failed and we were unable to recover it. 00:28:51.206 [2024-07-15 16:10:19.972981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.206 [2024-07-15 16:10:19.972997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:51.206 qpair failed and we were unable to recover it. 00:28:51.206 [2024-07-15 16:10:19.973106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.206 [2024-07-15 16:10:19.973121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:51.206 qpair failed and we were unable to recover it. 00:28:51.206 [2024-07-15 16:10:19.973296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.206 [2024-07-15 16:10:19.973311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:51.206 qpair failed and we were unable to recover it. 00:28:51.206 [2024-07-15 16:10:19.973492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.206 [2024-07-15 16:10:19.973524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:51.206 qpair failed and we were unable to recover it. 00:28:51.206 [2024-07-15 16:10:19.973790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.206 [2024-07-15 16:10:19.973820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:51.206 qpair failed and we were unable to recover it. 00:28:51.206 [2024-07-15 16:10:19.973956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.206 [2024-07-15 16:10:19.973987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:51.206 qpair failed and we were unable to recover it. 
00:28:51.206 [2024-07-15 16:10:19.974151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.206 [2024-07-15 16:10:19.974182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:51.206 qpair failed and we were unable to recover it. 00:28:51.206 [2024-07-15 16:10:19.974306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.206 [2024-07-15 16:10:19.974321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:51.206 qpair failed and we were unable to recover it. 00:28:51.206 [2024-07-15 16:10:19.974511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.206 [2024-07-15 16:10:19.974526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:51.206 qpair failed and we were unable to recover it. 00:28:51.206 [2024-07-15 16:10:19.974703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.206 [2024-07-15 16:10:19.974719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:51.206 qpair failed and we were unable to recover it. 00:28:51.206 [2024-07-15 16:10:19.974888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.206 [2024-07-15 16:10:19.974902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:51.206 qpair failed and we were unable to recover it. 00:28:51.206 [2024-07-15 16:10:19.975135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.206 [2024-07-15 16:10:19.975150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:51.206 qpair failed and we were unable to recover it. 00:28:51.206 [2024-07-15 16:10:19.975258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.206 [2024-07-15 16:10:19.975291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:51.206 qpair failed and we were unable to recover it. 00:28:51.206 [2024-07-15 16:10:19.975482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.206 [2024-07-15 16:10:19.975513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:51.206 qpair failed and we were unable to recover it. 00:28:51.206 [2024-07-15 16:10:19.975719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.206 [2024-07-15 16:10:19.975750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:51.206 qpair failed and we were unable to recover it. 00:28:51.206 [2024-07-15 16:10:19.975890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.206 [2024-07-15 16:10:19.975920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:51.206 qpair failed and we were unable to recover it. 
00:28:51.206 [2024-07-15 16:10:19.976132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.206 [2024-07-15 16:10:19.976163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:51.206 qpair failed and we were unable to recover it. 00:28:51.206 [2024-07-15 16:10:19.976385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.206 [2024-07-15 16:10:19.976400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:51.206 qpair failed and we were unable to recover it. 00:28:51.206 [2024-07-15 16:10:19.976561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.206 [2024-07-15 16:10:19.976592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:51.206 qpair failed and we were unable to recover it. 00:28:51.206 [2024-07-15 16:10:19.976741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.206 [2024-07-15 16:10:19.976772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:51.206 qpair failed and we were unable to recover it. 00:28:51.206 [2024-07-15 16:10:19.977060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.206 [2024-07-15 16:10:19.977091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:51.206 qpair failed and we were unable to recover it. 00:28:51.206 [2024-07-15 16:10:19.977389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.206 [2024-07-15 16:10:19.977426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:51.206 qpair failed and we were unable to recover it. 00:28:51.206 [2024-07-15 16:10:19.977565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.206 [2024-07-15 16:10:19.977596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:51.206 qpair failed and we were unable to recover it. 00:28:51.206 [2024-07-15 16:10:19.977748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.206 [2024-07-15 16:10:19.977779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:51.206 qpair failed and we were unable to recover it. 00:28:51.206 [2024-07-15 16:10:19.977975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.206 [2024-07-15 16:10:19.978005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:51.206 qpair failed and we were unable to recover it. 00:28:51.206 [2024-07-15 16:10:19.978282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.206 [2024-07-15 16:10:19.978315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:51.206 qpair failed and we were unable to recover it. 
00:28:51.206 [2024-07-15 16:10:19.978540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.206 [2024-07-15 16:10:19.978571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:51.206 qpair failed and we were unable to recover it. 00:28:51.206 [2024-07-15 16:10:19.978720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.206 [2024-07-15 16:10:19.978750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:51.206 qpair failed and we were unable to recover it. 00:28:51.206 [2024-07-15 16:10:19.979016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.206 [2024-07-15 16:10:19.979048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:51.206 qpair failed and we were unable to recover it. 00:28:51.206 [2024-07-15 16:10:19.979187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.206 [2024-07-15 16:10:19.979219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:51.206 qpair failed and we were unable to recover it. 00:28:51.206 [2024-07-15 16:10:19.979455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.206 [2024-07-15 16:10:19.979486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:51.206 qpair failed and we were unable to recover it. 00:28:51.206 [2024-07-15 16:10:19.979715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.206 [2024-07-15 16:10:19.979746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:51.206 qpair failed and we were unable to recover it. 00:28:51.206 [2024-07-15 16:10:19.980013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.206 [2024-07-15 16:10:19.980043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:51.206 qpair failed and we were unable to recover it. 00:28:51.206 [2024-07-15 16:10:19.980273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.206 [2024-07-15 16:10:19.980305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:51.206 qpair failed and we were unable to recover it. 00:28:51.206 [2024-07-15 16:10:19.980512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.206 [2024-07-15 16:10:19.980527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:51.206 qpair failed and we were unable to recover it. 00:28:51.206 [2024-07-15 16:10:19.980768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.206 [2024-07-15 16:10:19.980799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:51.206 qpair failed and we were unable to recover it. 
00:28:51.206 [2024-07-15 16:10:19.980999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.206 [2024-07-15 16:10:19.981030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:51.206 qpair failed and we were unable to recover it. 00:28:51.206 [2024-07-15 16:10:19.981321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.206 [2024-07-15 16:10:19.981353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:51.206 qpair failed and we were unable to recover it. 00:28:51.206 [2024-07-15 16:10:19.981557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.206 [2024-07-15 16:10:19.981587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:51.206 qpair failed and we were unable to recover it. 00:28:51.207 [2024-07-15 16:10:19.981787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.207 [2024-07-15 16:10:19.981802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:51.207 qpair failed and we were unable to recover it. 00:28:51.207 [2024-07-15 16:10:19.981923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.207 [2024-07-15 16:10:19.981954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:51.207 qpair failed and we were unable to recover it. 00:28:51.207 [2024-07-15 16:10:19.982107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.207 [2024-07-15 16:10:19.982137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:51.207 qpair failed and we were unable to recover it. 00:28:51.207 [2024-07-15 16:10:19.982282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.207 [2024-07-15 16:10:19.982315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:51.207 qpair failed and we were unable to recover it. 00:28:51.207 [2024-07-15 16:10:19.982510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.207 [2024-07-15 16:10:19.982526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:51.207 qpair failed and we were unable to recover it. 00:28:51.207 [2024-07-15 16:10:19.982667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.207 [2024-07-15 16:10:19.982698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:51.207 qpair failed and we were unable to recover it. 00:28:51.207 [2024-07-15 16:10:19.982863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.207 [2024-07-15 16:10:19.982894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:51.207 qpair failed and we were unable to recover it. 
00:28:51.207 [2024-07-15 16:10:19.983090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.207 [2024-07-15 16:10:19.983121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:51.207 qpair failed and we were unable to recover it. 00:28:51.207 [2024-07-15 16:10:19.983256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.207 [2024-07-15 16:10:19.983289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:51.207 qpair failed and we were unable to recover it. 00:28:51.207 [2024-07-15 16:10:19.983500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.207 [2024-07-15 16:10:19.983536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:51.207 qpair failed and we were unable to recover it. 00:28:51.207 [2024-07-15 16:10:19.983681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.207 [2024-07-15 16:10:19.983712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:51.207 qpair failed and we were unable to recover it. 00:28:51.207 [2024-07-15 16:10:19.983919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.207 [2024-07-15 16:10:19.983949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:51.207 qpair failed and we were unable to recover it. 00:28:51.207 [2024-07-15 16:10:19.984244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.207 [2024-07-15 16:10:19.984276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:51.207 qpair failed and we were unable to recover it. 00:28:51.207 [2024-07-15 16:10:19.984421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.207 [2024-07-15 16:10:19.984453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:51.207 qpair failed and we were unable to recover it. 00:28:51.207 [2024-07-15 16:10:19.984666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.207 [2024-07-15 16:10:19.984697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:51.207 qpair failed and we were unable to recover it. 00:28:51.207 [2024-07-15 16:10:19.984922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.207 [2024-07-15 16:10:19.984953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:51.207 qpair failed and we were unable to recover it. 00:28:51.207 [2024-07-15 16:10:19.985099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.207 [2024-07-15 16:10:19.985131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:51.207 qpair failed and we were unable to recover it. 
00:28:51.207 [2024-07-15 16:10:19.985324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.207 [2024-07-15 16:10:19.985358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:51.207 qpair failed and we were unable to recover it. 00:28:51.207 [2024-07-15 16:10:19.985535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.207 [2024-07-15 16:10:19.985566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:51.207 qpair failed and we were unable to recover it. 00:28:51.207 [2024-07-15 16:10:19.985832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.207 [2024-07-15 16:10:19.985864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:51.207 qpair failed and we were unable to recover it. 00:28:51.207 [2024-07-15 16:10:19.986104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.207 [2024-07-15 16:10:19.986135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:51.207 qpair failed and we were unable to recover it. 00:28:51.207 [2024-07-15 16:10:19.986457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.207 [2024-07-15 16:10:19.986489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:51.207 qpair failed and we were unable to recover it. 00:28:51.207 [2024-07-15 16:10:19.986637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.207 [2024-07-15 16:10:19.986668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:51.207 qpair failed and we were unable to recover it. 00:28:51.207 [2024-07-15 16:10:19.986978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.207 [2024-07-15 16:10:19.987047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420 00:28:51.207 qpair failed and we were unable to recover it. 00:28:51.207 [2024-07-15 16:10:19.987204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.207 [2024-07-15 16:10:19.987251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420 00:28:51.207 qpair failed and we were unable to recover it. 00:28:51.207 [2024-07-15 16:10:19.987456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.207 [2024-07-15 16:10:19.987488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420 00:28:51.207 qpair failed and we were unable to recover it. 00:28:51.207 [2024-07-15 16:10:19.987638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.207 [2024-07-15 16:10:19.987669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420 00:28:51.207 qpair failed and we were unable to recover it. 
00:28:51.207 [2024-07-15 16:10:19.987939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.207 [2024-07-15 16:10:19.987970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420 00:28:51.207 qpair failed and we were unable to recover it. 00:28:51.207 [2024-07-15 16:10:19.988256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.207 [2024-07-15 16:10:19.988289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420 00:28:51.207 qpair failed and we were unable to recover it. 00:28:51.207 [2024-07-15 16:10:19.988454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.207 [2024-07-15 16:10:19.988486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420 00:28:51.207 qpair failed and we were unable to recover it. 00:28:51.207 [2024-07-15 16:10:19.988670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.207 [2024-07-15 16:10:19.988686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420 00:28:51.207 qpair failed and we were unable to recover it. 00:28:51.207 [2024-07-15 16:10:19.988881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.207 [2024-07-15 16:10:19.988911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420 00:28:51.207 qpair failed and we were unable to recover it. 00:28:51.208 [2024-07-15 16:10:19.989056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.208 [2024-07-15 16:10:19.989087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420 00:28:51.208 qpair failed and we were unable to recover it. 00:28:51.208 [2024-07-15 16:10:19.989351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.208 [2024-07-15 16:10:19.989383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420 00:28:51.208 qpair failed and we were unable to recover it. 00:28:51.208 [2024-07-15 16:10:19.989607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.208 [2024-07-15 16:10:19.989639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420 00:28:51.208 qpair failed and we were unable to recover it. 00:28:51.208 [2024-07-15 16:10:19.989785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.208 [2024-07-15 16:10:19.989816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420 00:28:51.208 qpair failed and we were unable to recover it. 00:28:51.208 [2024-07-15 16:10:19.990099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.208 [2024-07-15 16:10:19.990139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420 00:28:51.208 qpair failed and we were unable to recover it. 
00:28:51.208 [2024-07-15 16:10:19.990353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.208 [2024-07-15 16:10:19.990384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420 00:28:51.208 qpair failed and we were unable to recover it. 00:28:51.208 [2024-07-15 16:10:19.990671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.208 [2024-07-15 16:10:19.990702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420 00:28:51.208 qpair failed and we were unable to recover it. 00:28:51.208 [2024-07-15 16:10:19.990860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.208 [2024-07-15 16:10:19.990891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420 00:28:51.208 qpair failed and we were unable to recover it. 00:28:51.208 [2024-07-15 16:10:19.991153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.208 [2024-07-15 16:10:19.991183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420 00:28:51.208 qpair failed and we were unable to recover it. 00:28:51.208 [2024-07-15 16:10:19.991368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.208 [2024-07-15 16:10:19.991400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420 00:28:51.208 qpair failed and we were unable to recover it. 00:28:51.208 [2024-07-15 16:10:19.991619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.208 [2024-07-15 16:10:19.991651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420 00:28:51.208 qpair failed and we were unable to recover it. 00:28:51.208 [2024-07-15 16:10:19.991867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.208 [2024-07-15 16:10:19.991883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420 00:28:51.208 qpair failed and we were unable to recover it. 00:28:51.208 [2024-07-15 16:10:19.991999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.208 [2024-07-15 16:10:19.992029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420 00:28:51.208 qpair failed and we were unable to recover it. 00:28:51.208 [2024-07-15 16:10:19.992246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.208 [2024-07-15 16:10:19.992280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420 00:28:51.208 qpair failed and we were unable to recover it. 00:28:51.208 [2024-07-15 16:10:19.992430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.208 [2024-07-15 16:10:19.992461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420 00:28:51.208 qpair failed and we were unable to recover it. 
00:28:51.208 [2024-07-15 16:10:19.992622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.208 [2024-07-15 16:10:19.992653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420 00:28:51.208 qpair failed and we were unable to recover it. 00:28:51.208 [2024-07-15 16:10:19.992937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.208 [2024-07-15 16:10:19.992967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420 00:28:51.208 qpair failed and we were unable to recover it. 00:28:51.208 [2024-07-15 16:10:19.993193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.208 [2024-07-15 16:10:19.993233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420 00:28:51.208 qpair failed and we were unable to recover it. 00:28:51.208 [2024-07-15 16:10:19.993384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.208 [2024-07-15 16:10:19.993414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420 00:28:51.208 qpair failed and we were unable to recover it. 00:28:51.208 [2024-07-15 16:10:19.993679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.208 [2024-07-15 16:10:19.993710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420 00:28:51.208 qpair failed and we were unable to recover it. 00:28:51.208 [2024-07-15 16:10:19.993931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.208 [2024-07-15 16:10:19.993963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420 00:28:51.208 qpair failed and we were unable to recover it. 00:28:51.208 [2024-07-15 16:10:19.994233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.208 [2024-07-15 16:10:19.994265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420 00:28:51.208 qpair failed and we were unable to recover it. 00:28:51.208 [2024-07-15 16:10:19.994601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.208 [2024-07-15 16:10:19.994632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420 00:28:51.208 qpair failed and we were unable to recover it. 00:28:51.208 [2024-07-15 16:10:19.994864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.208 [2024-07-15 16:10:19.994880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420 00:28:51.208 qpair failed and we were unable to recover it. 00:28:51.208 [2024-07-15 16:10:19.995048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.208 [2024-07-15 16:10:19.995064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420 00:28:51.208 qpair failed and we were unable to recover it. 
00:28:51.208 [2024-07-15 16:10:19.995318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.208 [2024-07-15 16:10:19.995351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420 00:28:51.208 qpair failed and we were unable to recover it. 00:28:51.208 [2024-07-15 16:10:19.995578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.208 [2024-07-15 16:10:19.995608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420 00:28:51.208 qpair failed and we were unable to recover it. 00:28:51.208 [2024-07-15 16:10:19.995764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.208 [2024-07-15 16:10:19.995779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420 00:28:51.208 qpair failed and we were unable to recover it. 00:28:51.208 [2024-07-15 16:10:19.995897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.208 [2024-07-15 16:10:19.995930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420 00:28:51.208 qpair failed and we were unable to recover it. 00:28:51.208 [2024-07-15 16:10:19.996192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.208 [2024-07-15 16:10:19.996234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420 00:28:51.208 qpair failed and we were unable to recover it. 00:28:51.208 [2024-07-15 16:10:19.996449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.208 [2024-07-15 16:10:19.996480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420 00:28:51.208 qpair failed and we were unable to recover it. 00:28:51.208 [2024-07-15 16:10:19.996613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.208 [2024-07-15 16:10:19.996638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.208 qpair failed and we were unable to recover it. 00:28:51.208 [2024-07-15 16:10:19.996759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.208 [2024-07-15 16:10:19.996790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.208 qpair failed and we were unable to recover it. 00:28:51.208 [2024-07-15 16:10:19.996952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.208 [2024-07-15 16:10:19.996983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.208 qpair failed and we were unable to recover it. 00:28:51.208 [2024-07-15 16:10:19.997202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.208 [2024-07-15 16:10:19.997248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.208 qpair failed and we were unable to recover it. 
00:28:51.208 [2024-07-15 16:10:19.997413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.208 [2024-07-15 16:10:19.997425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.208 qpair failed and we were unable to recover it. 00:28:51.208 [2024-07-15 16:10:19.997626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.208 [2024-07-15 16:10:19.997656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.208 qpair failed and we were unable to recover it. 00:28:51.208 [2024-07-15 16:10:19.997871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.208 [2024-07-15 16:10:19.997903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.208 qpair failed and we were unable to recover it. 00:28:51.208 [2024-07-15 16:10:19.998097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.208 [2024-07-15 16:10:19.998129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.208 qpair failed and we were unable to recover it. 00:28:51.208 [2024-07-15 16:10:19.998343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.208 [2024-07-15 16:10:19.998376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.208 qpair failed and we were unable to recover it. 00:28:51.208 [2024-07-15 16:10:19.998543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.209 [2024-07-15 16:10:19.998574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.209 qpair failed and we were unable to recover it. 00:28:51.209 [2024-07-15 16:10:19.998786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.209 [2024-07-15 16:10:19.998816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.209 qpair failed and we were unable to recover it. 00:28:51.209 [2024-07-15 16:10:19.998969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.209 [2024-07-15 16:10:19.998999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.209 qpair failed and we were unable to recover it. 00:28:51.209 [2024-07-15 16:10:19.999264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.209 [2024-07-15 16:10:19.999296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.209 qpair failed and we were unable to recover it. 00:28:51.209 [2024-07-15 16:10:19.999512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.209 [2024-07-15 16:10:19.999551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.209 qpair failed and we were unable to recover it. 
00:28:51.209 [2024-07-15 16:10:19.999808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.209 [2024-07-15 16:10:19.999840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.209 qpair failed and we were unable to recover it. 00:28:51.209 [2024-07-15 16:10:20.000043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.209 [2024-07-15 16:10:20.000074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.209 qpair failed and we were unable to recover it. 00:28:51.209 [2024-07-15 16:10:20.000277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.209 [2024-07-15 16:10:20.000310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.209 qpair failed and we were unable to recover it. 00:28:51.209 [2024-07-15 16:10:20.000502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.209 [2024-07-15 16:10:20.000514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.209 qpair failed and we were unable to recover it. 00:28:51.209 [2024-07-15 16:10:20.000606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.209 [2024-07-15 16:10:20.000634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.209 qpair failed and we were unable to recover it. 00:28:51.209 [2024-07-15 16:10:20.000805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.209 [2024-07-15 16:10:20.000835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.209 qpair failed and we were unable to recover it. 00:28:51.209 [2024-07-15 16:10:20.000989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.209 [2024-07-15 16:10:20.001020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.209 qpair failed and we were unable to recover it. 00:28:51.209 [2024-07-15 16:10:20.001244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.209 [2024-07-15 16:10:20.001275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.209 qpair failed and we were unable to recover it. 00:28:51.209 [2024-07-15 16:10:20.001475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.209 [2024-07-15 16:10:20.001487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.209 qpair failed and we were unable to recover it. 00:28:51.209 [2024-07-15 16:10:20.001648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.209 [2024-07-15 16:10:20.001659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.209 qpair failed and we were unable to recover it. 
00:28:51.209 [2024-07-15 16:10:20.001882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.209 [2024-07-15 16:10:20.001894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.209 qpair failed and we were unable to recover it. 00:28:51.209 [2024-07-15 16:10:20.002118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.209 [2024-07-15 16:10:20.002129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.209 qpair failed and we were unable to recover it. 00:28:51.209 [2024-07-15 16:10:20.002235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.209 [2024-07-15 16:10:20.002247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.209 qpair failed and we were unable to recover it. 00:28:51.209 [2024-07-15 16:10:20.002369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.209 [2024-07-15 16:10:20.002380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.209 qpair failed and we were unable to recover it. 00:28:51.209 [2024-07-15 16:10:20.002485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.209 [2024-07-15 16:10:20.002498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.209 qpair failed and we were unable to recover it. 00:28:51.209 [2024-07-15 16:10:20.002652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.209 [2024-07-15 16:10:20.002664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.209 qpair failed and we were unable to recover it. 00:28:51.209 [2024-07-15 16:10:20.002837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.209 [2024-07-15 16:10:20.002849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.209 qpair failed and we were unable to recover it. 00:28:51.209 [2024-07-15 16:10:20.003066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.209 [2024-07-15 16:10:20.003079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.209 qpair failed and we were unable to recover it. 00:28:51.209 [2024-07-15 16:10:20.003195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.209 [2024-07-15 16:10:20.003207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.209 qpair failed and we were unable to recover it. 00:28:51.209 [2024-07-15 16:10:20.003362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.209 [2024-07-15 16:10:20.003373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.209 qpair failed and we were unable to recover it. 
00:28:51.209 [2024-07-15 16:10:20.003549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.209 [2024-07-15 16:10:20.003561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.209 qpair failed and we were unable to recover it. 00:28:51.209 [2024-07-15 16:10:20.003783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.209 [2024-07-15 16:10:20.003794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.209 qpair failed and we were unable to recover it. 00:28:51.209 [2024-07-15 16:10:20.003906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.209 [2024-07-15 16:10:20.003917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.209 qpair failed and we were unable to recover it. 00:28:51.209 [2024-07-15 16:10:20.004021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.209 [2024-07-15 16:10:20.004032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.209 qpair failed and we were unable to recover it. 00:28:51.209 [2024-07-15 16:10:20.004125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.209 [2024-07-15 16:10:20.004135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.209 qpair failed and we were unable to recover it. 00:28:51.209 [2024-07-15 16:10:20.004243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.209 [2024-07-15 16:10:20.004255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.209 qpair failed and we were unable to recover it. 00:28:51.209 [2024-07-15 16:10:20.004370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.209 [2024-07-15 16:10:20.004381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.209 qpair failed and we were unable to recover it. 00:28:51.209 [2024-07-15 16:10:20.004483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.209 [2024-07-15 16:10:20.004495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.210 qpair failed and we were unable to recover it. 00:28:51.210 [2024-07-15 16:10:20.004615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.210 [2024-07-15 16:10:20.004627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.210 qpair failed and we were unable to recover it. 00:28:51.210 [2024-07-15 16:10:20.004746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.210 [2024-07-15 16:10:20.004758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.210 qpair failed and we were unable to recover it. 
00:28:51.210 [2024-07-15 16:10:20.004919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.210 [2024-07-15 16:10:20.004930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.210 qpair failed and we were unable to recover it. 00:28:51.210 [2024-07-15 16:10:20.005152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.210 [2024-07-15 16:10:20.005163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.210 qpair failed and we were unable to recover it. 00:28:51.210 [2024-07-15 16:10:20.005321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.210 [2024-07-15 16:10:20.005333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.210 qpair failed and we were unable to recover it. 00:28:51.210 [2024-07-15 16:10:20.005439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.210 [2024-07-15 16:10:20.005450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.210 qpair failed and we were unable to recover it. 00:28:51.210 [2024-07-15 16:10:20.005610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.210 [2024-07-15 16:10:20.005622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.210 qpair failed and we were unable to recover it. 00:28:51.210 [2024-07-15 16:10:20.005723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.210 [2024-07-15 16:10:20.005733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.210 qpair failed and we were unable to recover it. 00:28:51.210 [2024-07-15 16:10:20.005910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.210 [2024-07-15 16:10:20.005922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.210 qpair failed and we were unable to recover it. 00:28:51.210 [2024-07-15 16:10:20.006093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.210 [2024-07-15 16:10:20.006105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.210 qpair failed and we were unable to recover it. 00:28:51.210 [2024-07-15 16:10:20.006206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.210 [2024-07-15 16:10:20.006216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.210 qpair failed and we were unable to recover it. 00:28:51.210 [2024-07-15 16:10:20.006411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.210 [2024-07-15 16:10:20.006427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.210 qpair failed and we were unable to recover it. 
00:28:51.210 [2024-07-15 16:10:20.006596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.210 [2024-07-15 16:10:20.006608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.210 qpair failed and we were unable to recover it. 00:28:51.210 [2024-07-15 16:10:20.006709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.210 [2024-07-15 16:10:20.006719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.210 qpair failed and we were unable to recover it. 00:28:51.210 [2024-07-15 16:10:20.006887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.210 [2024-07-15 16:10:20.006899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.210 qpair failed and we were unable to recover it. 00:28:51.210 [2024-07-15 16:10:20.007071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.210 [2024-07-15 16:10:20.007082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.210 qpair failed and we were unable to recover it. 00:28:51.210 [2024-07-15 16:10:20.007172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.210 [2024-07-15 16:10:20.007182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.210 qpair failed and we were unable to recover it. 00:28:51.210 [2024-07-15 16:10:20.007271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.210 [2024-07-15 16:10:20.007282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.210 qpair failed and we were unable to recover it. 00:28:51.210 [2024-07-15 16:10:20.007394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.210 [2024-07-15 16:10:20.007405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.210 qpair failed and we were unable to recover it. 00:28:51.210 [2024-07-15 16:10:20.007564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.210 [2024-07-15 16:10:20.007576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.210 qpair failed and we were unable to recover it. 00:28:51.210 [2024-07-15 16:10:20.007798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.210 [2024-07-15 16:10:20.007810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.210 qpair failed and we were unable to recover it. 00:28:51.210 [2024-07-15 16:10:20.007985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.210 [2024-07-15 16:10:20.007997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.210 qpair failed and we were unable to recover it. 
00:28:51.210 [2024-07-15 16:10:20.008153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.210 [2024-07-15 16:10:20.008165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.210 qpair failed and we were unable to recover it. 00:28:51.210 [2024-07-15 16:10:20.008264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.210 [2024-07-15 16:10:20.008274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.210 qpair failed and we were unable to recover it. 00:28:51.210 [2024-07-15 16:10:20.008430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.210 [2024-07-15 16:10:20.008443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.210 qpair failed and we were unable to recover it. 00:28:51.210 [2024-07-15 16:10:20.008609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.210 [2024-07-15 16:10:20.008620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.210 qpair failed and we were unable to recover it. 00:28:51.210 [2024-07-15 16:10:20.008790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.210 [2024-07-15 16:10:20.008801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.210 qpair failed and we were unable to recover it. 00:28:51.210 [2024-07-15 16:10:20.008983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.210 [2024-07-15 16:10:20.008993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.210 qpair failed and we were unable to recover it. 00:28:51.210 [2024-07-15 16:10:20.009086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.210 [2024-07-15 16:10:20.009097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.210 qpair failed and we were unable to recover it. 00:28:51.210 [2024-07-15 16:10:20.009321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.210 [2024-07-15 16:10:20.009333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.210 qpair failed and we were unable to recover it. 00:28:51.210 [2024-07-15 16:10:20.009424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.211 [2024-07-15 16:10:20.009435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.211 qpair failed and we were unable to recover it. 00:28:51.211 [2024-07-15 16:10:20.009545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.211 [2024-07-15 16:10:20.009557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.211 qpair failed and we were unable to recover it. 
00:28:51.211 [2024-07-15 16:10:20.009673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.211 [2024-07-15 16:10:20.009684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.211 qpair failed and we were unable to recover it. 00:28:51.211 [2024-07-15 16:10:20.009840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.211 [2024-07-15 16:10:20.009852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.211 qpair failed and we were unable to recover it. 00:28:51.211 [2024-07-15 16:10:20.009985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.211 [2024-07-15 16:10:20.009995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.211 qpair failed and we were unable to recover it. 00:28:51.211 [2024-07-15 16:10:20.010130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.211 [2024-07-15 16:10:20.010142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.211 qpair failed and we were unable to recover it. 00:28:51.211 [2024-07-15 16:10:20.010243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.211 [2024-07-15 16:10:20.010254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.211 qpair failed and we were unable to recover it. 00:28:51.211 [2024-07-15 16:10:20.010365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.211 [2024-07-15 16:10:20.010376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.211 qpair failed and we were unable to recover it. 00:28:51.211 [2024-07-15 16:10:20.010547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.211 [2024-07-15 16:10:20.010559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.211 qpair failed and we were unable to recover it. 00:28:51.211 [2024-07-15 16:10:20.010777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.211 [2024-07-15 16:10:20.010789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.211 qpair failed and we were unable to recover it. 00:28:51.211 [2024-07-15 16:10:20.010899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.211 [2024-07-15 16:10:20.010910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.211 qpair failed and we were unable to recover it. 00:28:51.211 [2024-07-15 16:10:20.011018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.211 [2024-07-15 16:10:20.011030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.211 qpair failed and we were unable to recover it. 
00:28:51.211 [2024-07-15 16:10:20.011241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.211 [2024-07-15 16:10:20.011253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.211 qpair failed and we were unable to recover it. 00:28:51.211 [2024-07-15 16:10:20.011351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.211 [2024-07-15 16:10:20.011362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.211 qpair failed and we were unable to recover it. 00:28:51.211 [2024-07-15 16:10:20.011449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.211 [2024-07-15 16:10:20.011461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.211 qpair failed and we were unable to recover it. 00:28:51.211 [2024-07-15 16:10:20.011619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.211 [2024-07-15 16:10:20.011631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.211 qpair failed and we were unable to recover it. 00:28:51.211 [2024-07-15 16:10:20.011741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.211 [2024-07-15 16:10:20.011754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.211 qpair failed and we were unable to recover it. 00:28:51.211 [2024-07-15 16:10:20.011866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.211 [2024-07-15 16:10:20.011878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.211 qpair failed and we were unable to recover it. 00:28:51.211 [2024-07-15 16:10:20.012041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.211 [2024-07-15 16:10:20.012052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.211 qpair failed and we were unable to recover it. 00:28:51.211 [2024-07-15 16:10:20.012206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.211 [2024-07-15 16:10:20.012217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.211 qpair failed and we were unable to recover it. 00:28:51.211 [2024-07-15 16:10:20.012345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.211 [2024-07-15 16:10:20.012356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.211 qpair failed and we were unable to recover it. 00:28:51.211 [2024-07-15 16:10:20.012510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.211 [2024-07-15 16:10:20.012523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.211 qpair failed and we were unable to recover it. 
00:28:51.211 [2024-07-15 16:10:20.012717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.211 [2024-07-15 16:10:20.012729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.211 qpair failed and we were unable to recover it. 00:28:51.211 [2024-07-15 16:10:20.012904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.211 [2024-07-15 16:10:20.012916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.211 qpair failed and we were unable to recover it. 00:28:51.211 [2024-07-15 16:10:20.013082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.211 [2024-07-15 16:10:20.013093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.211 qpair failed and we were unable to recover it. 00:28:51.211 [2024-07-15 16:10:20.013178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.211 [2024-07-15 16:10:20.013190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.211 qpair failed and we were unable to recover it. 00:28:51.211 [2024-07-15 16:10:20.013306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.211 [2024-07-15 16:10:20.013318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.211 qpair failed and we were unable to recover it. 00:28:51.211 [2024-07-15 16:10:20.013475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.211 [2024-07-15 16:10:20.013486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.211 qpair failed and we were unable to recover it. 00:28:51.211 [2024-07-15 16:10:20.013653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.211 [2024-07-15 16:10:20.013664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.211 qpair failed and we were unable to recover it. 00:28:51.211 [2024-07-15 16:10:20.013882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.211 [2024-07-15 16:10:20.013895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.211 qpair failed and we were unable to recover it. 00:28:51.211 [2024-07-15 16:10:20.013991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.211 [2024-07-15 16:10:20.014004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.211 qpair failed and we were unable to recover it. 00:28:51.211 [2024-07-15 16:10:20.014113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.211 [2024-07-15 16:10:20.014124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.211 qpair failed and we were unable to recover it. 
00:28:51.211 [2024-07-15 16:10:20.014346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.211 [2024-07-15 16:10:20.014357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.211 qpair failed and we were unable to recover it. 00:28:51.211 [2024-07-15 16:10:20.014538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.211 [2024-07-15 16:10:20.014550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.211 qpair failed and we were unable to recover it. 00:28:51.211 [2024-07-15 16:10:20.014626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.211 [2024-07-15 16:10:20.014636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.211 qpair failed and we were unable to recover it. 00:28:51.211 [2024-07-15 16:10:20.014829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.211 [2024-07-15 16:10:20.014841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.211 qpair failed and we were unable to recover it. 00:28:51.211 [2024-07-15 16:10:20.015064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.211 [2024-07-15 16:10:20.015076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.211 qpair failed and we were unable to recover it. 00:28:51.211 [2024-07-15 16:10:20.015190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.211 [2024-07-15 16:10:20.015202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.211 qpair failed and we were unable to recover it. 00:28:51.211 [2024-07-15 16:10:20.015324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.211 [2024-07-15 16:10:20.015336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.211 qpair failed and we were unable to recover it. 00:28:51.211 [2024-07-15 16:10:20.015439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.212 [2024-07-15 16:10:20.015451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.212 qpair failed and we were unable to recover it. 00:28:51.212 [2024-07-15 16:10:20.015539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.212 [2024-07-15 16:10:20.015551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.212 qpair failed and we were unable to recover it. 00:28:51.212 [2024-07-15 16:10:20.015712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.212 [2024-07-15 16:10:20.015724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.212 qpair failed and we were unable to recover it. 
00:28:51.212 [2024-07-15 16:10:20.015885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.212 [2024-07-15 16:10:20.015897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.212 qpair failed and we were unable to recover it. 00:28:51.212 [2024-07-15 16:10:20.016055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.212 [2024-07-15 16:10:20.016066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.212 qpair failed and we were unable to recover it. 00:28:51.212 [2024-07-15 16:10:20.016199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.212 [2024-07-15 16:10:20.016210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.212 qpair failed and we were unable to recover it. 00:28:51.212 [2024-07-15 16:10:20.016311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.212 [2024-07-15 16:10:20.016323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.212 qpair failed and we were unable to recover it. 00:28:51.212 [2024-07-15 16:10:20.016485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.212 [2024-07-15 16:10:20.016497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.212 qpair failed and we were unable to recover it. 00:28:51.212 [2024-07-15 16:10:20.016666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.212 [2024-07-15 16:10:20.016678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.212 qpair failed and we were unable to recover it. 00:28:51.212 [2024-07-15 16:10:20.016795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.212 [2024-07-15 16:10:20.016813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420 00:28:51.212 qpair failed and we were unable to recover it. 00:28:51.212 [2024-07-15 16:10:20.016978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.212 [2024-07-15 16:10:20.016993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420 00:28:51.212 qpair failed and we were unable to recover it. 00:28:51.212 [2024-07-15 16:10:20.017186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.212 [2024-07-15 16:10:20.017201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420 00:28:51.212 qpair failed and we were unable to recover it. 00:28:51.212 [2024-07-15 16:10:20.017379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.212 [2024-07-15 16:10:20.017394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420 00:28:51.212 qpair failed and we were unable to recover it. 
00:28:51.212 [2024-07-15 16:10:20.017510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.212 [2024-07-15 16:10:20.017526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420 00:28:51.212 qpair failed and we were unable to recover it. 00:28:51.212 [2024-07-15 16:10:20.017640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.212 [2024-07-15 16:10:20.017655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420 00:28:51.212 qpair failed and we were unable to recover it. 00:28:51.212 [2024-07-15 16:10:20.017752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.212 [2024-07-15 16:10:20.017767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420 00:28:51.212 qpair failed and we were unable to recover it. 00:28:51.212 [2024-07-15 16:10:20.017875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.212 [2024-07-15 16:10:20.017890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420 00:28:51.212 qpair failed and we were unable to recover it. 00:28:51.212 [2024-07-15 16:10:20.018122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.212 [2024-07-15 16:10:20.018137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420 00:28:51.212 qpair failed and we were unable to recover it. 00:28:51.212 [2024-07-15 16:10:20.018302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.212 [2024-07-15 16:10:20.018317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420 00:28:51.212 qpair failed and we were unable to recover it. 00:28:51.212 [2024-07-15 16:10:20.018495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.212 [2024-07-15 16:10:20.018510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420 00:28:51.212 qpair failed and we were unable to recover it. 00:28:51.212 [2024-07-15 16:10:20.018673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.212 [2024-07-15 16:10:20.018689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420 00:28:51.212 qpair failed and we were unable to recover it. 00:28:51.212 [2024-07-15 16:10:20.018882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.212 [2024-07-15 16:10:20.018895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.212 qpair failed and we were unable to recover it. 00:28:51.212 [2024-07-15 16:10:20.018996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.212 [2024-07-15 16:10:20.019010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.212 qpair failed and we were unable to recover it. 
00:28:51.212 [2024-07-15 16:10:20.019166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.212 [2024-07-15 16:10:20.019177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.212 qpair failed and we were unable to recover it. 00:28:51.212 [2024-07-15 16:10:20.019350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.212 [2024-07-15 16:10:20.019362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.212 qpair failed and we were unable to recover it. 00:28:51.212 [2024-07-15 16:10:20.019480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.212 [2024-07-15 16:10:20.019491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.212 qpair failed and we were unable to recover it. 00:28:51.212 [2024-07-15 16:10:20.019652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.212 [2024-07-15 16:10:20.019667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.212 qpair failed and we were unable to recover it. 00:28:51.212 [2024-07-15 16:10:20.019783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.212 [2024-07-15 16:10:20.019798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.212 qpair failed and we were unable to recover it. 00:28:51.212 [2024-07-15 16:10:20.019914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.212 [2024-07-15 16:10:20.019928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.212 qpair failed and we were unable to recover it. 00:28:51.212 [2024-07-15 16:10:20.020027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.212 [2024-07-15 16:10:20.020041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.212 qpair failed and we were unable to recover it. 00:28:51.212 [2024-07-15 16:10:20.020167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.212 [2024-07-15 16:10:20.020181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.212 qpair failed and we were unable to recover it. 00:28:51.212 [2024-07-15 16:10:20.020330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.212 [2024-07-15 16:10:20.020345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.212 qpair failed and we were unable to recover it. 00:28:51.212 [2024-07-15 16:10:20.020559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.212 [2024-07-15 16:10:20.020574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.212 qpair failed and we were unable to recover it. 
00:28:51.212 [2024-07-15 16:10:20.020794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.212 [2024-07-15 16:10:20.020808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.212 qpair failed and we were unable to recover it. 00:28:51.212 [2024-07-15 16:10:20.020887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.212 [2024-07-15 16:10:20.020901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.212 qpair failed and we were unable to recover it. 00:28:51.212 [2024-07-15 16:10:20.021014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.212 [2024-07-15 16:10:20.021029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.212 qpair failed and we were unable to recover it. 00:28:51.212 [2024-07-15 16:10:20.021205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.212 [2024-07-15 16:10:20.021219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.212 qpair failed and we were unable to recover it. 00:28:51.212 [2024-07-15 16:10:20.021378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.212 [2024-07-15 16:10:20.021392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.212 qpair failed and we were unable to recover it. 00:28:51.212 [2024-07-15 16:10:20.021575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.212 [2024-07-15 16:10:20.021590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.212 qpair failed and we were unable to recover it. 00:28:51.212 [2024-07-15 16:10:20.021734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.212 [2024-07-15 16:10:20.021749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.212 qpair failed and we were unable to recover it. 00:28:51.212 [2024-07-15 16:10:20.021919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.212 [2024-07-15 16:10:20.021963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:51.212 qpair failed and we were unable to recover it. 00:28:51.213 [2024-07-15 16:10:20.022196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.213 [2024-07-15 16:10:20.022241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:51.213 qpair failed and we were unable to recover it. 00:28:51.213 [2024-07-15 16:10:20.022390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.213 [2024-07-15 16:10:20.022411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:51.213 qpair failed and we were unable to recover it. 
00:28:51.213 [2024-07-15 16:10:20.022547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.213 [2024-07-15 16:10:20.022563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:51.213 qpair failed and we were unable to recover it. 00:28:51.213 [2024-07-15 16:10:20.022701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.213 [2024-07-15 16:10:20.022717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:51.213 qpair failed and we were unable to recover it. 00:28:51.213 [2024-07-15 16:10:20.022859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.213 [2024-07-15 16:10:20.022874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:51.213 qpair failed and we were unable to recover it. 00:28:51.213 [2024-07-15 16:10:20.023011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.213 [2024-07-15 16:10:20.023027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:51.213 qpair failed and we were unable to recover it. 00:28:51.213 [2024-07-15 16:10:20.023156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.213 [2024-07-15 16:10:20.023171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:51.213 qpair failed and we were unable to recover it. 00:28:51.213 [2024-07-15 16:10:20.023294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.213 [2024-07-15 16:10:20.023311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:51.213 qpair failed and we were unable to recover it. 00:28:51.213 [2024-07-15 16:10:20.023425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.213 [2024-07-15 16:10:20.023443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:51.213 qpair failed and we were unable to recover it. 00:28:51.213 [2024-07-15 16:10:20.023624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.213 [2024-07-15 16:10:20.023639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:51.213 qpair failed and we were unable to recover it. 00:28:51.213 [2024-07-15 16:10:20.023818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.213 [2024-07-15 16:10:20.023833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:51.213 qpair failed and we were unable to recover it. 00:28:51.213 [2024-07-15 16:10:20.023950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.213 [2024-07-15 16:10:20.023965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:51.213 qpair failed and we were unable to recover it. 
00:28:51.213 [2024-07-15 16:10:20.024097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.213 [2024-07-15 16:10:20.024112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:51.213 qpair failed and we were unable to recover it. 00:28:51.213 [2024-07-15 16:10:20.024235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.213 [2024-07-15 16:10:20.024251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:51.213 qpair failed and we were unable to recover it. 00:28:51.213 [2024-07-15 16:10:20.024428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.213 [2024-07-15 16:10:20.024443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:51.213 qpair failed and we were unable to recover it. 00:28:51.213 [2024-07-15 16:10:20.024618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.213 [2024-07-15 16:10:20.024634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:51.213 qpair failed and we were unable to recover it. 00:28:51.213 [2024-07-15 16:10:20.024828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.213 [2024-07-15 16:10:20.024843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:51.213 qpair failed and we were unable to recover it. 00:28:51.213 [2024-07-15 16:10:20.024948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.213 [2024-07-15 16:10:20.024963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:51.213 qpair failed and we were unable to recover it. 00:28:51.213 [2024-07-15 16:10:20.025076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.213 [2024-07-15 16:10:20.025091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:51.213 qpair failed and we were unable to recover it. 00:28:51.213 [2024-07-15 16:10:20.025229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.213 [2024-07-15 16:10:20.025245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:51.213 qpair failed and we were unable to recover it. 00:28:51.213 [2024-07-15 16:10:20.025346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.213 [2024-07-15 16:10:20.025361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:51.213 qpair failed and we were unable to recover it. 00:28:51.213 [2024-07-15 16:10:20.025482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.213 [2024-07-15 16:10:20.025497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:51.213 qpair failed and we were unable to recover it. 
00:28:51.214 [2024-07-15 16:10:20.028684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.214 [2024-07-15 16:10:20.028698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420
00:28:51.214 qpair failed and we were unable to recover it.
00:28:51.214 [2024-07-15 16:10:20.028888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.214 [2024-07-15 16:10:20.028902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420
00:28:51.214 qpair failed and we were unable to recover it.
00:28:51.214 [2024-07-15 16:10:20.029113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.214 [2024-07-15 16:10:20.029136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420
00:28:51.214 qpair failed and we were unable to recover it.
00:28:51.214 [2024-07-15 16:10:20.029312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.214 [2024-07-15 16:10:20.029324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420
00:28:51.214 qpair failed and we were unable to recover it.
00:28:51.214 [2024-07-15 16:10:20.029480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.214 [2024-07-15 16:10:20.029491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420
00:28:51.214 qpair failed and we were unable to recover it.
00:28:51.214 [2024-07-15 16:10:20.029668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.214 [2024-07-15 16:10:20.029681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420
00:28:51.214 qpair failed and we were unable to recover it.
00:28:51.214 [2024-07-15 16:10:20.029874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.214 [2024-07-15 16:10:20.029886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420
00:28:51.214 qpair failed and we were unable to recover it.
00:28:51.214 [2024-07-15 16:10:20.029994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.214 [2024-07-15 16:10:20.030007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420
00:28:51.214 qpair failed and we were unable to recover it.
00:28:51.214 [2024-07-15 16:10:20.030109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.214 [2024-07-15 16:10:20.030121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420
00:28:51.214 qpair failed and we were unable to recover it.
00:28:51.214 [2024-07-15 16:10:20.030219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.214 [2024-07-15 16:10:20.030234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420
00:28:51.214 qpair failed and we were unable to recover it.
00:28:51.220 [2024-07-15 16:10:20.057298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.220 [2024-07-15 16:10:20.057330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.220 qpair failed and we were unable to recover it. 00:28:51.220 [2024-07-15 16:10:20.057470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.220 [2024-07-15 16:10:20.057482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.220 qpair failed and we were unable to recover it. 00:28:51.220 [2024-07-15 16:10:20.057657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.220 [2024-07-15 16:10:20.057688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.220 qpair failed and we were unable to recover it. 00:28:51.220 [2024-07-15 16:10:20.057899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.220 [2024-07-15 16:10:20.057930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.220 qpair failed and we were unable to recover it. 00:28:51.220 [2024-07-15 16:10:20.058197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.220 [2024-07-15 16:10:20.058233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.220 qpair failed and we were unable to recover it. 00:28:51.220 [2024-07-15 16:10:20.058437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.220 [2024-07-15 16:10:20.058467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.220 qpair failed and we were unable to recover it. 00:28:51.220 [2024-07-15 16:10:20.058661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.220 [2024-07-15 16:10:20.058674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.220 qpair failed and we were unable to recover it. 00:28:51.220 [2024-07-15 16:10:20.058832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.220 [2024-07-15 16:10:20.058861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.220 qpair failed and we were unable to recover it. 00:28:51.220 [2024-07-15 16:10:20.059042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.220 [2024-07-15 16:10:20.059073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.220 qpair failed and we were unable to recover it. 00:28:51.220 [2024-07-15 16:10:20.059217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.220 [2024-07-15 16:10:20.059256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.220 qpair failed and we were unable to recover it. 
00:28:51.220 [2024-07-15 16:10:20.059389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.220 [2024-07-15 16:10:20.059424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.220 qpair failed and we were unable to recover it. 00:28:51.220 [2024-07-15 16:10:20.059623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.220 [2024-07-15 16:10:20.059653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.220 qpair failed and we were unable to recover it. 00:28:51.220 [2024-07-15 16:10:20.059862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.220 [2024-07-15 16:10:20.059892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.220 qpair failed and we were unable to recover it. 00:28:51.220 [2024-07-15 16:10:20.060155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.220 [2024-07-15 16:10:20.060185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.220 qpair failed and we were unable to recover it. 00:28:51.220 [2024-07-15 16:10:20.060407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.220 [2024-07-15 16:10:20.060438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.220 qpair failed and we were unable to recover it. 00:28:51.220 [2024-07-15 16:10:20.060706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.220 [2024-07-15 16:10:20.060736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.220 qpair failed and we were unable to recover it. 00:28:51.220 [2024-07-15 16:10:20.061001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.220 [2024-07-15 16:10:20.061031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.220 qpair failed and we were unable to recover it. 00:28:51.220 [2024-07-15 16:10:20.061245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.220 [2024-07-15 16:10:20.061277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.220 qpair failed and we were unable to recover it. 00:28:51.220 [2024-07-15 16:10:20.061429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.220 [2024-07-15 16:10:20.061440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.220 qpair failed and we were unable to recover it. 00:28:51.220 [2024-07-15 16:10:20.061679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.220 [2024-07-15 16:10:20.061710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.220 qpair failed and we were unable to recover it. 
00:28:51.220 [2024-07-15 16:10:20.061972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.220 [2024-07-15 16:10:20.062003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.220 qpair failed and we were unable to recover it. 00:28:51.220 [2024-07-15 16:10:20.062262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.220 [2024-07-15 16:10:20.062294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.220 qpair failed and we were unable to recover it. 00:28:51.220 [2024-07-15 16:10:20.062511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.220 [2024-07-15 16:10:20.062543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.220 qpair failed and we were unable to recover it. 00:28:51.220 [2024-07-15 16:10:20.062748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.220 [2024-07-15 16:10:20.062779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.220 qpair failed and we were unable to recover it. 00:28:51.220 [2024-07-15 16:10:20.062907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.220 [2024-07-15 16:10:20.062938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.220 qpair failed and we were unable to recover it. 00:28:51.220 [2024-07-15 16:10:20.063206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.220 [2024-07-15 16:10:20.063266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.220 qpair failed and we were unable to recover it. 00:28:51.220 [2024-07-15 16:10:20.063467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.220 [2024-07-15 16:10:20.063497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.220 qpair failed and we were unable to recover it. 00:28:51.220 [2024-07-15 16:10:20.063760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.220 [2024-07-15 16:10:20.063791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.220 qpair failed and we were unable to recover it. 00:28:51.220 [2024-07-15 16:10:20.064003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.221 [2024-07-15 16:10:20.064033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.221 qpair failed and we were unable to recover it. 00:28:51.221 [2024-07-15 16:10:20.064250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.221 [2024-07-15 16:10:20.064282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.221 qpair failed and we were unable to recover it. 
00:28:51.221 [2024-07-15 16:10:20.064458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.221 [2024-07-15 16:10:20.064491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.221 qpair failed and we were unable to recover it. 00:28:51.221 [2024-07-15 16:10:20.064688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.221 [2024-07-15 16:10:20.064719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.221 qpair failed and we were unable to recover it. 00:28:51.221 [2024-07-15 16:10:20.065012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.221 [2024-07-15 16:10:20.065023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.221 qpair failed and we were unable to recover it. 00:28:51.221 [2024-07-15 16:10:20.065119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.221 [2024-07-15 16:10:20.065147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.221 qpair failed and we were unable to recover it. 00:28:51.221 [2024-07-15 16:10:20.065348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.221 [2024-07-15 16:10:20.065379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.221 qpair failed and we were unable to recover it. 00:28:51.221 [2024-07-15 16:10:20.065581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.221 [2024-07-15 16:10:20.065612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.221 qpair failed and we were unable to recover it. 00:28:51.221 [2024-07-15 16:10:20.065746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.221 [2024-07-15 16:10:20.065758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.221 qpair failed and we were unable to recover it. 00:28:51.221 [2024-07-15 16:10:20.065857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.221 [2024-07-15 16:10:20.065867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.221 qpair failed and we were unable to recover it. 00:28:51.221 [2024-07-15 16:10:20.066034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.221 [2024-07-15 16:10:20.066046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.221 qpair failed and we were unable to recover it. 00:28:51.221 [2024-07-15 16:10:20.066154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.221 [2024-07-15 16:10:20.066185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.221 qpair failed and we were unable to recover it. 
00:28:51.221 [2024-07-15 16:10:20.066424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.221 [2024-07-15 16:10:20.066457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.221 qpair failed and we were unable to recover it. 00:28:51.221 [2024-07-15 16:10:20.066667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.221 [2024-07-15 16:10:20.066697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.221 qpair failed and we were unable to recover it. 00:28:51.221 [2024-07-15 16:10:20.066967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.221 [2024-07-15 16:10:20.066979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.221 qpair failed and we were unable to recover it. 00:28:51.221 [2024-07-15 16:10:20.067203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.221 [2024-07-15 16:10:20.067215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.221 qpair failed and we were unable to recover it. 00:28:51.221 [2024-07-15 16:10:20.067411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.221 [2024-07-15 16:10:20.067423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.221 qpair failed and we were unable to recover it. 00:28:51.221 [2024-07-15 16:10:20.067621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.221 [2024-07-15 16:10:20.067632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.221 qpair failed and we were unable to recover it. 00:28:51.221 [2024-07-15 16:10:20.067744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.221 [2024-07-15 16:10:20.067755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.221 qpair failed and we were unable to recover it. 00:28:51.221 [2024-07-15 16:10:20.067937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.221 [2024-07-15 16:10:20.067949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.221 qpair failed and we were unable to recover it. 00:28:51.221 [2024-07-15 16:10:20.068032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.221 [2024-07-15 16:10:20.068042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.221 qpair failed and we were unable to recover it. 00:28:51.221 [2024-07-15 16:10:20.068146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.221 [2024-07-15 16:10:20.068156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.221 qpair failed and we were unable to recover it. 
00:28:51.221 [2024-07-15 16:10:20.068330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.221 [2024-07-15 16:10:20.068368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.221 qpair failed and we were unable to recover it. 00:28:51.221 [2024-07-15 16:10:20.068517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.221 [2024-07-15 16:10:20.068548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.221 qpair failed and we were unable to recover it. 00:28:51.221 [2024-07-15 16:10:20.068764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.221 [2024-07-15 16:10:20.068794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.221 qpair failed and we were unable to recover it. 00:28:51.221 [2024-07-15 16:10:20.069004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.221 [2024-07-15 16:10:20.069034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.221 qpair failed and we were unable to recover it. 00:28:51.221 [2024-07-15 16:10:20.069177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.221 [2024-07-15 16:10:20.069207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.221 qpair failed and we were unable to recover it. 00:28:51.221 [2024-07-15 16:10:20.069397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.221 [2024-07-15 16:10:20.069409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.221 qpair failed and we were unable to recover it. 00:28:51.221 [2024-07-15 16:10:20.069594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.221 [2024-07-15 16:10:20.069624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.221 qpair failed and we were unable to recover it. 00:28:51.221 [2024-07-15 16:10:20.069851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.221 [2024-07-15 16:10:20.069882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.221 qpair failed and we were unable to recover it. 00:28:51.221 [2024-07-15 16:10:20.070038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.221 [2024-07-15 16:10:20.070069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.221 qpair failed and we were unable to recover it. 00:28:51.221 [2024-07-15 16:10:20.070339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.221 [2024-07-15 16:10:20.070371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.221 qpair failed and we were unable to recover it. 
00:28:51.221 [2024-07-15 16:10:20.070607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.221 [2024-07-15 16:10:20.070619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.221 qpair failed and we were unable to recover it. 00:28:51.221 [2024-07-15 16:10:20.070853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.221 [2024-07-15 16:10:20.070884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.221 qpair failed and we were unable to recover it. 00:28:51.221 [2024-07-15 16:10:20.071041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.221 [2024-07-15 16:10:20.071072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.221 qpair failed and we were unable to recover it. 00:28:51.221 [2024-07-15 16:10:20.071206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.221 [2024-07-15 16:10:20.071242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.221 qpair failed and we were unable to recover it. 00:28:51.221 [2024-07-15 16:10:20.071539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.221 [2024-07-15 16:10:20.071570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.221 qpair failed and we were unable to recover it. 00:28:51.221 [2024-07-15 16:10:20.071784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.221 [2024-07-15 16:10:20.071815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.221 qpair failed and we were unable to recover it. 00:28:51.221 [2024-07-15 16:10:20.072078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.221 [2024-07-15 16:10:20.072090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.221 qpair failed and we were unable to recover it. 00:28:51.221 [2024-07-15 16:10:20.072328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.221 [2024-07-15 16:10:20.072340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.221 qpair failed and we were unable to recover it. 00:28:51.221 [2024-07-15 16:10:20.072559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.221 [2024-07-15 16:10:20.072591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.221 qpair failed and we were unable to recover it. 00:28:51.221 [2024-07-15 16:10:20.072857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.221 [2024-07-15 16:10:20.072895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.221 qpair failed and we were unable to recover it. 
00:28:51.221 [2024-07-15 16:10:20.073076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.221 [2024-07-15 16:10:20.073087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.221 qpair failed and we were unable to recover it. 00:28:51.221 [2024-07-15 16:10:20.073263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.221 [2024-07-15 16:10:20.073295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.221 qpair failed and we were unable to recover it. 00:28:51.221 [2024-07-15 16:10:20.073454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.221 [2024-07-15 16:10:20.073486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.221 qpair failed and we were unable to recover it. 00:28:51.221 [2024-07-15 16:10:20.073772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.221 [2024-07-15 16:10:20.073802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.221 qpair failed and we were unable to recover it. 00:28:51.221 [2024-07-15 16:10:20.074040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.221 [2024-07-15 16:10:20.074071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.221 qpair failed and we were unable to recover it. 00:28:51.221 [2024-07-15 16:10:20.074271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.221 [2024-07-15 16:10:20.074303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.221 qpair failed and we were unable to recover it. 00:28:51.221 [2024-07-15 16:10:20.074452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.221 [2024-07-15 16:10:20.074464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.221 qpair failed and we were unable to recover it. 00:28:51.221 [2024-07-15 16:10:20.074666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.221 [2024-07-15 16:10:20.074698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.221 qpair failed and we were unable to recover it. 00:28:51.221 [2024-07-15 16:10:20.074952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.221 [2024-07-15 16:10:20.074983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.221 qpair failed and we were unable to recover it. 00:28:51.221 [2024-07-15 16:10:20.075183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.221 [2024-07-15 16:10:20.075214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.221 qpair failed and we were unable to recover it. 
00:28:51.221 [2024-07-15 16:10:20.075483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.221 [2024-07-15 16:10:20.075514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.221 qpair failed and we were unable to recover it. 00:28:51.221 [2024-07-15 16:10:20.075733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.222 [2024-07-15 16:10:20.075764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.222 qpair failed and we were unable to recover it. 00:28:51.222 [2024-07-15 16:10:20.075966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.222 [2024-07-15 16:10:20.075997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.222 qpair failed and we were unable to recover it. 00:28:51.222 [2024-07-15 16:10:20.076144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.222 [2024-07-15 16:10:20.076175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.222 qpair failed and we were unable to recover it. 00:28:51.222 [2024-07-15 16:10:20.076381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.222 [2024-07-15 16:10:20.076413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.222 qpair failed and we were unable to recover it. 00:28:51.222 [2024-07-15 16:10:20.076648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.222 [2024-07-15 16:10:20.076659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.222 qpair failed and we were unable to recover it. 00:28:51.222 [2024-07-15 16:10:20.076780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.222 [2024-07-15 16:10:20.076810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.222 qpair failed and we were unable to recover it. 00:28:51.222 [2024-07-15 16:10:20.077113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.222 [2024-07-15 16:10:20.077143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.222 qpair failed and we were unable to recover it. 00:28:51.222 [2024-07-15 16:10:20.077341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.222 [2024-07-15 16:10:20.077373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.222 qpair failed and we were unable to recover it. 00:28:51.222 [2024-07-15 16:10:20.077520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.222 [2024-07-15 16:10:20.077551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.222 qpair failed and we were unable to recover it. 
00:28:51.222 [2024-07-15 16:10:20.077818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.222 [2024-07-15 16:10:20.077854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.222 qpair failed and we were unable to recover it. 00:28:51.222 [2024-07-15 16:10:20.078095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.222 [2024-07-15 16:10:20.078127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.222 qpair failed and we were unable to recover it. 00:28:51.222 [2024-07-15 16:10:20.078332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.222 [2024-07-15 16:10:20.078364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.222 qpair failed and we were unable to recover it. 00:28:51.222 [2024-07-15 16:10:20.078510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.222 [2024-07-15 16:10:20.078521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.222 qpair failed and we were unable to recover it. 00:28:51.222 [2024-07-15 16:10:20.078730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.222 [2024-07-15 16:10:20.078762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.222 qpair failed and we were unable to recover it. 00:28:51.222 [2024-07-15 16:10:20.078966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.222 [2024-07-15 16:10:20.078996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.222 qpair failed and we were unable to recover it. 00:28:51.222 [2024-07-15 16:10:20.079267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.222 [2024-07-15 16:10:20.079299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.222 qpair failed and we were unable to recover it. 00:28:51.222 [2024-07-15 16:10:20.079444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.222 [2024-07-15 16:10:20.079475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.222 qpair failed and we were unable to recover it. 00:28:51.222 [2024-07-15 16:10:20.079609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.222 [2024-07-15 16:10:20.079639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.222 qpair failed and we were unable to recover it. 00:28:51.222 [2024-07-15 16:10:20.079789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.222 [2024-07-15 16:10:20.079820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.222 qpair failed and we were unable to recover it. 
00:28:51.222 [2024-07-15 16:10:20.080105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.222 [2024-07-15 16:10:20.080136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.222 qpair failed and we were unable to recover it. 00:28:51.222 [2024-07-15 16:10:20.080360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.222 [2024-07-15 16:10:20.080392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.222 qpair failed and we were unable to recover it. 00:28:51.222 [2024-07-15 16:10:20.080657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.222 [2024-07-15 16:10:20.080688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.222 qpair failed and we were unable to recover it. 00:28:51.222 [2024-07-15 16:10:20.080853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.222 [2024-07-15 16:10:20.080884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.222 qpair failed and we were unable to recover it. 00:28:51.222 [2024-07-15 16:10:20.081083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.222 [2024-07-15 16:10:20.081094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.222 qpair failed and we were unable to recover it. 00:28:51.222 [2024-07-15 16:10:20.081186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.222 [2024-07-15 16:10:20.081196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.222 qpair failed and we were unable to recover it. 00:28:51.222 [2024-07-15 16:10:20.081358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.222 [2024-07-15 16:10:20.081371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.222 qpair failed and we were unable to recover it. 00:28:51.222 [2024-07-15 16:10:20.081512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.222 [2024-07-15 16:10:20.081524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.222 qpair failed and we were unable to recover it. 00:28:51.222 [2024-07-15 16:10:20.081587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.222 [2024-07-15 16:10:20.081598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.222 qpair failed and we were unable to recover it. 00:28:51.222 [2024-07-15 16:10:20.081705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.222 [2024-07-15 16:10:20.081716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.222 qpair failed and we were unable to recover it. 
00:28:51.222 [2024-07-15 16:10:20.081883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.222 [2024-07-15 16:10:20.081895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.222 qpair failed and we were unable to recover it. 00:28:51.222 [2024-07-15 16:10:20.082104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.222 [2024-07-15 16:10:20.082136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.222 qpair failed and we were unable to recover it. 00:28:51.222 [2024-07-15 16:10:20.082277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.222 [2024-07-15 16:10:20.082309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.222 qpair failed and we were unable to recover it. 00:28:51.222 [2024-07-15 16:10:20.082473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.222 [2024-07-15 16:10:20.082504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.222 qpair failed and we were unable to recover it. 00:28:51.222 [2024-07-15 16:10:20.082769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.222 [2024-07-15 16:10:20.082801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.222 qpair failed and we were unable to recover it. 00:28:51.222 [2024-07-15 16:10:20.083032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.222 [2024-07-15 16:10:20.083063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.222 qpair failed and we were unable to recover it. 00:28:51.222 [2024-07-15 16:10:20.083331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.222 [2024-07-15 16:10:20.083363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.222 qpair failed and we were unable to recover it. 00:28:51.222 [2024-07-15 16:10:20.083526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.222 [2024-07-15 16:10:20.083557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.222 qpair failed and we were unable to recover it. 00:28:51.222 [2024-07-15 16:10:20.083743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.222 [2024-07-15 16:10:20.083773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.222 qpair failed and we were unable to recover it. 00:28:51.222 [2024-07-15 16:10:20.083988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.222 [2024-07-15 16:10:20.084000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.222 qpair failed and we were unable to recover it. 
00:28:51.222 [2024-07-15 16:10:20.084201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.222 [2024-07-15 16:10:20.084240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.222 qpair failed and we were unable to recover it. 00:28:51.222 [2024-07-15 16:10:20.084440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.222 [2024-07-15 16:10:20.084471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.222 qpair failed and we were unable to recover it. 00:28:51.222 [2024-07-15 16:10:20.084782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.222 [2024-07-15 16:10:20.084813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.222 qpair failed and we were unable to recover it. 00:28:51.222 [2024-07-15 16:10:20.084959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.222 [2024-07-15 16:10:20.084990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.222 qpair failed and we were unable to recover it. 00:28:51.222 [2024-07-15 16:10:20.085254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.222 [2024-07-15 16:10:20.085286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.222 qpair failed and we were unable to recover it. 00:28:51.222 [2024-07-15 16:10:20.085518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.222 [2024-07-15 16:10:20.085549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.222 qpair failed and we were unable to recover it. 00:28:51.222 [2024-07-15 16:10:20.085762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.222 [2024-07-15 16:10:20.085792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.222 qpair failed and we were unable to recover it. 00:28:51.222 [2024-07-15 16:10:20.085938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.222 [2024-07-15 16:10:20.085969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.222 qpair failed and we were unable to recover it. 00:28:51.222 [2024-07-15 16:10:20.086117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.222 [2024-07-15 16:10:20.086147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.222 qpair failed and we were unable to recover it. 00:28:51.222 [2024-07-15 16:10:20.086413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.222 [2024-07-15 16:10:20.086444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.222 qpair failed and we were unable to recover it. 
00:28:51.222 [2024-07-15 16:10:20.086657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.222 [2024-07-15 16:10:20.086692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.222 qpair failed and we were unable to recover it. 00:28:51.223 [2024-07-15 16:10:20.086844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.223 [2024-07-15 16:10:20.086875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.223 qpair failed and we were unable to recover it. 00:28:51.223 [2024-07-15 16:10:20.087034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.223 [2024-07-15 16:10:20.087065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.223 qpair failed and we were unable to recover it. 00:28:51.223 [2024-07-15 16:10:20.087197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.223 [2024-07-15 16:10:20.087323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.223 qpair failed and we were unable to recover it. 00:28:51.223 [2024-07-15 16:10:20.087535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.223 [2024-07-15 16:10:20.087567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.223 qpair failed and we were unable to recover it. 00:28:51.223 [2024-07-15 16:10:20.087713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.223 [2024-07-15 16:10:20.087744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.223 qpair failed and we were unable to recover it. 00:28:51.223 [2024-07-15 16:10:20.087978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.223 [2024-07-15 16:10:20.088010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.223 qpair failed and we were unable to recover it. 00:28:51.223 [2024-07-15 16:10:20.088245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.223 [2024-07-15 16:10:20.088278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.223 qpair failed and we were unable to recover it. 00:28:51.223 [2024-07-15 16:10:20.088488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.223 [2024-07-15 16:10:20.088519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.223 qpair failed and we were unable to recover it. 00:28:51.223 [2024-07-15 16:10:20.088729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.223 [2024-07-15 16:10:20.088741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.223 qpair failed and we were unable to recover it. 
00:28:51.223 [2024-07-15 16:10:20.088808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.223 [2024-07-15 16:10:20.088818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420
00:28:51.223 qpair failed and we were unable to recover it.
[... the same three-record failure (connect() errno = 111, sock connection error on tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420, "qpair failed and we were unable to recover it.") repeats for every reconnect attempt from 16:10:20.088979 through 16:10:20.121406; duplicate records elided ...]
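errno = 111 is ECONNREFUSED on Linux: every retry above fails because nothing is listening on 10.0.0.2:4420 (the NVMe/TCP port) while the target application is down. A minimal sketch that reproduces the same errno against a port with no listener; it assumes only bash's /dev/tcp support on a Linux host and is not SPDK code, and 127.0.0.1 stands in for the log's 10.0.0.2 so the refusal is immediate:

# Sketch only, not SPDK's posix.c: trigger errno 111 (ECONNREFUSED) by
# connecting to a port with no listener. 4420 is the NVMe/TCP port from the log.
if exec 3<>/dev/tcp/127.0.0.1/4420; then
    echo "unexpected: something is listening on 4420"
    exec 3>&-    # close the fd if the connect happened to succeed
else
    echo "connect() failed with ECONNREFUSED (errno 111)"   # bash prints: connect: Connection refused
fi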
00:28:51.521 [2024-07-15 16:10:20.121617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.521 [2024-07-15 16:10:20.121647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.521 qpair failed and we were unable to recover it. 00:28:51.521 [2024-07-15 16:10:20.121870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.521 [2024-07-15 16:10:20.121900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.521 qpair failed and we were unable to recover it. 00:28:51.521 [2024-07-15 16:10:20.122103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.521 [2024-07-15 16:10:20.122134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.521 qpair failed and we were unable to recover it. 00:28:51.521 [2024-07-15 16:10:20.122405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.521 [2024-07-15 16:10:20.122436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.521 qpair failed and we were unable to recover it. 00:28:51.521 [2024-07-15 16:10:20.122671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.521 [2024-07-15 16:10:20.122683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.521 qpair failed and we were unable to recover it. 00:28:51.521 [2024-07-15 16:10:20.122853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.521 [2024-07-15 16:10:20.122874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.521 qpair failed and we were unable to recover it. 00:28:51.521 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 3924997 Killed "${NVMF_APP[@]}" "$@" 00:28:51.521 [2024-07-15 16:10:20.123022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.521 [2024-07-15 16:10:20.123034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.521 qpair failed and we were unable to recover it. 00:28:51.521 [2024-07-15 16:10:20.123133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.521 [2024-07-15 16:10:20.123144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.521 qpair failed and we were unable to recover it. 00:28:51.521 [2024-07-15 16:10:20.123336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.521 [2024-07-15 16:10:20.123348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.521 qpair failed and we were unable to recover it. 
00:28:51.521 [2024-07-15 16:10:20.123522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.521 [2024-07-15 16:10:20.123534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.521 qpair failed and we were unable to recover it. 00:28:51.521 16:10:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 10.0.0.2 00:28:51.522 [2024-07-15 16:10:20.123728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.522 [2024-07-15 16:10:20.123740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.522 qpair failed and we were unable to recover it. 00:28:51.522 [2024-07-15 16:10:20.123909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.522 [2024-07-15 16:10:20.123921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.522 qpair failed and we were unable to recover it. 00:28:51.522 16:10:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:28:51.522 [2024-07-15 16:10:20.124016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.522 [2024-07-15 16:10:20.124027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.522 qpair failed and we were unable to recover it. 00:28:51.522 [2024-07-15 16:10:20.124116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.522 [2024-07-15 16:10:20.124126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.522 qpair failed and we were unable to recover it. 00:28:51.522 16:10:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:28:51.522 [2024-07-15 16:10:20.124231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.522 [2024-07-15 16:10:20.124245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.522 qpair failed and we were unable to recover it. 00:28:51.522 [2024-07-15 16:10:20.124409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.522 [2024-07-15 16:10:20.124424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.522 qpair failed and we were unable to recover it. 00:28:51.522 16:10:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable 00:28:51.522 [2024-07-15 16:10:20.124649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.522 [2024-07-15 16:10:20.124661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.522 qpair failed and we were unable to recover it. 
00:28:51.522 [2024-07-15 16:10:20.124748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.522 [2024-07-15 16:10:20.124758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.522 qpair failed and we were unable to recover it. 00:28:51.522 16:10:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:51.522 [2024-07-15 16:10:20.125025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.522 [2024-07-15 16:10:20.125037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.522 qpair failed and we were unable to recover it. 00:28:51.522 [2024-07-15 16:10:20.125192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.522 [2024-07-15 16:10:20.125204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.522 qpair failed and we were unable to recover it. 00:28:51.522 [2024-07-15 16:10:20.125319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.522 [2024-07-15 16:10:20.125330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.522 qpair failed and we were unable to recover it. 00:28:51.522 [2024-07-15 16:10:20.125496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.522 [2024-07-15 16:10:20.125507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.522 qpair failed and we were unable to recover it. 00:28:51.522 [2024-07-15 16:10:20.125617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.522 [2024-07-15 16:10:20.125628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.522 qpair failed and we were unable to recover it. 00:28:51.522 [2024-07-15 16:10:20.125784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.522 [2024-07-15 16:10:20.125795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.522 qpair failed and we were unable to recover it. 00:28:51.522 [2024-07-15 16:10:20.125966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.522 [2024-07-15 16:10:20.125978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.522 qpair failed and we were unable to recover it. 00:28:51.522 [2024-07-15 16:10:20.126143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.522 [2024-07-15 16:10:20.126155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.522 qpair failed and we were unable to recover it. 
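The "Killed" line and the nvmfappstart trace above are the target_disconnect test doing its job: the NVMf target process is taken down with SIGKILL (the shell's "Killed" message), the host's reconnect attempts are refused while it is gone, and nvmfappstart -m 0xF0 starts a fresh target. A rough sketch of that pattern, assuming SPDK's usual build/bin/nvmf_tgt binary; the variable names here are illustrative, not the script's actual contents:

# Illustrative sketch of the disconnect/restart pattern, not the SPDK script itself.
kill -9 "$nvmf_tgt_pid"          # the shell reports this as: Killed "${NVMF_APP[@]}" "$@"
# ... the initiator keeps retrying connect() to 10.0.0.2:4420, each attempt refused ...
./build/bin/nvmf_tgt -m 0xF0 &   # restart the target with the core mask seen in the trace
nvmf_tgt_pid=$!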
00:28:51.522 [2024-07-15 16:10:20.126734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.522 [2024-07-15 16:10:20.126770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420
00:28:51.522 qpair failed and we were unable to recover it.
00:28:51.522 [2024-07-15 16:10:20.126914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.522 [2024-07-15 16:10:20.126951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420
00:28:51.522 qpair failed and we were unable to recover it.
00:28:51.523 16:10:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=3925724
00:28:51.523 16:10:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 3925724
00:28:51.523 16:10:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0
00:28:51.523 16:10:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@829 -- # '[' -z 3925724 ']'
00:28:51.523 16:10:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:28:51.523 16:10:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@834 -- # local max_retries=100
00:28:51.523 16:10:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:28:51.523 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:28:51.523 16:10:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # xtrace_disable
00:28:51.523 16:10:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
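waitforlisten above is a shell helper from autotest_common.sh: it checks that the new nvmf_tgt process (pid 3925724) is still alive and then polls until its RPC server accepts connections on the UNIX domain socket /var/tmp/spdk.sock, giving up after max_retries (100) attempts. A hedged C sketch of that polling idea follows; the 100 ms retry interval is an assumption, not taken from the script:

/* Hedged sketch of the waitforlisten idea (the real helper is a bash
 * function): retry connecting to the UNIX domain RPC socket until the
 * freshly started target accepts, or give up after max_retries tries. */
#include <string.h>
#include <unistd.h>
#include <sys/socket.h>
#include <sys/un.h>

static int wait_for_listen(const char *path, int max_retries)
{
    struct sockaddr_un addr = { .sun_family = AF_UNIX };

    strncpy(addr.sun_path, path, sizeof(addr.sun_path) - 1);
    for (int i = 0; i < max_retries; i++) {
        int fd = socket(AF_UNIX, SOCK_STREAM, 0);

        if (fd >= 0 && connect(fd, (struct sockaddr *)&addr, sizeof(addr)) == 0) {
            close(fd);
            return 0;              /* target is up and listening */
        }
        if (fd >= 0)
            close(fd);
        usleep(100 * 1000);        /* 100 ms between attempts (assumption) */
    }
    return -1;                     /* socket never came up */
}

int main(void)
{
    /* Path and retry count taken from the trace above. */
    return wait_for_listen("/var/tmp/spdk.sock", 100) == 0 ? 0 : 1;
}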
00:28:51.524 [2024-07-15 16:10:20.133749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.524 [2024-07-15 16:10:20.133760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.524 qpair failed and we were unable to recover it. 00:28:51.524 [2024-07-15 16:10:20.133858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.524 [2024-07-15 16:10:20.133871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.524 qpair failed and we were unable to recover it. 00:28:51.524 [2024-07-15 16:10:20.134048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.524 [2024-07-15 16:10:20.134060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.524 qpair failed and we were unable to recover it. 00:28:51.524 [2024-07-15 16:10:20.134310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.524 [2024-07-15 16:10:20.134323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.524 qpair failed and we were unable to recover it. 00:28:51.524 [2024-07-15 16:10:20.134425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.524 [2024-07-15 16:10:20.134436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.524 qpair failed and we were unable to recover it. 00:28:51.524 [2024-07-15 16:10:20.134534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.524 [2024-07-15 16:10:20.134545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.524 qpair failed and we were unable to recover it. 00:28:51.524 [2024-07-15 16:10:20.134701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.524 [2024-07-15 16:10:20.134711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.524 qpair failed and we were unable to recover it. 00:28:51.524 [2024-07-15 16:10:20.134829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.524 [2024-07-15 16:10:20.134839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.524 qpair failed and we were unable to recover it. 00:28:51.524 [2024-07-15 16:10:20.135007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.524 [2024-07-15 16:10:20.135018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.524 qpair failed and we were unable to recover it. 00:28:51.524 [2024-07-15 16:10:20.135239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.524 [2024-07-15 16:10:20.135252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.524 qpair failed and we were unable to recover it. 
00:28:51.524 [2024-07-15 16:10:20.135415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.524 [2024-07-15 16:10:20.135427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.524 qpair failed and we were unable to recover it. 00:28:51.524 [2024-07-15 16:10:20.135601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.524 [2024-07-15 16:10:20.135613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.524 qpair failed and we were unable to recover it. 00:28:51.524 [2024-07-15 16:10:20.135722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.524 [2024-07-15 16:10:20.135733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.524 qpair failed and we were unable to recover it. 00:28:51.524 [2024-07-15 16:10:20.135960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.524 [2024-07-15 16:10:20.135971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.524 qpair failed and we were unable to recover it. 00:28:51.524 [2024-07-15 16:10:20.136124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.524 [2024-07-15 16:10:20.136135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.524 qpair failed and we were unable to recover it. 00:28:51.524 [2024-07-15 16:10:20.136290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.524 [2024-07-15 16:10:20.136303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.524 qpair failed and we were unable to recover it. 00:28:51.524 [2024-07-15 16:10:20.136385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.524 [2024-07-15 16:10:20.136396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.524 qpair failed and we were unable to recover it. 00:28:51.524 [2024-07-15 16:10:20.136569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.524 [2024-07-15 16:10:20.136580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.524 qpair failed and we were unable to recover it. 00:28:51.524 [2024-07-15 16:10:20.136758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.524 [2024-07-15 16:10:20.136769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.524 qpair failed and we were unable to recover it. 00:28:51.524 [2024-07-15 16:10:20.136943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.524 [2024-07-15 16:10:20.136957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.524 qpair failed and we were unable to recover it. 
00:28:51.524 [2024-07-15 16:10:20.137130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.524 [2024-07-15 16:10:20.137141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.524 qpair failed and we were unable to recover it. 00:28:51.524 [2024-07-15 16:10:20.137251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.524 [2024-07-15 16:10:20.137263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.524 qpair failed and we were unable to recover it. 00:28:51.524 [2024-07-15 16:10:20.137485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.524 [2024-07-15 16:10:20.137496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.524 qpair failed and we were unable to recover it. 00:28:51.524 [2024-07-15 16:10:20.137721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.524 [2024-07-15 16:10:20.137733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.524 qpair failed and we were unable to recover it. 00:28:51.524 [2024-07-15 16:10:20.137898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.524 [2024-07-15 16:10:20.137910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.524 qpair failed and we were unable to recover it. 00:28:51.524 [2024-07-15 16:10:20.137998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.524 [2024-07-15 16:10:20.138007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.524 qpair failed and we were unable to recover it. 00:28:51.524 [2024-07-15 16:10:20.138227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.524 [2024-07-15 16:10:20.138240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.524 qpair failed and we were unable to recover it. 00:28:51.524 [2024-07-15 16:10:20.138494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.524 [2024-07-15 16:10:20.138506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.524 qpair failed and we were unable to recover it. 00:28:51.524 [2024-07-15 16:10:20.138626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.524 [2024-07-15 16:10:20.138637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.524 qpair failed and we were unable to recover it. 00:28:51.524 [2024-07-15 16:10:20.138801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.524 [2024-07-15 16:10:20.138813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.524 qpair failed and we were unable to recover it. 
00:28:51.524 [2024-07-15 16:10:20.138923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.524 [2024-07-15 16:10:20.138934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.524 qpair failed and we were unable to recover it. 00:28:51.524 [2024-07-15 16:10:20.139044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.524 [2024-07-15 16:10:20.139056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.524 qpair failed and we were unable to recover it. 00:28:51.524 [2024-07-15 16:10:20.139161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.524 [2024-07-15 16:10:20.139172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.524 qpair failed and we were unable to recover it. 00:28:51.524 [2024-07-15 16:10:20.139309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.524 [2024-07-15 16:10:20.139321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.524 qpair failed and we were unable to recover it. 00:28:51.524 [2024-07-15 16:10:20.139546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.524 [2024-07-15 16:10:20.139558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.524 qpair failed and we were unable to recover it. 00:28:51.524 [2024-07-15 16:10:20.139663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.524 [2024-07-15 16:10:20.139674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.525 qpair failed and we were unable to recover it. 00:28:51.525 [2024-07-15 16:10:20.139782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.525 [2024-07-15 16:10:20.139793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.525 qpair failed and we were unable to recover it. 00:28:51.525 [2024-07-15 16:10:20.139896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.525 [2024-07-15 16:10:20.139907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.525 qpair failed and we were unable to recover it. 00:28:51.525 [2024-07-15 16:10:20.140020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.525 [2024-07-15 16:10:20.140031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.525 qpair failed and we were unable to recover it. 00:28:51.525 [2024-07-15 16:10:20.140150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.525 [2024-07-15 16:10:20.140162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.525 qpair failed and we were unable to recover it. 
00:28:51.525 [2024-07-15 16:10:20.140378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.525 [2024-07-15 16:10:20.140390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.525 qpair failed and we were unable to recover it. 00:28:51.525 [2024-07-15 16:10:20.140481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.525 [2024-07-15 16:10:20.140491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.525 qpair failed and we were unable to recover it. 00:28:51.525 [2024-07-15 16:10:20.140586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.525 [2024-07-15 16:10:20.140598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.525 qpair failed and we were unable to recover it. 00:28:51.525 [2024-07-15 16:10:20.140696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.525 [2024-07-15 16:10:20.140707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.525 qpair failed and we were unable to recover it. 00:28:51.525 [2024-07-15 16:10:20.140901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.525 [2024-07-15 16:10:20.140913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.525 qpair failed and we were unable to recover it. 00:28:51.525 [2024-07-15 16:10:20.141005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.525 [2024-07-15 16:10:20.141017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.525 qpair failed and we were unable to recover it. 00:28:51.525 [2024-07-15 16:10:20.141216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.525 [2024-07-15 16:10:20.141232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.525 qpair failed and we were unable to recover it. 00:28:51.525 [2024-07-15 16:10:20.141321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.525 [2024-07-15 16:10:20.141331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.525 qpair failed and we were unable to recover it. 00:28:51.525 [2024-07-15 16:10:20.141580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.525 [2024-07-15 16:10:20.141592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.525 qpair failed and we were unable to recover it. 00:28:51.525 [2024-07-15 16:10:20.141696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.525 [2024-07-15 16:10:20.141706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.525 qpair failed and we were unable to recover it. 
00:28:51.525 [2024-07-15 16:10:20.141796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.525 [2024-07-15 16:10:20.141807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.525 qpair failed and we were unable to recover it. 00:28:51.525 [2024-07-15 16:10:20.141915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.525 [2024-07-15 16:10:20.141927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.525 qpair failed and we were unable to recover it. 00:28:51.525 [2024-07-15 16:10:20.142103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.525 [2024-07-15 16:10:20.142115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.525 qpair failed and we were unable to recover it. 00:28:51.525 [2024-07-15 16:10:20.142220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.525 [2024-07-15 16:10:20.142239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.525 qpair failed and we were unable to recover it. 00:28:51.525 [2024-07-15 16:10:20.142393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.525 [2024-07-15 16:10:20.142404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.525 qpair failed and we were unable to recover it. 00:28:51.525 [2024-07-15 16:10:20.142510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.525 [2024-07-15 16:10:20.142521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.525 qpair failed and we were unable to recover it. 00:28:51.525 [2024-07-15 16:10:20.142683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.525 [2024-07-15 16:10:20.142694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.525 qpair failed and we were unable to recover it. 00:28:51.525 [2024-07-15 16:10:20.142891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.525 [2024-07-15 16:10:20.142903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.525 qpair failed and we were unable to recover it. 00:28:51.525 [2024-07-15 16:10:20.143073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.525 [2024-07-15 16:10:20.143084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.525 qpair failed and we were unable to recover it. 00:28:51.525 [2024-07-15 16:10:20.143237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.525 [2024-07-15 16:10:20.143251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.525 qpair failed and we were unable to recover it. 
00:28:51.525 [2024-07-15 16:10:20.143371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.525 [2024-07-15 16:10:20.143383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.525 qpair failed and we were unable to recover it. 00:28:51.525 [2024-07-15 16:10:20.143559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.525 [2024-07-15 16:10:20.143570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.525 qpair failed and we were unable to recover it. 00:28:51.525 [2024-07-15 16:10:20.143802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.525 [2024-07-15 16:10:20.143814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.525 qpair failed and we were unable to recover it. 00:28:51.525 [2024-07-15 16:10:20.144049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.525 [2024-07-15 16:10:20.144061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.525 qpair failed and we were unable to recover it. 00:28:51.525 [2024-07-15 16:10:20.144215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.525 [2024-07-15 16:10:20.144230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.525 qpair failed and we were unable to recover it. 00:28:51.525 [2024-07-15 16:10:20.144328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.525 [2024-07-15 16:10:20.144341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.525 qpair failed and we were unable to recover it. 00:28:51.525 [2024-07-15 16:10:20.144583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.525 [2024-07-15 16:10:20.144595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.525 qpair failed and we were unable to recover it. 00:28:51.525 [2024-07-15 16:10:20.144700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.525 [2024-07-15 16:10:20.144712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.525 qpair failed and we were unable to recover it. 00:28:51.525 [2024-07-15 16:10:20.144892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.525 [2024-07-15 16:10:20.144904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.525 qpair failed and we were unable to recover it. 00:28:51.525 [2024-07-15 16:10:20.145025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.525 [2024-07-15 16:10:20.145036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.525 qpair failed and we were unable to recover it. 
00:28:51.525 [2024-07-15 16:10:20.145209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.525 [2024-07-15 16:10:20.145220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.525 qpair failed and we were unable to recover it. 00:28:51.525 [2024-07-15 16:10:20.145332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.525 [2024-07-15 16:10:20.145344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.525 qpair failed and we were unable to recover it. 00:28:51.525 [2024-07-15 16:10:20.145424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.526 [2024-07-15 16:10:20.145435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.526 qpair failed and we were unable to recover it. 00:28:51.526 [2024-07-15 16:10:20.145612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.526 [2024-07-15 16:10:20.145624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.526 qpair failed and we were unable to recover it. 00:28:51.526 [2024-07-15 16:10:20.145733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.526 [2024-07-15 16:10:20.145745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.526 qpair failed and we were unable to recover it. 00:28:51.526 [2024-07-15 16:10:20.145832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.526 [2024-07-15 16:10:20.145843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.526 qpair failed and we were unable to recover it. 00:28:51.526 [2024-07-15 16:10:20.146012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.526 [2024-07-15 16:10:20.146023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.526 qpair failed and we were unable to recover it. 00:28:51.526 [2024-07-15 16:10:20.146089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.526 [2024-07-15 16:10:20.146099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.526 qpair failed and we were unable to recover it. 00:28:51.526 [2024-07-15 16:10:20.146251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.526 [2024-07-15 16:10:20.146263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.526 qpair failed and we were unable to recover it. 00:28:51.526 [2024-07-15 16:10:20.146358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.526 [2024-07-15 16:10:20.146369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.526 qpair failed and we were unable to recover it. 
00:28:51.526 [2024-07-15 16:10:20.146534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.526 [2024-07-15 16:10:20.146546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.526 qpair failed and we were unable to recover it. 00:28:51.526 [2024-07-15 16:10:20.146738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.526 [2024-07-15 16:10:20.146749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.526 qpair failed and we were unable to recover it. 00:28:51.526 [2024-07-15 16:10:20.146862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.526 [2024-07-15 16:10:20.146874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.526 qpair failed and we were unable to recover it. 00:28:51.526 [2024-07-15 16:10:20.146987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.526 [2024-07-15 16:10:20.146999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.526 qpair failed and we were unable to recover it. 00:28:51.526 [2024-07-15 16:10:20.147115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.526 [2024-07-15 16:10:20.147126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.526 qpair failed and we were unable to recover it. 00:28:51.526 [2024-07-15 16:10:20.147273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.526 [2024-07-15 16:10:20.147284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.526 qpair failed and we were unable to recover it. 00:28:51.526 [2024-07-15 16:10:20.147389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.526 [2024-07-15 16:10:20.147400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.526 qpair failed and we were unable to recover it. 00:28:51.526 [2024-07-15 16:10:20.147503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.526 [2024-07-15 16:10:20.147516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.526 qpair failed and we were unable to recover it. 00:28:51.526 [2024-07-15 16:10:20.147614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.526 [2024-07-15 16:10:20.147626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.526 qpair failed and we were unable to recover it. 00:28:51.526 [2024-07-15 16:10:20.147738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.526 [2024-07-15 16:10:20.147750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.526 qpair failed and we were unable to recover it. 
00:28:51.526 [2024-07-15 16:10:20.147881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.526 [2024-07-15 16:10:20.147892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.526 qpair failed and we were unable to recover it. 00:28:51.526 [2024-07-15 16:10:20.148000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.526 [2024-07-15 16:10:20.148012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.526 qpair failed and we were unable to recover it. 00:28:51.526 [2024-07-15 16:10:20.148166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.526 [2024-07-15 16:10:20.148179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.526 qpair failed and we were unable to recover it. 00:28:51.526 [2024-07-15 16:10:20.148291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.526 [2024-07-15 16:10:20.148303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.526 qpair failed and we were unable to recover it. 00:28:51.526 [2024-07-15 16:10:20.148397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.526 [2024-07-15 16:10:20.148409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.526 qpair failed and we were unable to recover it. 00:28:51.526 [2024-07-15 16:10:20.148505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.526 [2024-07-15 16:10:20.148516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.526 qpair failed and we were unable to recover it. 00:28:51.526 [2024-07-15 16:10:20.148682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.526 [2024-07-15 16:10:20.148693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.526 qpair failed and we were unable to recover it. 00:28:51.526 [2024-07-15 16:10:20.148839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.526 [2024-07-15 16:10:20.148850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.526 qpair failed and we were unable to recover it. 00:28:51.526 [2024-07-15 16:10:20.148984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.526 [2024-07-15 16:10:20.148995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.526 qpair failed and we were unable to recover it. 00:28:51.526 [2024-07-15 16:10:20.149101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.526 [2024-07-15 16:10:20.149116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.526 qpair failed and we were unable to recover it. 
00:28:51.526 [2024-07-15 16:10:20.149257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.526 [2024-07-15 16:10:20.149269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.526 qpair failed and we were unable to recover it. 00:28:51.526 [2024-07-15 16:10:20.149367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.526 [2024-07-15 16:10:20.149379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.526 qpair failed and we were unable to recover it. 00:28:51.526 [2024-07-15 16:10:20.149548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.526 [2024-07-15 16:10:20.149560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.526 qpair failed and we were unable to recover it. 00:28:51.526 [2024-07-15 16:10:20.149668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.526 [2024-07-15 16:10:20.149680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.526 qpair failed and we were unable to recover it. 00:28:51.526 [2024-07-15 16:10:20.149792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.526 [2024-07-15 16:10:20.149803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.526 qpair failed and we were unable to recover it. 00:28:51.526 [2024-07-15 16:10:20.149981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.526 [2024-07-15 16:10:20.149992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.526 qpair failed and we were unable to recover it. 00:28:51.527 [2024-07-15 16:10:20.150090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.527 [2024-07-15 16:10:20.150101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.527 qpair failed and we were unable to recover it. 00:28:51.527 [2024-07-15 16:10:20.150196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.527 [2024-07-15 16:10:20.150208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.527 qpair failed and we were unable to recover it. 00:28:51.527 [2024-07-15 16:10:20.150325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.527 [2024-07-15 16:10:20.150337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.527 qpair failed and we were unable to recover it. 00:28:51.527 [2024-07-15 16:10:20.150518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.527 [2024-07-15 16:10:20.150534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.527 qpair failed and we were unable to recover it. 
00:28:51.527 [2024-07-15 16:10:20.150634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.527 [2024-07-15 16:10:20.150646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.527 qpair failed and we were unable to recover it. 00:28:51.527 [2024-07-15 16:10:20.150805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.527 [2024-07-15 16:10:20.150816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.527 qpair failed and we were unable to recover it. 00:28:51.527 [2024-07-15 16:10:20.150989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.527 [2024-07-15 16:10:20.151001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.527 qpair failed and we were unable to recover it. 00:28:51.527 [2024-07-15 16:10:20.151174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.527 [2024-07-15 16:10:20.151185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.527 qpair failed and we were unable to recover it. 00:28:51.527 [2024-07-15 16:10:20.151362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.527 [2024-07-15 16:10:20.151374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.527 qpair failed and we were unable to recover it. 00:28:51.527 [2024-07-15 16:10:20.151597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.527 [2024-07-15 16:10:20.151608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.527 qpair failed and we were unable to recover it. 00:28:51.527 [2024-07-15 16:10:20.151762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.527 [2024-07-15 16:10:20.151773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.527 qpair failed and we were unable to recover it. 00:28:51.527 [2024-07-15 16:10:20.151940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.527 [2024-07-15 16:10:20.151952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.527 qpair failed and we were unable to recover it. 00:28:51.527 [2024-07-15 16:10:20.152070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.527 [2024-07-15 16:10:20.152082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.527 qpair failed and we were unable to recover it. 00:28:51.527 [2024-07-15 16:10:20.152254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.527 [2024-07-15 16:10:20.152266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.527 qpair failed and we were unable to recover it. 
00:28:51.527 [2024-07-15 16:10:20.152365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.527 [2024-07-15 16:10:20.152376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420
00:28:51.527 qpair failed and we were unable to recover it.
00:28:51.527 [2024-07-15 16:10:20.152482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.527 [2024-07-15 16:10:20.152493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420
00:28:51.527 qpair failed and we were unable to recover it.
00:28:51.527 [2024-07-15 16:10:20.152615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.527 [2024-07-15 16:10:20.152626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420
00:28:51.527 qpair failed and we were unable to recover it.
00:28:51.527 [2024-07-15 16:10:20.152779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.527 [2024-07-15 16:10:20.152790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420
00:28:51.527 qpair failed and we were unable to recover it.
00:28:51.527 [2024-07-15 16:10:20.152857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.527 [2024-07-15 16:10:20.152869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420
00:28:51.527 qpair failed and we were unable to recover it.
00:28:51.527 [2024-07-15 16:10:20.152960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.527 [2024-07-15 16:10:20.152970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420
00:28:51.527 qpair failed and we were unable to recover it.
00:28:51.527 [2024-07-15 16:10:20.153087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.527 [2024-07-15 16:10:20.153097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420
00:28:51.527 qpair failed and we were unable to recover it.
00:28:51.527 [2024-07-15 16:10:20.153323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.527 [2024-07-15 16:10:20.153334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420
00:28:51.527 qpair failed and we were unable to recover it.
00:28:51.527 [2024-07-15 16:10:20.153485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.527 [2024-07-15 16:10:20.153496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420
00:28:51.527 qpair failed and we were unable to recover it.
00:28:51.527 [2024-07-15 16:10:20.153585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.527 [2024-07-15 16:10:20.153597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420
00:28:51.527 qpair failed and we were unable to recover it.
00:28:51.527 [2024-07-15 16:10:20.153761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.527 [2024-07-15 16:10:20.153773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420
00:28:51.527 qpair failed and we were unable to recover it.
00:28:51.527 [2024-07-15 16:10:20.153941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.527 [2024-07-15 16:10:20.153952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420
00:28:51.527 qpair failed and we were unable to recover it.
00:28:51.527 [2024-07-15 16:10:20.154139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.527 [2024-07-15 16:10:20.154150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420
00:28:51.527 qpair failed and we were unable to recover it.
00:28:51.527 [2024-07-15 16:10:20.154303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.527 [2024-07-15 16:10:20.154314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420
00:28:51.527 qpair failed and we were unable to recover it.
00:28:51.527 [2024-07-15 16:10:20.154513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.527 [2024-07-15 16:10:20.154524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420
00:28:51.527 qpair failed and we were unable to recover it.
00:28:51.527 [2024-07-15 16:10:20.154639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.527 [2024-07-15 16:10:20.154651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420
00:28:51.527 qpair failed and we were unable to recover it.
00:28:51.527 [2024-07-15 16:10:20.154884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.527 [2024-07-15 16:10:20.154896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420
00:28:51.527 qpair failed and we were unable to recover it.
00:28:51.527 [2024-07-15 16:10:20.155062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.527 [2024-07-15 16:10:20.155073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420
00:28:51.527 qpair failed and we were unable to recover it.
00:28:51.527 [2024-07-15 16:10:20.155243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.527 [2024-07-15 16:10:20.155254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420
00:28:51.527 qpair failed and we were unable to recover it.
00:28:51.527 [2024-07-15 16:10:20.155351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.527 [2024-07-15 16:10:20.155364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420
00:28:51.527 qpair failed and we were unable to recover it.
00:28:51.527 [2024-07-15 16:10:20.155555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.527 [2024-07-15 16:10:20.155566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420
00:28:51.527 qpair failed and we were unable to recover it.
00:28:51.527 [2024-07-15 16:10:20.155670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.527 [2024-07-15 16:10:20.155682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420
00:28:51.527 qpair failed and we were unable to recover it.
00:28:51.527 [2024-07-15 16:10:20.155865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.528 [2024-07-15 16:10:20.155876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420
00:28:51.528 qpair failed and we were unable to recover it.
00:28:51.528 [2024-07-15 16:10:20.155985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.528 [2024-07-15 16:10:20.155996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420
00:28:51.528 qpair failed and we were unable to recover it.
00:28:51.528 [2024-07-15 16:10:20.156169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.528 [2024-07-15 16:10:20.156180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420
00:28:51.528 qpair failed and we were unable to recover it.
00:28:51.528 [2024-07-15 16:10:20.156436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.528 [2024-07-15 16:10:20.156447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420
00:28:51.528 qpair failed and we were unable to recover it.
00:28:51.528 [2024-07-15 16:10:20.156561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.528 [2024-07-15 16:10:20.156572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420
00:28:51.528 qpair failed and we were unable to recover it.
00:28:51.528 [2024-07-15 16:10:20.156711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.528 [2024-07-15 16:10:20.156723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420
00:28:51.528 qpair failed and we were unable to recover it.
00:28:51.528 [2024-07-15 16:10:20.156879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.528 [2024-07-15 16:10:20.156890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420
00:28:51.528 qpair failed and we were unable to recover it.
00:28:51.528 [2024-07-15 16:10:20.156990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.528 [2024-07-15 16:10:20.157001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420
00:28:51.528 qpair failed and we were unable to recover it.
00:28:51.528 [2024-07-15 16:10:20.157240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.528 [2024-07-15 16:10:20.157251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420
00:28:51.528 qpair failed and we were unable to recover it.
00:28:51.528 [2024-07-15 16:10:20.157474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.528 [2024-07-15 16:10:20.157486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420
00:28:51.528 qpair failed and we were unable to recover it.
00:28:51.528 [2024-07-15 16:10:20.157663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.528 [2024-07-15 16:10:20.157674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420
00:28:51.528 qpair failed and we were unable to recover it.
00:28:51.528 [2024-07-15 16:10:20.157760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.528 [2024-07-15 16:10:20.157771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420
00:28:51.528 qpair failed and we were unable to recover it.
00:28:51.528 [2024-07-15 16:10:20.157929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.528 [2024-07-15 16:10:20.157940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420
00:28:51.528 qpair failed and we were unable to recover it.
00:28:51.528 [2024-07-15 16:10:20.158043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.528 [2024-07-15 16:10:20.158055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420
00:28:51.528 qpair failed and we were unable to recover it.
00:28:51.528 [2024-07-15 16:10:20.158229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.528 [2024-07-15 16:10:20.158240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420
00:28:51.528 qpair failed and we were unable to recover it.
00:28:51.528 [2024-07-15 16:10:20.158339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.528 [2024-07-15 16:10:20.158350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420
00:28:51.528 qpair failed and we were unable to recover it.
00:28:51.528 [2024-07-15 16:10:20.158533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.528 [2024-07-15 16:10:20.158544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420
00:28:51.528 qpair failed and we were unable to recover it.
00:28:51.528 [2024-07-15 16:10:20.158767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.528 [2024-07-15 16:10:20.158778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420
00:28:51.528 qpair failed and we were unable to recover it.
00:28:51.528 [2024-07-15 16:10:20.158881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.528 [2024-07-15 16:10:20.158892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420
00:28:51.528 qpair failed and we were unable to recover it.
00:28:51.528 [2024-07-15 16:10:20.159057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.528 [2024-07-15 16:10:20.159068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420
00:28:51.528 qpair failed and we were unable to recover it.
00:28:51.528 [2024-07-15 16:10:20.159159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.528 [2024-07-15 16:10:20.159171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420
00:28:51.528 qpair failed and we were unable to recover it.
00:28:51.528 [2024-07-15 16:10:20.159330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.528 [2024-07-15 16:10:20.159342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420
00:28:51.528 qpair failed and we were unable to recover it.
00:28:51.528 [2024-07-15 16:10:20.159443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.528 [2024-07-15 16:10:20.159454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420
00:28:51.528 qpair failed and we were unable to recover it.
00:28:51.528 [2024-07-15 16:10:20.159626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.528 [2024-07-15 16:10:20.159637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420
00:28:51.528 qpair failed and we were unable to recover it.
00:28:51.529 [2024-07-15 16:10:20.159855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.529 [2024-07-15 16:10:20.159890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420
00:28:51.529 qpair failed and we were unable to recover it.
00:28:51.529 [2024-07-15 16:10:20.160062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.529 [2024-07-15 16:10:20.160080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420
00:28:51.529 qpair failed and we were unable to recover it.
00:28:51.529 [2024-07-15 16:10:20.160200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.529 [2024-07-15 16:10:20.160215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420
00:28:51.529 qpair failed and we were unable to recover it.
00:28:51.529 [2024-07-15 16:10:20.160320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.529 [2024-07-15 16:10:20.160334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420
00:28:51.529 qpair failed and we were unable to recover it.
00:28:51.529 [2024-07-15 16:10:20.160533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.529 [2024-07-15 16:10:20.160548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420
00:28:51.529 qpair failed and we were unable to recover it.
00:28:51.529 [2024-07-15 16:10:20.160781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.529 [2024-07-15 16:10:20.160796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420
00:28:51.529 qpair failed and we were unable to recover it.
00:28:51.529 [2024-07-15 16:10:20.160912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.529 [2024-07-15 16:10:20.160925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420
00:28:51.529 qpair failed and we were unable to recover it.
00:28:51.529 [2024-07-15 16:10:20.161102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.529 [2024-07-15 16:10:20.161113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420
00:28:51.529 qpair failed and we were unable to recover it.
00:28:51.529 [2024-07-15 16:10:20.161277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.529 [2024-07-15 16:10:20.161289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420
00:28:51.529 qpair failed and we were unable to recover it.
00:28:51.529 [2024-07-15 16:10:20.161487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.529 [2024-07-15 16:10:20.161499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420
00:28:51.529 qpair failed and we were unable to recover it.
00:28:51.529 [2024-07-15 16:10:20.161686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.529 [2024-07-15 16:10:20.161699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420
00:28:51.529 qpair failed and we were unable to recover it.
00:28:51.529 [2024-07-15 16:10:20.161808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.529 [2024-07-15 16:10:20.161820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420
00:28:51.529 qpair failed and we were unable to recover it.
00:28:51.529 [2024-07-15 16:10:20.161935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.529 [2024-07-15 16:10:20.161946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420
00:28:51.529 qpair failed and we were unable to recover it.
00:28:51.529 [2024-07-15 16:10:20.162057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.529 [2024-07-15 16:10:20.162071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420
00:28:51.529 qpair failed and we were unable to recover it.
00:28:51.529 [2024-07-15 16:10:20.162220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.529 [2024-07-15 16:10:20.162234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420
00:28:51.529 qpair failed and we were unable to recover it.
00:28:51.529 [2024-07-15 16:10:20.162413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.529 [2024-07-15 16:10:20.162425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420
00:28:51.529 qpair failed and we were unable to recover it.
00:28:51.529 [2024-07-15 16:10:20.162539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.529 [2024-07-15 16:10:20.162551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420
00:28:51.529 qpair failed and we were unable to recover it.
00:28:51.529 [2024-07-15 16:10:20.162719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.529 [2024-07-15 16:10:20.162730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420
00:28:51.529 qpair failed and we were unable to recover it.
00:28:51.529 [2024-07-15 16:10:20.162905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.529 [2024-07-15 16:10:20.162916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420
00:28:51.529 qpair failed and we were unable to recover it.
00:28:51.529 [2024-07-15 16:10:20.163018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.529 [2024-07-15 16:10:20.163030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420
00:28:51.529 qpair failed and we were unable to recover it.
00:28:51.529 [2024-07-15 16:10:20.163232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.529 [2024-07-15 16:10:20.163244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420
00:28:51.529 qpair failed and we were unable to recover it.
00:28:51.529 [2024-07-15 16:10:20.163344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.529 [2024-07-15 16:10:20.163355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420
00:28:51.529 qpair failed and we were unable to recover it.
00:28:51.529 [2024-07-15 16:10:20.163440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.529 [2024-07-15 16:10:20.163451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420
00:28:51.529 qpair failed and we were unable to recover it.
00:28:51.529 [2024-07-15 16:10:20.163630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.529 [2024-07-15 16:10:20.163641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420
00:28:51.529 qpair failed and we were unable to recover it.
00:28:51.529 [2024-07-15 16:10:20.163824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.529 [2024-07-15 16:10:20.163835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420
00:28:51.529 qpair failed and we were unable to recover it.
00:28:51.529 [2024-07-15 16:10:20.163959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.529 [2024-07-15 16:10:20.163969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420
00:28:51.529 qpair failed and we were unable to recover it.
00:28:51.529 [2024-07-15 16:10:20.164085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.529 [2024-07-15 16:10:20.164096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420
00:28:51.529 qpair failed and we were unable to recover it.
00:28:51.529 [2024-07-15 16:10:20.164297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.529 [2024-07-15 16:10:20.164307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420
00:28:51.529 qpair failed and we were unable to recover it.
00:28:51.529 [2024-07-15 16:10:20.164405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.529 [2024-07-15 16:10:20.164414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420
00:28:51.529 qpair failed and we were unable to recover it.
00:28:51.529 [2024-07-15 16:10:20.164523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.529 [2024-07-15 16:10:20.164533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420
00:28:51.529 qpair failed and we were unable to recover it.
00:28:51.529 [2024-07-15 16:10:20.164716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.529 [2024-07-15 16:10:20.164727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420
00:28:51.529 qpair failed and we were unable to recover it.
00:28:51.529 [2024-07-15 16:10:20.164838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.529 [2024-07-15 16:10:20.164849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420
00:28:51.529 qpair failed and we were unable to recover it.
00:28:51.529 [2024-07-15 16:10:20.165006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.529 [2024-07-15 16:10:20.165017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420
00:28:51.529 qpair failed and we were unable to recover it.
00:28:51.529 [2024-07-15 16:10:20.165105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.529 [2024-07-15 16:10:20.165116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420
00:28:51.529 qpair failed and we were unable to recover it.
00:28:51.529 [2024-07-15 16:10:20.165280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.529 [2024-07-15 16:10:20.165292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420
00:28:51.529 qpair failed and we were unable to recover it.
00:28:51.529 [2024-07-15 16:10:20.165386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.529 [2024-07-15 16:10:20.165398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420
00:28:51.529 qpair failed and we were unable to recover it.
00:28:51.529 [2024-07-15 16:10:20.165498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.530 [2024-07-15 16:10:20.165509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420
00:28:51.530 qpair failed and we were unable to recover it.
00:28:51.530 [2024-07-15 16:10:20.165694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.530 [2024-07-15 16:10:20.165706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420
00:28:51.530 qpair failed and we were unable to recover it.
00:28:51.530 [2024-07-15 16:10:20.165806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.530 [2024-07-15 16:10:20.165816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420
00:28:51.530 qpair failed and we were unable to recover it.
00:28:51.530 [2024-07-15 16:10:20.165932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.530 [2024-07-15 16:10:20.165942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420
00:28:51.530 qpair failed and we were unable to recover it.
00:28:51.530 [2024-07-15 16:10:20.166070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.530 [2024-07-15 16:10:20.166091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420
00:28:51.530 qpair failed and we were unable to recover it.
00:28:51.530 [2024-07-15 16:10:20.166199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.530 [2024-07-15 16:10:20.166215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420
00:28:51.530 qpair failed and we were unable to recover it.
00:28:51.530 [2024-07-15 16:10:20.166340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.530 [2024-07-15 16:10:20.166356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420
00:28:51.530 qpair failed and we were unable to recover it.
00:28:51.530 [2024-07-15 16:10:20.166536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.530 [2024-07-15 16:10:20.166551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420
00:28:51.530 qpair failed and we were unable to recover it.
00:28:51.530 [2024-07-15 16:10:20.166734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.530 [2024-07-15 16:10:20.166749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420
00:28:51.530 qpair failed and we were unable to recover it.
00:28:51.530 [2024-07-15 16:10:20.166876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.530 [2024-07-15 16:10:20.166891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420
00:28:51.530 qpair failed and we were unable to recover it.
00:28:51.530 [2024-07-15 16:10:20.167002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.530 [2024-07-15 16:10:20.167017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420
00:28:51.530 qpair failed and we were unable to recover it.
00:28:51.530 [2024-07-15 16:10:20.167202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.530 [2024-07-15 16:10:20.167213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420
00:28:51.530 qpair failed and we were unable to recover it.
00:28:51.530 [2024-07-15 16:10:20.167377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.530 [2024-07-15 16:10:20.167388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420
00:28:51.530 qpair failed and we were unable to recover it.
00:28:51.530 [2024-07-15 16:10:20.167542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.530 [2024-07-15 16:10:20.167553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420
00:28:51.530 qpair failed and we were unable to recover it.
00:28:51.530 [2024-07-15 16:10:20.167718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.530 [2024-07-15 16:10:20.167729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420
00:28:51.530 qpair failed and we were unable to recover it.
00:28:51.530 [2024-07-15 16:10:20.167964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.530 [2024-07-15 16:10:20.167976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420
00:28:51.530 qpair failed and we were unable to recover it.
00:28:51.530 [2024-07-15 16:10:20.168141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.530 [2024-07-15 16:10:20.168152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420
00:28:51.530 qpair failed and we were unable to recover it.
00:28:51.530 [2024-07-15 16:10:20.168243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.530 [2024-07-15 16:10:20.168254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420
00:28:51.530 qpair failed and we were unable to recover it.
00:28:51.530 [2024-07-15 16:10:20.168350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.530 [2024-07-15 16:10:20.168362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420
00:28:51.530 qpair failed and we were unable to recover it.
00:28:51.530 [2024-07-15 16:10:20.168464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.530 [2024-07-15 16:10:20.168477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420
00:28:51.530 qpair failed and we were unable to recover it.
00:28:51.530 [2024-07-15 16:10:20.168653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.530 [2024-07-15 16:10:20.168665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420
00:28:51.530 qpair failed and we were unable to recover it.
00:28:51.530 [2024-07-15 16:10:20.168756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.530 [2024-07-15 16:10:20.168767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420
00:28:51.530 qpair failed and we were unable to recover it.
00:28:51.530 [2024-07-15 16:10:20.168872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.530 [2024-07-15 16:10:20.168882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420
00:28:51.530 qpair failed and we were unable to recover it.
00:28:51.530 [2024-07-15 16:10:20.169038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.530 [2024-07-15 16:10:20.169049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420
00:28:51.530 qpair failed and we were unable to recover it.
00:28:51.530 [2024-07-15 16:10:20.169234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.530 [2024-07-15 16:10:20.169246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420
00:28:51.530 qpair failed and we were unable to recover it.
00:28:51.530 [2024-07-15 16:10:20.169377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.530 [2024-07-15 16:10:20.169388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420
00:28:51.530 qpair failed and we were unable to recover it.
00:28:51.530 [2024-07-15 16:10:20.169541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.530 [2024-07-15 16:10:20.169552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420
00:28:51.530 qpair failed and we were unable to recover it.
00:28:51.530 [2024-07-15 16:10:20.169665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.530 [2024-07-15 16:10:20.169677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420
00:28:51.530 qpair failed and we were unable to recover it.
00:28:51.530 [2024-07-15 16:10:20.169845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.530 [2024-07-15 16:10:20.169857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420
00:28:51.530 qpair failed and we were unable to recover it.
00:28:51.530 [2024-07-15 16:10:20.170050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.530 [2024-07-15 16:10:20.170061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420
00:28:51.530 qpair failed and we were unable to recover it.
00:28:51.530 [2024-07-15 16:10:20.170233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.530 [2024-07-15 16:10:20.170245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420
00:28:51.530 qpair failed and we were unable to recover it.
00:28:51.530 [2024-07-15 16:10:20.170314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.530 [2024-07-15 16:10:20.170324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420
00:28:51.530 qpair failed and we were unable to recover it.
00:28:51.530 [2024-07-15 16:10:20.170478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.530 [2024-07-15 16:10:20.170490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420
00:28:51.530 qpair failed and we were unable to recover it.
00:28:51.530 [2024-07-15 16:10:20.170739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.530 [2024-07-15 16:10:20.170751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420
00:28:51.530 qpair failed and we were unable to recover it.
00:28:51.530 [2024-07-15 16:10:20.170983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.530 [2024-07-15 16:10:20.170994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420
00:28:51.530 qpair failed and we were unable to recover it.
00:28:51.530 [2024-07-15 16:10:20.171090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.530 [2024-07-15 16:10:20.171102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420
00:28:51.530 qpair failed and we were unable to recover it.
00:28:51.530 [2024-07-15 16:10:20.171294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.530 [2024-07-15 16:10:20.171305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420
00:28:51.530 qpair failed and we were unable to recover it.
00:28:51.531 [2024-07-15 16:10:20.171391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.531 [2024-07-15 16:10:20.171403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420
00:28:51.531 qpair failed and we were unable to recover it.
00:28:51.531 [2024-07-15 16:10:20.171619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.531 [2024-07-15 16:10:20.171630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420
00:28:51.531 qpair failed and we were unable to recover it.
00:28:51.531 [2024-07-15 16:10:20.171872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.531 [2024-07-15 16:10:20.171883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420
00:28:51.531 qpair failed and we were unable to recover it.
00:28:51.531 [2024-07-15 16:10:20.172055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.531 [2024-07-15 16:10:20.172066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420
00:28:51.531 qpair failed and we were unable to recover it.
00:28:51.531 [2024-07-15 16:10:20.172203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.531 [2024-07-15 16:10:20.172216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420
00:28:51.531 qpair failed and we were unable to recover it.
00:28:51.531 [2024-07-15 16:10:20.172490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.531 [2024-07-15 16:10:20.172501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420
00:28:51.531 qpair failed and we were unable to recover it.
00:28:51.531 [2024-07-15 16:10:20.172666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.531 [2024-07-15 16:10:20.172677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420
00:28:51.531 qpair failed and we were unable to recover it.
00:28:51.531 [2024-07-15 16:10:20.172837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.531 [2024-07-15 16:10:20.172850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420
00:28:51.531 qpair failed and we were unable to recover it.
00:28:51.531 [2024-07-15 16:10:20.173015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.531 [2024-07-15 16:10:20.173026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420
00:28:51.531 qpair failed and we were unable to recover it.
00:28:51.531 [2024-07-15 16:10:20.173214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.531 [2024-07-15 16:10:20.173236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420
00:28:51.531 qpair failed and we were unable to recover it.
00:28:51.531 [2024-07-15 16:10:20.173341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.531 [2024-07-15 16:10:20.173353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420
00:28:51.531 qpair failed and we were unable to recover it.
00:28:51.531 [2024-07-15 16:10:20.173456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.531 [2024-07-15 16:10:20.173467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420
00:28:51.531 qpair failed and we were unable to recover it.
00:28:51.531 [2024-07-15 16:10:20.173633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.531 [2024-07-15 16:10:20.173644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420
00:28:51.531 qpair failed and we were unable to recover it.
00:28:51.531 [2024-07-15 16:10:20.173754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.531 [2024-07-15 16:10:20.173765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420
00:28:51.531 qpair failed and we were unable to recover it.
00:28:51.531 [2024-07-15 16:10:20.173942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.531 [2024-07-15 16:10:20.173953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420
00:28:51.531 qpair failed and we were unable to recover it.
00:28:51.531 [2024-07-15 16:10:20.174068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.531 [2024-07-15 16:10:20.174079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420
00:28:51.531 qpair failed and we were unable to recover it.
00:28:51.531 [2024-07-15 16:10:20.174186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.531 [2024-07-15 16:10:20.174198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420
00:28:51.531 qpair failed and we were unable to recover it.
00:28:51.531 [2024-07-15 16:10:20.174318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.531 [2024-07-15 16:10:20.174330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420
00:28:51.531 qpair failed and we were unable to recover it.
00:28:51.531 [2024-07-15 16:10:20.174436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.531 [2024-07-15 16:10:20.174447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420
00:28:51.531 qpair failed and we were unable to recover it.
00:28:51.531 [2024-07-15 16:10:20.174737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.531 [2024-07-15 16:10:20.174748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420
00:28:51.531 qpair failed and we were unable to recover it.
00:28:51.531 [2024-07-15 16:10:20.174834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.531 [2024-07-15 16:10:20.174845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420
00:28:51.531 qpair failed and we were unable to recover it.
00:28:51.531 [2024-07-15 16:10:20.175071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.531 [2024-07-15 16:10:20.175083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420
00:28:51.531 qpair failed and we were unable to recover it.
00:28:51.531 [2024-07-15 16:10:20.175270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.531 [2024-07-15 16:10:20.175282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420
00:28:51.531 qpair failed and we were unable to recover it.
00:28:51.531 [2024-07-15 16:10:20.175469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.531 [2024-07-15 16:10:20.175480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420
00:28:51.531 qpair failed and we were unable to recover it.
00:28:51.531 [2024-07-15 16:10:20.175590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.531 [2024-07-15 16:10:20.175602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420
00:28:51.531 qpair failed and we were unable to recover it.
00:28:51.531 [2024-07-15 16:10:20.175691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.531 [2024-07-15 16:10:20.175703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420
00:28:51.531 qpair failed and we were unable to recover it.
00:28:51.531 [2024-07-15 16:10:20.175854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.531 [2024-07-15 16:10:20.175866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420
00:28:51.531 qpair failed and we were unable to recover it.
00:28:51.531 [2024-07-15 16:10:20.176033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.531 [2024-07-15 16:10:20.176045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420
00:28:51.531 qpair failed and we were unable to recover it.
00:28:51.531 [2024-07-15 16:10:20.176223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.531 [2024-07-15 16:10:20.176238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420
00:28:51.531 qpair failed and we were unable to recover it.
00:28:51.531 [2024-07-15 16:10:20.176405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.531 [2024-07-15 16:10:20.176417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420
00:28:51.531 qpair failed and we were unable to recover it.
00:28:51.531 [2024-07-15 16:10:20.176584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.531 [2024-07-15 16:10:20.176596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420
00:28:51.531 qpair failed and we were unable to recover it.
00:28:51.531 [2024-07-15 16:10:20.176674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.531 [2024-07-15 16:10:20.176686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420
00:28:51.531 qpair failed and we were unable to recover it.
00:28:51.531 [2024-07-15 16:10:20.176845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.531 [2024-07-15 16:10:20.176857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420
00:28:51.531 qpair failed and we were unable to recover it.
00:28:51.531 [2024-07-15 16:10:20.176967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.531 [2024-07-15 16:10:20.176978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420
00:28:51.531 qpair failed and we were unable to recover it.
00:28:51.531 [2024-07-15 16:10:20.177081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.531 [2024-07-15 16:10:20.177092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420
00:28:51.531 qpair failed and we were unable to recover it.
00:28:51.531 [2024-07-15 16:10:20.177294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.531 [2024-07-15 16:10:20.177306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420
00:28:51.531 qpair failed and we were unable to recover it.
00:28:51.531 [2024-07-15 16:10:20.177427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.531 [2024-07-15 16:10:20.177439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420
00:28:51.531 qpair failed and we were unable to recover it.
00:28:51.531 [2024-07-15 16:10:20.177632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.531 [2024-07-15 16:10:20.177643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420
00:28:51.531 qpair failed and we were unable to recover it.
00:28:51.531 [2024-07-15 16:10:20.177739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.531 [2024-07-15 16:10:20.177751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420
00:28:51.531 qpair failed and we were unable to recover it.
00:28:51.531 [2024-07-15 16:10:20.177927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.531 [2024-07-15 16:10:20.177939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420
00:28:51.532 qpair failed and we were unable to recover it.
00:28:51.532 [2024-07-15 16:10:20.178028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.532 [2024-07-15 16:10:20.178040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420
00:28:51.532 qpair failed and we were unable to recover it.
00:28:51.532 [2024-07-15 16:10:20.178149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.532 [2024-07-15 16:10:20.178160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420
00:28:51.532 qpair failed and we were unable to recover it.
00:28:51.532 [2024-07-15 16:10:20.178258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.532 [2024-07-15 16:10:20.178271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420
00:28:51.532 qpair failed and we were unable to recover it.
00:28:51.532 [2024-07-15 16:10:20.178428] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization...
00:28:51.532 [2024-07-15 16:10:20.178468] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:28:51.532 [2024-07-15 16:10:20.178496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.532 [2024-07-15 16:10:20.178506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420
00:28:51.532 qpair failed and we were unable to recover it.
00:28:51.532 [2024-07-15 16:10:20.178628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.532 [2024-07-15 16:10:20.178637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420
00:28:51.532 qpair failed and we were unable to recover it.
00:28:51.532 [2024-07-15 16:10:20.178702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.532 [2024-07-15 16:10:20.178711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420
00:28:51.532 qpair failed and we were unable to recover it.
00:28:51.532 [2024-07-15 16:10:20.178869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.532 [2024-07-15 16:10:20.178879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420
00:28:51.532 qpair failed and we were unable to recover it.
00:28:51.532 [2024-07-15 16:10:20.178991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.532 [2024-07-15 16:10:20.179002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420
00:28:51.532 qpair failed and we were unable to recover it.
00:28:51.532 [2024-07-15 16:10:20.179097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.532 [2024-07-15 16:10:20.179108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420
00:28:51.532 qpair failed and we were unable to recover it.
00:28:51.532 [2024-07-15 16:10:20.179290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.532 [2024-07-15 16:10:20.179300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420
00:28:51.532 qpair failed and we were unable to recover it.
00:28:51.532 [2024-07-15 16:10:20.179414 - 16:10:20.191736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 / nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it. [message pair repeated 76 times in this interval]
00:28:51.534 [2024-07-15 16:10:20.191856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 / nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it.
00:28:51.534 [2024-07-15 16:10:20.191992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 / nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it.
00:28:51.534 [2024-07-15 16:10:20.192275 - 16:10:20.192437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 / nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it. [message pair repeated 2 times in this interval]
00:28:51.534 [2024-07-15 16:10:20.192555 - 16:10:20.193105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 / nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it. [message pair repeated 5 times in this interval]
00:28:51.534 [2024-07-15 16:10:20.193287 - 16:10:20.197942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 / nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it. [message pair repeated 31 times in this interval]
00:28:51.535 [2024-07-15 16:10:20.198140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 / nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it.
00:28:51.535 [2024-07-15 16:10:20.198380 - 16:10:20.205887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 / nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it. [message pair repeated 43 times in this interval]
00:28:51.536 [2024-07-15 16:10:20.206070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 / nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it.
00:28:51.536 EAL: No free 2048 kB hugepages reported on node 1
00:28:51.536 [2024-07-15 16:10:20.206269 - 16:10:20.207878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 / nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it. [message pair repeated 9 times in this interval]
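The EAL warning above means DPDK saw no free 2 MB hugepages on NUMA node 1 at init time; the count it reports comes from a standard Linux sysfs counter. Below is a minimal sketch of reading that same per-node counter; the sysfs path is standard, but the helper name is made up for illustration.

#include <stdio.h>

/* Illustrative helper: read the free 2048 kB hugepage count for one NUMA
 * node from sysfs, the counter the EAL warning above is derived from. */
static long free_2mb_hugepages(int node)
{
    char path[128];
    long count = -1;
    FILE *f;

    snprintf(path, sizeof(path),
             "/sys/devices/system/node/node%d/hugepages/hugepages-2048kB/free_hugepages",
             node);
    f = fopen(path, "r");
    if (f != NULL) {
        if (fscanf(f, "%ld", &count) != 1)
            count = -1;  /* unreadable or malformed counter */
        fclose(f);
    }
    return count;
}

int main(void)
{
    printf("node1 free 2048kB hugepages: %ld\n", free_2mb_hugepages(1));
    return 0;
}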
00:28:51.536 [2024-07-15 16:10:20.208075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.536 [2024-07-15 16:10:20.208090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:51.536 qpair failed and we were unable to recover it. 00:28:51.536 [2024-07-15 16:10:20.208215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.536 [2024-07-15 16:10:20.208235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:51.536 qpair failed and we were unable to recover it. 00:28:51.536 [2024-07-15 16:10:20.208417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.536 [2024-07-15 16:10:20.208433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:51.536 qpair failed and we were unable to recover it. 00:28:51.536 [2024-07-15 16:10:20.208539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.536 [2024-07-15 16:10:20.208555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:51.536 qpair failed and we were unable to recover it. 00:28:51.536 [2024-07-15 16:10:20.208733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.536 [2024-07-15 16:10:20.208748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:51.536 qpair failed and we were unable to recover it. 00:28:51.536 [2024-07-15 16:10:20.208860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.536 [2024-07-15 16:10:20.208875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:51.536 qpair failed and we were unable to recover it. 00:28:51.536 [2024-07-15 16:10:20.209067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.536 [2024-07-15 16:10:20.209082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:51.536 qpair failed and we were unable to recover it. 00:28:51.536 [2024-07-15 16:10:20.209187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.536 [2024-07-15 16:10:20.209202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:51.536 qpair failed and we were unable to recover it. 00:28:51.536 [2024-07-15 16:10:20.209321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.536 [2024-07-15 16:10:20.209336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:51.536 qpair failed and we were unable to recover it. 00:28:51.536 [2024-07-15 16:10:20.209509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.536 [2024-07-15 16:10:20.209524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:51.536 qpair failed and we were unable to recover it. 
00:28:51.536 [2024-07-15 16:10:20.209645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.536 [2024-07-15 16:10:20.209661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:51.536 qpair failed and we were unable to recover it. 00:28:51.536 [2024-07-15 16:10:20.209757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.536 [2024-07-15 16:10:20.209773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:51.536 qpair failed and we were unable to recover it. 00:28:51.537 [2024-07-15 16:10:20.210035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.537 [2024-07-15 16:10:20.210050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:51.537 qpair failed and we were unable to recover it. 00:28:51.537 [2024-07-15 16:10:20.210234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.537 [2024-07-15 16:10:20.210251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:51.537 qpair failed and we were unable to recover it. 00:28:51.537 [2024-07-15 16:10:20.210356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.537 [2024-07-15 16:10:20.210371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:51.537 qpair failed and we were unable to recover it. 00:28:51.537 [2024-07-15 16:10:20.210461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.537 [2024-07-15 16:10:20.210475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.537 qpair failed and we were unable to recover it. 00:28:51.537 [2024-07-15 16:10:20.210645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.537 [2024-07-15 16:10:20.210657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.537 qpair failed and we were unable to recover it. 00:28:51.537 [2024-07-15 16:10:20.210826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.537 [2024-07-15 16:10:20.210837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.537 qpair failed and we were unable to recover it. 00:28:51.537 [2024-07-15 16:10:20.211021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.537 [2024-07-15 16:10:20.211032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.537 qpair failed and we were unable to recover it. 00:28:51.537 [2024-07-15 16:10:20.211293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.537 [2024-07-15 16:10:20.211304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.537 qpair failed and we were unable to recover it. 
00:28:51.537 [2024-07-15 16:10:20.211464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.537 [2024-07-15 16:10:20.211476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.537 qpair failed and we were unable to recover it. 00:28:51.537 [2024-07-15 16:10:20.211654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.537 [2024-07-15 16:10:20.211665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.537 qpair failed and we were unable to recover it. 00:28:51.537 [2024-07-15 16:10:20.211812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.537 [2024-07-15 16:10:20.211823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.537 qpair failed and we were unable to recover it. 00:28:51.537 [2024-07-15 16:10:20.212011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.537 [2024-07-15 16:10:20.212022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.537 qpair failed and we were unable to recover it. 00:28:51.537 [2024-07-15 16:10:20.212122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.537 [2024-07-15 16:10:20.212133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.537 qpair failed and we were unable to recover it. 00:28:51.537 [2024-07-15 16:10:20.212307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.537 [2024-07-15 16:10:20.212321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.537 qpair failed and we were unable to recover it. 00:28:51.537 [2024-07-15 16:10:20.212505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.537 [2024-07-15 16:10:20.212522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420 00:28:51.537 qpair failed and we were unable to recover it. 00:28:51.537 [2024-07-15 16:10:20.212642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.537 [2024-07-15 16:10:20.212657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420 00:28:51.537 qpair failed and we were unable to recover it. 00:28:51.537 [2024-07-15 16:10:20.212914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.537 [2024-07-15 16:10:20.212929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420 00:28:51.537 qpair failed and we were unable to recover it. 00:28:51.537 [2024-07-15 16:10:20.213091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.537 [2024-07-15 16:10:20.213106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420 00:28:51.537 qpair failed and we were unable to recover it. 
00:28:51.537 [2024-07-15 16:10:20.213297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.537 [2024-07-15 16:10:20.213312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420 00:28:51.537 qpair failed and we were unable to recover it. 00:28:51.537 [2024-07-15 16:10:20.213571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.537 [2024-07-15 16:10:20.213586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420 00:28:51.537 qpair failed and we were unable to recover it. 00:28:51.537 [2024-07-15 16:10:20.213759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.537 [2024-07-15 16:10:20.213774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420 00:28:51.537 qpair failed and we were unable to recover it. 00:28:51.537 [2024-07-15 16:10:20.213894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.537 [2024-07-15 16:10:20.213909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420 00:28:51.537 qpair failed and we were unable to recover it. 00:28:51.537 [2024-07-15 16:10:20.214019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.537 [2024-07-15 16:10:20.214034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420 00:28:51.537 qpair failed and we were unable to recover it. 00:28:51.537 [2024-07-15 16:10:20.214135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.537 [2024-07-15 16:10:20.214150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420 00:28:51.537 qpair failed and we were unable to recover it. 00:28:51.537 [2024-07-15 16:10:20.214386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.537 [2024-07-15 16:10:20.214401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420 00:28:51.537 qpair failed and we were unable to recover it. 00:28:51.537 [2024-07-15 16:10:20.214520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.537 [2024-07-15 16:10:20.214534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420 00:28:51.537 qpair failed and we were unable to recover it. 00:28:51.537 [2024-07-15 16:10:20.214707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.537 [2024-07-15 16:10:20.214722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420 00:28:51.537 qpair failed and we were unable to recover it. 00:28:51.537 [2024-07-15 16:10:20.214988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.537 [2024-07-15 16:10:20.215006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420 00:28:51.537 qpair failed and we were unable to recover it. 
00:28:51.537 [2024-07-15 16:10:20.215172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.537 [2024-07-15 16:10:20.215187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420 00:28:51.537 qpair failed and we were unable to recover it. 00:28:51.537 [2024-07-15 16:10:20.215408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.537 [2024-07-15 16:10:20.215423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420 00:28:51.537 qpair failed and we were unable to recover it. 00:28:51.538 [2024-07-15 16:10:20.215681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.538 [2024-07-15 16:10:20.215696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420 00:28:51.538 qpair failed and we were unable to recover it. 00:28:51.538 [2024-07-15 16:10:20.215818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.538 [2024-07-15 16:10:20.215832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420 00:28:51.538 qpair failed and we were unable to recover it. 00:28:51.538 [2024-07-15 16:10:20.215966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.538 [2024-07-15 16:10:20.215981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420 00:28:51.538 qpair failed and we were unable to recover it. 00:28:51.538 [2024-07-15 16:10:20.216086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.538 [2024-07-15 16:10:20.216101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420 00:28:51.538 qpair failed and we were unable to recover it. 00:28:51.538 [2024-07-15 16:10:20.216191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.538 [2024-07-15 16:10:20.216206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420 00:28:51.538 qpair failed and we were unable to recover it. 00:28:51.538 [2024-07-15 16:10:20.216323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.538 [2024-07-15 16:10:20.216337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.538 qpair failed and we were unable to recover it. 00:28:51.538 [2024-07-15 16:10:20.216525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.538 [2024-07-15 16:10:20.216536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.538 qpair failed and we were unable to recover it. 00:28:51.538 [2024-07-15 16:10:20.216642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.538 [2024-07-15 16:10:20.216653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.538 qpair failed and we were unable to recover it. 
00:28:51.538 [2024-07-15 16:10:20.216882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.538 [2024-07-15 16:10:20.216894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.538 qpair failed and we were unable to recover it. 00:28:51.538 [2024-07-15 16:10:20.217114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.538 [2024-07-15 16:10:20.217125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.538 qpair failed and we were unable to recover it. 00:28:51.538 [2024-07-15 16:10:20.217236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.538 [2024-07-15 16:10:20.217248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.538 qpair failed and we were unable to recover it. 00:28:51.538 [2024-07-15 16:10:20.217469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.538 [2024-07-15 16:10:20.217480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.538 qpair failed and we were unable to recover it. 00:28:51.538 [2024-07-15 16:10:20.217572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.538 [2024-07-15 16:10:20.217583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.538 qpair failed and we were unable to recover it. 00:28:51.538 [2024-07-15 16:10:20.217742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.538 [2024-07-15 16:10:20.217754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.538 qpair failed and we were unable to recover it. 00:28:51.538 [2024-07-15 16:10:20.217935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.538 [2024-07-15 16:10:20.217946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.538 qpair failed and we were unable to recover it. 00:28:51.538 [2024-07-15 16:10:20.218198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.538 [2024-07-15 16:10:20.218209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.538 qpair failed and we were unable to recover it. 00:28:51.538 [2024-07-15 16:10:20.218302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.538 [2024-07-15 16:10:20.218312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.538 qpair failed and we were unable to recover it. 00:28:51.538 [2024-07-15 16:10:20.218391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.538 [2024-07-15 16:10:20.218402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.538 qpair failed and we were unable to recover it. 
00:28:51.538 [2024-07-15 16:10:20.218637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.538 [2024-07-15 16:10:20.218648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.538 qpair failed and we were unable to recover it. 00:28:51.538 [2024-07-15 16:10:20.218811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.538 [2024-07-15 16:10:20.218823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.538 qpair failed and we were unable to recover it. 00:28:51.538 [2024-07-15 16:10:20.218925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.538 [2024-07-15 16:10:20.218937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.538 qpair failed and we were unable to recover it. 00:28:51.538 [2024-07-15 16:10:20.219034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.538 [2024-07-15 16:10:20.219045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.538 qpair failed and we were unable to recover it. 00:28:51.538 [2024-07-15 16:10:20.219129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.538 [2024-07-15 16:10:20.219139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.538 qpair failed and we were unable to recover it. 00:28:51.538 [2024-07-15 16:10:20.219381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.538 [2024-07-15 16:10:20.219393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.538 qpair failed and we were unable to recover it. 00:28:51.538 [2024-07-15 16:10:20.219506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.538 [2024-07-15 16:10:20.219518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.538 qpair failed and we were unable to recover it. 00:28:51.538 [2024-07-15 16:10:20.219679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.538 [2024-07-15 16:10:20.219690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.538 qpair failed and we were unable to recover it. 00:28:51.538 [2024-07-15 16:10:20.219920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.538 [2024-07-15 16:10:20.219932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.538 qpair failed and we were unable to recover it. 00:28:51.538 [2024-07-15 16:10:20.220111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.538 [2024-07-15 16:10:20.220124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.538 qpair failed and we were unable to recover it. 
00:28:51.538 [2024-07-15 16:10:20.220391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.538 [2024-07-15 16:10:20.220403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.538 qpair failed and we were unable to recover it. 00:28:51.538 [2024-07-15 16:10:20.220509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.538 [2024-07-15 16:10:20.220521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.538 qpair failed and we were unable to recover it. 00:28:51.538 [2024-07-15 16:10:20.220687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.538 [2024-07-15 16:10:20.220699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.538 qpair failed and we were unable to recover it. 00:28:51.538 [2024-07-15 16:10:20.220923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.538 [2024-07-15 16:10:20.220935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.538 qpair failed and we were unable to recover it. 00:28:51.538 [2024-07-15 16:10:20.221157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.538 [2024-07-15 16:10:20.221169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.538 qpair failed and we were unable to recover it. 00:28:51.538 [2024-07-15 16:10:20.221284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.538 [2024-07-15 16:10:20.221297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.538 qpair failed and we were unable to recover it. 00:28:51.538 [2024-07-15 16:10:20.221458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.538 [2024-07-15 16:10:20.221469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.538 qpair failed and we were unable to recover it. 00:28:51.538 [2024-07-15 16:10:20.221534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.538 [2024-07-15 16:10:20.221545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.538 qpair failed and we were unable to recover it. 00:28:51.538 [2024-07-15 16:10:20.221646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.538 [2024-07-15 16:10:20.221656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.538 qpair failed and we were unable to recover it. 00:28:51.538 [2024-07-15 16:10:20.221762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.538 [2024-07-15 16:10:20.221776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.538 qpair failed and we were unable to recover it. 
00:28:51.538 [2024-07-15 16:10:20.222006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.538 [2024-07-15 16:10:20.222018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.538 qpair failed and we were unable to recover it. 00:28:51.538 [2024-07-15 16:10:20.222175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.538 [2024-07-15 16:10:20.222188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.538 qpair failed and we were unable to recover it. 00:28:51.538 [2024-07-15 16:10:20.222296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.539 [2024-07-15 16:10:20.222308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.539 qpair failed and we were unable to recover it. 00:28:51.539 [2024-07-15 16:10:20.222400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.539 [2024-07-15 16:10:20.222412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.539 qpair failed and we were unable to recover it. 00:28:51.539 [2024-07-15 16:10:20.222568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.539 [2024-07-15 16:10:20.222579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.539 qpair failed and we were unable to recover it. 00:28:51.539 [2024-07-15 16:10:20.222721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.539 [2024-07-15 16:10:20.222733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.539 qpair failed and we were unable to recover it. 00:28:51.539 [2024-07-15 16:10:20.222840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.539 [2024-07-15 16:10:20.222852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.539 qpair failed and we were unable to recover it. 00:28:51.539 [2024-07-15 16:10:20.222988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.539 [2024-07-15 16:10:20.222999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.539 qpair failed and we were unable to recover it. 00:28:51.539 [2024-07-15 16:10:20.223147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.539 [2024-07-15 16:10:20.223158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.539 qpair failed and we were unable to recover it. 00:28:51.539 [2024-07-15 16:10:20.223266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.539 [2024-07-15 16:10:20.223277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.539 qpair failed and we were unable to recover it. 
00:28:51.539 [2024-07-15 16:10:20.223448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.539 [2024-07-15 16:10:20.223460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.539 qpair failed and we were unable to recover it. 00:28:51.539 [2024-07-15 16:10:20.223554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.539 [2024-07-15 16:10:20.223565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.539 qpair failed and we were unable to recover it. 00:28:51.539 [2024-07-15 16:10:20.223667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.539 [2024-07-15 16:10:20.223678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.539 qpair failed and we were unable to recover it. 00:28:51.539 [2024-07-15 16:10:20.223836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.539 [2024-07-15 16:10:20.223846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.539 qpair failed and we were unable to recover it. 00:28:51.539 [2024-07-15 16:10:20.224001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.539 [2024-07-15 16:10:20.224012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.539 qpair failed and we were unable to recover it. 00:28:51.539 [2024-07-15 16:10:20.224104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.539 [2024-07-15 16:10:20.224115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.539 qpair failed and we were unable to recover it. 00:28:51.539 [2024-07-15 16:10:20.224367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.539 [2024-07-15 16:10:20.224379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.539 qpair failed and we were unable to recover it. 00:28:51.539 [2024-07-15 16:10:20.224555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.539 [2024-07-15 16:10:20.224567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.539 qpair failed and we were unable to recover it. 00:28:51.539 [2024-07-15 16:10:20.224674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.539 [2024-07-15 16:10:20.224686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.539 qpair failed and we were unable to recover it. 00:28:51.539 [2024-07-15 16:10:20.224783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.539 [2024-07-15 16:10:20.224794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.539 qpair failed and we were unable to recover it. 
00:28:51.539 [2024-07-15 16:10:20.224897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.539 [2024-07-15 16:10:20.224909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.539 qpair failed and we were unable to recover it. 00:28:51.539 [2024-07-15 16:10:20.225072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.539 [2024-07-15 16:10:20.225083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.539 qpair failed and we were unable to recover it. 00:28:51.539 [2024-07-15 16:10:20.225206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.539 [2024-07-15 16:10:20.225218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.539 qpair failed and we were unable to recover it. 00:28:51.539 [2024-07-15 16:10:20.225391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.539 [2024-07-15 16:10:20.225402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.539 qpair failed and we were unable to recover it. 00:28:51.539 [2024-07-15 16:10:20.225586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.539 [2024-07-15 16:10:20.225597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.539 qpair failed and we were unable to recover it. 00:28:51.539 [2024-07-15 16:10:20.225692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.539 [2024-07-15 16:10:20.225702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.539 qpair failed and we were unable to recover it. 00:28:51.539 [2024-07-15 16:10:20.225808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.539 [2024-07-15 16:10:20.225819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.539 qpair failed and we were unable to recover it. 00:28:51.539 [2024-07-15 16:10:20.225952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.539 [2024-07-15 16:10:20.225963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.539 qpair failed and we were unable to recover it. 00:28:51.539 [2024-07-15 16:10:20.226064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.539 [2024-07-15 16:10:20.226076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.539 qpair failed and we were unable to recover it. 00:28:51.539 [2024-07-15 16:10:20.226192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.539 [2024-07-15 16:10:20.226202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.539 qpair failed and we were unable to recover it. 
00:28:51.539 [2024-07-15 16:10:20.226366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.539 [2024-07-15 16:10:20.226378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.539 qpair failed and we were unable to recover it. 00:28:51.539 [2024-07-15 16:10:20.226456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.539 [2024-07-15 16:10:20.226466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.539 qpair failed and we were unable to recover it. 00:28:51.539 [2024-07-15 16:10:20.226568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.539 [2024-07-15 16:10:20.226579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.539 qpair failed and we were unable to recover it. 00:28:51.539 [2024-07-15 16:10:20.226739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.539 [2024-07-15 16:10:20.226751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.539 qpair failed and we were unable to recover it. 00:28:51.539 [2024-07-15 16:10:20.226920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.539 [2024-07-15 16:10:20.226931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.539 qpair failed and we were unable to recover it. 00:28:51.539 [2024-07-15 16:10:20.227028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.539 [2024-07-15 16:10:20.227038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.539 qpair failed and we were unable to recover it. 00:28:51.539 [2024-07-15 16:10:20.227200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.539 [2024-07-15 16:10:20.227212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.539 qpair failed and we were unable to recover it. 00:28:51.539 [2024-07-15 16:10:20.227326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.539 [2024-07-15 16:10:20.227338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.539 qpair failed and we were unable to recover it. 00:28:51.539 [2024-07-15 16:10:20.227509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.539 [2024-07-15 16:10:20.227520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.539 qpair failed and we were unable to recover it. 00:28:51.539 [2024-07-15 16:10:20.227673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.539 [2024-07-15 16:10:20.227687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.539 qpair failed and we were unable to recover it. 
00:28:51.539 [2024-07-15 16:10:20.227908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.539 [2024-07-15 16:10:20.227920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.539 qpair failed and we were unable to recover it. 00:28:51.539 [2024-07-15 16:10:20.228096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.539 [2024-07-15 16:10:20.228108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.539 qpair failed and we were unable to recover it. 00:28:51.539 [2024-07-15 16:10:20.228213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.540 [2024-07-15 16:10:20.228227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.540 qpair failed and we were unable to recover it. 00:28:51.540 [2024-07-15 16:10:20.228388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.540 [2024-07-15 16:10:20.228400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.540 qpair failed and we were unable to recover it. 00:28:51.540 [2024-07-15 16:10:20.228571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.540 [2024-07-15 16:10:20.228582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.540 qpair failed and we were unable to recover it. 00:28:51.540 [2024-07-15 16:10:20.228778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.540 [2024-07-15 16:10:20.228789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.540 qpair failed and we were unable to recover it. 00:28:51.540 [2024-07-15 16:10:20.228958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.540 [2024-07-15 16:10:20.228969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.540 qpair failed and we were unable to recover it. 00:28:51.540 [2024-07-15 16:10:20.229063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.540 [2024-07-15 16:10:20.229074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.540 qpair failed and we were unable to recover it. 00:28:51.540 [2024-07-15 16:10:20.229182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.540 [2024-07-15 16:10:20.229194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.540 qpair failed and we were unable to recover it. 00:28:51.540 [2024-07-15 16:10:20.229355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.540 [2024-07-15 16:10:20.229366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.540 qpair failed and we were unable to recover it. 
00:28:51.540 [2024-07-15 16:10:20.229570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.540 [2024-07-15 16:10:20.229581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.540 qpair failed and we were unable to recover it. 00:28:51.540 [2024-07-15 16:10:20.229667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.540 [2024-07-15 16:10:20.229678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.540 qpair failed and we were unable to recover it. 00:28:51.540 [2024-07-15 16:10:20.229792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.540 [2024-07-15 16:10:20.229804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.540 qpair failed and we were unable to recover it. 00:28:51.540 [2024-07-15 16:10:20.229922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.540 [2024-07-15 16:10:20.229933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.540 qpair failed and we were unable to recover it. 00:28:51.540 [2024-07-15 16:10:20.230108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.540 [2024-07-15 16:10:20.230119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.540 qpair failed and we were unable to recover it. 00:28:51.540 [2024-07-15 16:10:20.230209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.540 [2024-07-15 16:10:20.230219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.540 qpair failed and we were unable to recover it. 00:28:51.540 [2024-07-15 16:10:20.230444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.540 [2024-07-15 16:10:20.230456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.540 qpair failed and we were unable to recover it. 00:28:51.540 [2024-07-15 16:10:20.230625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.540 [2024-07-15 16:10:20.230636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.540 qpair failed and we were unable to recover it. 00:28:51.540 [2024-07-15 16:10:20.230803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.540 [2024-07-15 16:10:20.230814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.540 qpair failed and we were unable to recover it. 00:28:51.540 [2024-07-15 16:10:20.230980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.540 [2024-07-15 16:10:20.230992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.540 qpair failed and we were unable to recover it. 
00:28:51.540 [2024-07-15 16:10:20.231155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.540 [2024-07-15 16:10:20.231166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.540 qpair failed and we were unable to recover it. 00:28:51.540 [2024-07-15 16:10:20.231275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.540 [2024-07-15 16:10:20.231287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.540 qpair failed and we were unable to recover it. 00:28:51.540 [2024-07-15 16:10:20.231439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.540 [2024-07-15 16:10:20.231450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.540 qpair failed and we were unable to recover it. 00:28:51.540 [2024-07-15 16:10:20.231611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.540 [2024-07-15 16:10:20.231622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.540 qpair failed and we were unable to recover it. 00:28:51.540 [2024-07-15 16:10:20.231848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.540 [2024-07-15 16:10:20.231859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.540 qpair failed and we were unable to recover it. 00:28:51.540 [2024-07-15 16:10:20.232047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.540 [2024-07-15 16:10:20.232058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.540 qpair failed and we were unable to recover it. 00:28:51.540 [2024-07-15 16:10:20.232259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.540 [2024-07-15 16:10:20.232277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420 00:28:51.540 qpair failed and we were unable to recover it. 00:28:51.540 [2024-07-15 16:10:20.232464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.540 [2024-07-15 16:10:20.232479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420 00:28:51.540 qpair failed and we were unable to recover it. 00:28:51.540 [2024-07-15 16:10:20.232574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.540 [2024-07-15 16:10:20.232589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420 00:28:51.540 qpair failed and we were unable to recover it. 00:28:51.540 [2024-07-15 16:10:20.232824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.540 [2024-07-15 16:10:20.232840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420 00:28:51.540 qpair failed and we were unable to recover it. 
00:28:51.540 [2024-07-15 16:10:20.233020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.540 [2024-07-15 16:10:20.233035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420
00:28:51.540 qpair failed and we were unable to recover it.
00:28:51.540 [... the same three-line failure pattern repeats continuously from 16:10:20.233 to 16:10:20.252 with only the timestamps advancing: tqpair=0x7ffa5c000b90 for the first two attempts, a short burst on tqpair=0x7ffa4c000b90 around 16:10:20.239, and tqpair=0x7ffa54000b90 for every other attempt ...]
00:28:51.543 [2024-07-15 16:10:20.252058] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4
00:28:51.543 [... connect() failed, errno = 111 / sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it. -- pattern continues unchanged from 16:10:20.252 to 16:10:20.266 ...]
00:28:51.546 [2024-07-15 16:10:20.266366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.546 [2024-07-15 16:10:20.266377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420
00:28:51.546 qpair failed and we were unable to recover it.
00:28:51.546 [2024-07-15 16:10:20.266546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.546 [2024-07-15 16:10:20.266559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.546 qpair failed and we were unable to recover it. 00:28:51.546 [2024-07-15 16:10:20.266657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.546 [2024-07-15 16:10:20.266669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.546 qpair failed and we were unable to recover it. 00:28:51.546 [2024-07-15 16:10:20.266762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.546 [2024-07-15 16:10:20.266774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.546 qpair failed and we were unable to recover it. 00:28:51.546 [2024-07-15 16:10:20.266871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.546 [2024-07-15 16:10:20.266882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.546 qpair failed and we were unable to recover it. 00:28:51.546 [2024-07-15 16:10:20.267045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.546 [2024-07-15 16:10:20.267056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.546 qpair failed and we were unable to recover it. 00:28:51.546 [2024-07-15 16:10:20.267158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.546 [2024-07-15 16:10:20.267169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.546 qpair failed and we were unable to recover it. 00:28:51.546 [2024-07-15 16:10:20.267416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.546 [2024-07-15 16:10:20.267428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.546 qpair failed and we were unable to recover it. 00:28:51.546 [2024-07-15 16:10:20.267529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.546 [2024-07-15 16:10:20.267540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.546 qpair failed and we were unable to recover it. 00:28:51.546 [2024-07-15 16:10:20.267832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.546 [2024-07-15 16:10:20.267843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.546 qpair failed and we were unable to recover it. 00:28:51.546 [2024-07-15 16:10:20.268010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.546 [2024-07-15 16:10:20.268021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.546 qpair failed and we were unable to recover it. 
00:28:51.546 [2024-07-15 16:10:20.268183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.546 [2024-07-15 16:10:20.268195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.546 qpair failed and we were unable to recover it. 00:28:51.546 [2024-07-15 16:10:20.268331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.546 [2024-07-15 16:10:20.268342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.546 qpair failed and we were unable to recover it. 00:28:51.546 [2024-07-15 16:10:20.268591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.546 [2024-07-15 16:10:20.268602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.546 qpair failed and we were unable to recover it. 00:28:51.546 [2024-07-15 16:10:20.268765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.546 [2024-07-15 16:10:20.268776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.546 qpair failed and we were unable to recover it. 00:28:51.546 [2024-07-15 16:10:20.268965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.546 [2024-07-15 16:10:20.268977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.546 qpair failed and we were unable to recover it. 00:28:51.546 [2024-07-15 16:10:20.269124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.546 [2024-07-15 16:10:20.269135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.546 qpair failed and we were unable to recover it. 00:28:51.546 [2024-07-15 16:10:20.269361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.546 [2024-07-15 16:10:20.269372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.546 qpair failed and we were unable to recover it. 00:28:51.546 [2024-07-15 16:10:20.269526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.546 [2024-07-15 16:10:20.269538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.546 qpair failed and we were unable to recover it. 00:28:51.546 [2024-07-15 16:10:20.269710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.546 [2024-07-15 16:10:20.269721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.546 qpair failed and we were unable to recover it. 00:28:51.546 [2024-07-15 16:10:20.269819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.546 [2024-07-15 16:10:20.269830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.546 qpair failed and we were unable to recover it. 
00:28:51.546 [2024-07-15 16:10:20.269994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.546 [2024-07-15 16:10:20.270005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.546 qpair failed and we were unable to recover it. 00:28:51.546 [2024-07-15 16:10:20.270253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.546 [2024-07-15 16:10:20.270265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.546 qpair failed and we were unable to recover it. 00:28:51.546 [2024-07-15 16:10:20.270431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.546 [2024-07-15 16:10:20.270443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.546 qpair failed and we were unable to recover it. 00:28:51.546 [2024-07-15 16:10:20.270631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.546 [2024-07-15 16:10:20.270643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.546 qpair failed and we were unable to recover it. 00:28:51.546 [2024-07-15 16:10:20.270790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.546 [2024-07-15 16:10:20.270802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.546 qpair failed and we were unable to recover it. 00:28:51.546 [2024-07-15 16:10:20.270927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.546 [2024-07-15 16:10:20.270938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.546 qpair failed and we were unable to recover it. 00:28:51.546 [2024-07-15 16:10:20.271207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.546 [2024-07-15 16:10:20.271218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.546 qpair failed and we were unable to recover it. 00:28:51.546 [2024-07-15 16:10:20.271338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.546 [2024-07-15 16:10:20.271350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.546 qpair failed and we were unable to recover it. 00:28:51.546 [2024-07-15 16:10:20.271449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.546 [2024-07-15 16:10:20.271460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.546 qpair failed and we were unable to recover it. 00:28:51.546 [2024-07-15 16:10:20.271551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.546 [2024-07-15 16:10:20.271561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.546 qpair failed and we were unable to recover it. 
00:28:51.546 [2024-07-15 16:10:20.271716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.546 [2024-07-15 16:10:20.271727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.546 qpair failed and we were unable to recover it. 00:28:51.546 [2024-07-15 16:10:20.271913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.546 [2024-07-15 16:10:20.271924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.546 qpair failed and we were unable to recover it. 00:28:51.546 [2024-07-15 16:10:20.272011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.547 [2024-07-15 16:10:20.272022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.547 qpair failed and we were unable to recover it. 00:28:51.547 [2024-07-15 16:10:20.272127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.547 [2024-07-15 16:10:20.272138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.547 qpair failed and we were unable to recover it. 00:28:51.547 [2024-07-15 16:10:20.272280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.547 [2024-07-15 16:10:20.272292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.547 qpair failed and we were unable to recover it. 00:28:51.547 [2024-07-15 16:10:20.272472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.547 [2024-07-15 16:10:20.272483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.547 qpair failed and we were unable to recover it. 00:28:51.547 [2024-07-15 16:10:20.272612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.547 [2024-07-15 16:10:20.272623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.547 qpair failed and we were unable to recover it. 00:28:51.547 [2024-07-15 16:10:20.272820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.547 [2024-07-15 16:10:20.272832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.547 qpair failed and we were unable to recover it. 00:28:51.547 [2024-07-15 16:10:20.272944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.547 [2024-07-15 16:10:20.272956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.547 qpair failed and we were unable to recover it. 00:28:51.547 [2024-07-15 16:10:20.273135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.547 [2024-07-15 16:10:20.273146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.547 qpair failed and we were unable to recover it. 
00:28:51.547 [2024-07-15 16:10:20.273248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.547 [2024-07-15 16:10:20.273262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.547 qpair failed and we were unable to recover it. 00:28:51.547 [2024-07-15 16:10:20.273363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.547 [2024-07-15 16:10:20.273375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.547 qpair failed and we were unable to recover it. 00:28:51.547 [2024-07-15 16:10:20.273525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.547 [2024-07-15 16:10:20.273537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.547 qpair failed and we were unable to recover it. 00:28:51.547 [2024-07-15 16:10:20.273639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.547 [2024-07-15 16:10:20.273650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.547 qpair failed and we were unable to recover it. 00:28:51.547 [2024-07-15 16:10:20.273812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.547 [2024-07-15 16:10:20.273824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.547 qpair failed and we were unable to recover it. 00:28:51.547 [2024-07-15 16:10:20.273938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.547 [2024-07-15 16:10:20.273951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.547 qpair failed and we were unable to recover it. 00:28:51.547 [2024-07-15 16:10:20.274052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.547 [2024-07-15 16:10:20.274064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.547 qpair failed and we were unable to recover it. 00:28:51.547 [2024-07-15 16:10:20.274285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.547 [2024-07-15 16:10:20.274300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.547 qpair failed and we were unable to recover it. 00:28:51.547 [2024-07-15 16:10:20.274403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.547 [2024-07-15 16:10:20.274415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.547 qpair failed and we were unable to recover it. 00:28:51.547 [2024-07-15 16:10:20.274545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.547 [2024-07-15 16:10:20.274557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.547 qpair failed and we were unable to recover it. 
00:28:51.547 [2024-07-15 16:10:20.274664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.547 [2024-07-15 16:10:20.274675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.547 qpair failed and we were unable to recover it. 00:28:51.547 [2024-07-15 16:10:20.274778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.547 [2024-07-15 16:10:20.274789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.547 qpair failed and we were unable to recover it. 00:28:51.547 [2024-07-15 16:10:20.275017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.547 [2024-07-15 16:10:20.275029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.547 qpair failed and we were unable to recover it. 00:28:51.547 [2024-07-15 16:10:20.275150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.547 [2024-07-15 16:10:20.275163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.547 qpair failed and we were unable to recover it. 00:28:51.547 [2024-07-15 16:10:20.275272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.547 [2024-07-15 16:10:20.275283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.547 qpair failed and we were unable to recover it. 00:28:51.547 [2024-07-15 16:10:20.275486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.547 [2024-07-15 16:10:20.275498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.547 qpair failed and we were unable to recover it. 00:28:51.547 [2024-07-15 16:10:20.275670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.547 [2024-07-15 16:10:20.275681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.547 qpair failed and we were unable to recover it. 00:28:51.547 [2024-07-15 16:10:20.275844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.547 [2024-07-15 16:10:20.275856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.547 qpair failed and we were unable to recover it. 00:28:51.547 [2024-07-15 16:10:20.276012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.547 [2024-07-15 16:10:20.276023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.547 qpair failed and we were unable to recover it. 00:28:51.547 [2024-07-15 16:10:20.276137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.547 [2024-07-15 16:10:20.276149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.547 qpair failed and we were unable to recover it. 
00:28:51.547 [2024-07-15 16:10:20.276256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.547 [2024-07-15 16:10:20.276268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.547 qpair failed and we were unable to recover it. 00:28:51.547 [2024-07-15 16:10:20.276386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.547 [2024-07-15 16:10:20.276396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.547 qpair failed and we were unable to recover it. 00:28:51.547 [2024-07-15 16:10:20.276548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.547 [2024-07-15 16:10:20.276559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.547 qpair failed and we were unable to recover it. 00:28:51.547 [2024-07-15 16:10:20.276718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.547 [2024-07-15 16:10:20.276729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.547 qpair failed and we were unable to recover it. 00:28:51.547 [2024-07-15 16:10:20.276886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.547 [2024-07-15 16:10:20.276898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.547 qpair failed and we were unable to recover it. 00:28:51.547 [2024-07-15 16:10:20.277089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.547 [2024-07-15 16:10:20.277101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.547 qpair failed and we were unable to recover it. 00:28:51.547 [2024-07-15 16:10:20.277273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.547 [2024-07-15 16:10:20.277284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.547 qpair failed and we were unable to recover it. 00:28:51.547 [2024-07-15 16:10:20.277547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.547 [2024-07-15 16:10:20.277559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.547 qpair failed and we were unable to recover it. 00:28:51.547 [2024-07-15 16:10:20.277732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.547 [2024-07-15 16:10:20.277744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.547 qpair failed and we were unable to recover it. 00:28:51.547 [2024-07-15 16:10:20.277907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.547 [2024-07-15 16:10:20.277917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.547 qpair failed and we were unable to recover it. 
00:28:51.547 [2024-07-15 16:10:20.278068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.547 [2024-07-15 16:10:20.278079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.547 qpair failed and we were unable to recover it. 00:28:51.547 [2024-07-15 16:10:20.278237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.547 [2024-07-15 16:10:20.278249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.547 qpair failed and we were unable to recover it. 00:28:51.547 [2024-07-15 16:10:20.278412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.548 [2024-07-15 16:10:20.278424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.548 qpair failed and we were unable to recover it. 00:28:51.548 [2024-07-15 16:10:20.278592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.548 [2024-07-15 16:10:20.278604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.548 qpair failed and we were unable to recover it. 00:28:51.548 [2024-07-15 16:10:20.278794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.548 [2024-07-15 16:10:20.278806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.548 qpair failed and we were unable to recover it. 00:28:51.548 [2024-07-15 16:10:20.279002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.548 [2024-07-15 16:10:20.279013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.548 qpair failed and we were unable to recover it. 00:28:51.548 [2024-07-15 16:10:20.279104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.548 [2024-07-15 16:10:20.279115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.548 qpair failed and we were unable to recover it. 00:28:51.548 [2024-07-15 16:10:20.279289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.548 [2024-07-15 16:10:20.279300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.548 qpair failed and we were unable to recover it. 00:28:51.548 [2024-07-15 16:10:20.279417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.548 [2024-07-15 16:10:20.279429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.548 qpair failed and we were unable to recover it. 00:28:51.548 [2024-07-15 16:10:20.279589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.548 [2024-07-15 16:10:20.279600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.548 qpair failed and we were unable to recover it. 
00:28:51.548 [2024-07-15 16:10:20.279754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.548 [2024-07-15 16:10:20.279768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.548 qpair failed and we were unable to recover it. 00:28:51.548 [2024-07-15 16:10:20.279876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.548 [2024-07-15 16:10:20.279887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.548 qpair failed and we were unable to recover it. 00:28:51.548 [2024-07-15 16:10:20.280046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.548 [2024-07-15 16:10:20.280058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.548 qpair failed and we were unable to recover it. 00:28:51.548 [2024-07-15 16:10:20.280215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.548 [2024-07-15 16:10:20.280240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.548 qpair failed and we were unable to recover it. 00:28:51.548 [2024-07-15 16:10:20.280402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.548 [2024-07-15 16:10:20.280415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.548 qpair failed and we were unable to recover it. 00:28:51.548 [2024-07-15 16:10:20.280637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.548 [2024-07-15 16:10:20.280647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.548 qpair failed and we were unable to recover it. 00:28:51.548 [2024-07-15 16:10:20.280744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.548 [2024-07-15 16:10:20.280755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.548 qpair failed and we were unable to recover it. 00:28:51.548 [2024-07-15 16:10:20.280916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.548 [2024-07-15 16:10:20.280928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.548 qpair failed and we were unable to recover it. 00:28:51.548 [2024-07-15 16:10:20.281090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.548 [2024-07-15 16:10:20.281101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.548 qpair failed and we were unable to recover it. 00:28:51.548 [2024-07-15 16:10:20.281278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.548 [2024-07-15 16:10:20.281290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.548 qpair failed and we were unable to recover it. 
00:28:51.548 [2024-07-15 16:10:20.281470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.548 [2024-07-15 16:10:20.281482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.548 qpair failed and we were unable to recover it. 00:28:51.548 [2024-07-15 16:10:20.281630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.548 [2024-07-15 16:10:20.281641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.548 qpair failed and we were unable to recover it. 00:28:51.548 [2024-07-15 16:10:20.281804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.548 [2024-07-15 16:10:20.281816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.548 qpair failed and we were unable to recover it. 00:28:51.548 [2024-07-15 16:10:20.281988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.548 [2024-07-15 16:10:20.281999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.548 qpair failed and we were unable to recover it. 00:28:51.548 [2024-07-15 16:10:20.282157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.548 [2024-07-15 16:10:20.282169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.548 qpair failed and we were unable to recover it. 00:28:51.548 [2024-07-15 16:10:20.282413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.548 [2024-07-15 16:10:20.282425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.548 qpair failed and we were unable to recover it. 00:28:51.548 [2024-07-15 16:10:20.282581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.548 [2024-07-15 16:10:20.282592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.548 qpair failed and we were unable to recover it. 00:28:51.548 [2024-07-15 16:10:20.282695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.548 [2024-07-15 16:10:20.282706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.548 qpair failed and we were unable to recover it. 00:28:51.548 [2024-07-15 16:10:20.282875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.548 [2024-07-15 16:10:20.282886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.548 qpair failed and we were unable to recover it. 00:28:51.548 [2024-07-15 16:10:20.283079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.548 [2024-07-15 16:10:20.283091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.548 qpair failed and we were unable to recover it. 
00:28:51.548 [2024-07-15 16:10:20.283290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.548 [2024-07-15 16:10:20.283302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.548 qpair failed and we were unable to recover it. 00:28:51.548 [2024-07-15 16:10:20.283397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.548 [2024-07-15 16:10:20.283409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.548 qpair failed and we were unable to recover it. 00:28:51.548 [2024-07-15 16:10:20.283510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.548 [2024-07-15 16:10:20.283522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.548 qpair failed and we were unable to recover it. 00:28:51.548 [2024-07-15 16:10:20.283645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.548 [2024-07-15 16:10:20.283656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.548 qpair failed and we were unable to recover it. 00:28:51.548 [2024-07-15 16:10:20.283807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.548 [2024-07-15 16:10:20.283819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.548 qpair failed and we were unable to recover it. 00:28:51.548 [2024-07-15 16:10:20.283945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.548 [2024-07-15 16:10:20.283957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.548 qpair failed and we were unable to recover it. 00:28:51.548 [2024-07-15 16:10:20.284143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.548 [2024-07-15 16:10:20.284155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.548 qpair failed and we were unable to recover it. 00:28:51.548 [2024-07-15 16:10:20.284311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.548 [2024-07-15 16:10:20.284323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.548 qpair failed and we were unable to recover it. 00:28:51.548 [2024-07-15 16:10:20.284545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.548 [2024-07-15 16:10:20.284556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.548 qpair failed and we were unable to recover it. 00:28:51.548 [2024-07-15 16:10:20.284664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.548 [2024-07-15 16:10:20.284675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.548 qpair failed and we were unable to recover it. 
00:28:51.549 [2024-07-15 16:10:20.284839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.549 [2024-07-15 16:10:20.284850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.549 qpair failed and we were unable to recover it. 00:28:51.549 [2024-07-15 16:10:20.285010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.549 [2024-07-15 16:10:20.285022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.549 qpair failed and we were unable to recover it. 00:28:51.549 [2024-07-15 16:10:20.285121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.549 [2024-07-15 16:10:20.285132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.549 qpair failed and we were unable to recover it. 00:28:51.549 [2024-07-15 16:10:20.285233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.549 [2024-07-15 16:10:20.285245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.549 qpair failed and we were unable to recover it. 00:28:51.549 [2024-07-15 16:10:20.285412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.549 [2024-07-15 16:10:20.285424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.549 qpair failed and we were unable to recover it. 00:28:51.549 [2024-07-15 16:10:20.285512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.549 [2024-07-15 16:10:20.285522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.549 qpair failed and we were unable to recover it. 00:28:51.549 [2024-07-15 16:10:20.285640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.549 [2024-07-15 16:10:20.285651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.549 qpair failed and we were unable to recover it. 00:28:51.549 [2024-07-15 16:10:20.285839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.549 [2024-07-15 16:10:20.285850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.549 qpair failed and we were unable to recover it. 00:28:51.549 [2024-07-15 16:10:20.285983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.549 [2024-07-15 16:10:20.285994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.549 qpair failed and we were unable to recover it. 00:28:51.549 [2024-07-15 16:10:20.286089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.549 [2024-07-15 16:10:20.286101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.549 qpair failed and we were unable to recover it. 
00:28:51.549 [2024-07-15 16:10:20.286292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.549 [2024-07-15 16:10:20.286305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.549 qpair failed and we were unable to recover it. 00:28:51.549 [2024-07-15 16:10:20.286501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.549 [2024-07-15 16:10:20.286512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.549 qpair failed and we were unable to recover it. 00:28:51.549 [2024-07-15 16:10:20.286825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.549 [2024-07-15 16:10:20.286837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.549 qpair failed and we were unable to recover it. 00:28:51.549 [2024-07-15 16:10:20.286954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.549 [2024-07-15 16:10:20.286965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.549 qpair failed and we were unable to recover it. 00:28:51.549 [2024-07-15 16:10:20.287063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.549 [2024-07-15 16:10:20.287074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.549 qpair failed and we were unable to recover it. 00:28:51.549 [2024-07-15 16:10:20.287178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.549 [2024-07-15 16:10:20.287190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.549 qpair failed and we were unable to recover it. 00:28:51.549 [2024-07-15 16:10:20.287347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.549 [2024-07-15 16:10:20.287358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.549 qpair failed and we were unable to recover it. 00:28:51.549 [2024-07-15 16:10:20.287447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.549 [2024-07-15 16:10:20.287457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.549 qpair failed and we were unable to recover it. 00:28:51.549 [2024-07-15 16:10:20.287526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.549 [2024-07-15 16:10:20.287536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.549 qpair failed and we were unable to recover it. 00:28:51.549 [2024-07-15 16:10:20.287629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.549 [2024-07-15 16:10:20.287641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.549 qpair failed and we were unable to recover it. 
00:28:51.549 [2024-07-15 16:10:20.287807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.549 [2024-07-15 16:10:20.287819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.549 qpair failed and we were unable to recover it. 00:28:51.549 [2024-07-15 16:10:20.287983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.549 [2024-07-15 16:10:20.287994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.549 qpair failed and we were unable to recover it. 00:28:51.549 [2024-07-15 16:10:20.288163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.549 [2024-07-15 16:10:20.288174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.549 qpair failed and we were unable to recover it. 00:28:51.549 [2024-07-15 16:10:20.288328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.549 [2024-07-15 16:10:20.288340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.549 qpair failed and we were unable to recover it. 00:28:51.549 [2024-07-15 16:10:20.288509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.549 [2024-07-15 16:10:20.288520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.549 qpair failed and we were unable to recover it. 00:28:51.549 [2024-07-15 16:10:20.288656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.549 [2024-07-15 16:10:20.288669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.549 qpair failed and we were unable to recover it. 00:28:51.549 [2024-07-15 16:10:20.288828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.549 [2024-07-15 16:10:20.288839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.549 qpair failed and we were unable to recover it. 00:28:51.549 [2024-07-15 16:10:20.288996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.549 [2024-07-15 16:10:20.289007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.549 qpair failed and we were unable to recover it. 00:28:51.549 [2024-07-15 16:10:20.289174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.549 [2024-07-15 16:10:20.289185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.549 qpair failed and we were unable to recover it. 00:28:51.549 [2024-07-15 16:10:20.289371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.549 [2024-07-15 16:10:20.289384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.549 qpair failed and we were unable to recover it. 
00:28:51.549 [2024-07-15 16:10:20.289472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.549 [2024-07-15 16:10:20.289485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.549 qpair failed and we were unable to recover it. 00:28:51.549 [2024-07-15 16:10:20.289640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.549 [2024-07-15 16:10:20.289653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.549 qpair failed and we were unable to recover it. 00:28:51.549 [2024-07-15 16:10:20.289845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.549 [2024-07-15 16:10:20.289860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.549 qpair failed and we were unable to recover it. 00:28:51.549 [2024-07-15 16:10:20.289957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.549 [2024-07-15 16:10:20.289970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.549 qpair failed and we were unable to recover it. 00:28:51.549 [2024-07-15 16:10:20.290082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.549 [2024-07-15 16:10:20.290095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.549 qpair failed and we were unable to recover it. 00:28:51.549 [2024-07-15 16:10:20.290201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.549 [2024-07-15 16:10:20.290214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.549 qpair failed and we were unable to recover it. 00:28:51.549 [2024-07-15 16:10:20.290343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.549 [2024-07-15 16:10:20.290376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:51.549 qpair failed and we were unable to recover it. 00:28:51.549 [2024-07-15 16:10:20.290499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.549 [2024-07-15 16:10:20.290521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420 00:28:51.549 qpair failed and we were unable to recover it. 00:28:51.549 [2024-07-15 16:10:20.290699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.549 [2024-07-15 16:10:20.290715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420 00:28:51.549 qpair failed and we were unable to recover it. 00:28:51.549 [2024-07-15 16:10:20.290878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.549 [2024-07-15 16:10:20.290893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420 00:28:51.549 qpair failed and we were unable to recover it. 
00:28:51.550 [2024-07-15 16:10:20.291006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.550 [2024-07-15 16:10:20.291022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420 00:28:51.550 qpair failed and we were unable to recover it. 00:28:51.550 [2024-07-15 16:10:20.291195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.550 [2024-07-15 16:10:20.291211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420 00:28:51.550 qpair failed and we were unable to recover it. 00:28:51.550 [2024-07-15 16:10:20.291321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.550 [2024-07-15 16:10:20.291336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.550 qpair failed and we were unable to recover it. 00:28:51.550 [2024-07-15 16:10:20.291554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.550 [2024-07-15 16:10:20.291566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.550 qpair failed and we were unable to recover it. 00:28:51.550 [2024-07-15 16:10:20.291665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.550 [2024-07-15 16:10:20.291677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.550 qpair failed and we were unable to recover it. 00:28:51.550 [2024-07-15 16:10:20.291841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.550 [2024-07-15 16:10:20.291852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.550 qpair failed and we were unable to recover it. 00:28:51.550 [2024-07-15 16:10:20.292031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.550 [2024-07-15 16:10:20.292044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.550 qpair failed and we were unable to recover it. 00:28:51.550 [2024-07-15 16:10:20.292158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.550 [2024-07-15 16:10:20.292169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.550 qpair failed and we were unable to recover it. 00:28:51.550 [2024-07-15 16:10:20.292390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.550 [2024-07-15 16:10:20.292402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.550 qpair failed and we were unable to recover it. 00:28:51.550 [2024-07-15 16:10:20.292497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.550 [2024-07-15 16:10:20.292508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.550 qpair failed and we were unable to recover it. 
00:28:51.550 [2024-07-15 16:10:20.292672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.550 [2024-07-15 16:10:20.292687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.550 qpair failed and we were unable to recover it. 00:28:51.550 [2024-07-15 16:10:20.292808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.550 [2024-07-15 16:10:20.292820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.550 qpair failed and we were unable to recover it. 00:28:51.550 [2024-07-15 16:10:20.292993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.550 [2024-07-15 16:10:20.293004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.550 qpair failed and we were unable to recover it. 00:28:51.550 [2024-07-15 16:10:20.293109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.550 [2024-07-15 16:10:20.293122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.550 qpair failed and we were unable to recover it. 00:28:51.550 [2024-07-15 16:10:20.293352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.550 [2024-07-15 16:10:20.293364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.550 qpair failed and we were unable to recover it. 00:28:51.550 [2024-07-15 16:10:20.293460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.550 [2024-07-15 16:10:20.293472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.550 qpair failed and we were unable to recover it. 00:28:51.550 [2024-07-15 16:10:20.293566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.550 [2024-07-15 16:10:20.293578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.550 qpair failed and we were unable to recover it. 00:28:51.550 [2024-07-15 16:10:20.293743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.550 [2024-07-15 16:10:20.293756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.550 qpair failed and we were unable to recover it. 00:28:51.550 [2024-07-15 16:10:20.293836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.550 [2024-07-15 16:10:20.293847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.550 qpair failed and we were unable to recover it. 00:28:51.550 [2024-07-15 16:10:20.294077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.550 [2024-07-15 16:10:20.294088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.550 qpair failed and we were unable to recover it. 
00:28:51.550 [2024-07-15 16:10:20.294212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.550 [2024-07-15 16:10:20.294228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.550 qpair failed and we were unable to recover it. 00:28:51.550 [2024-07-15 16:10:20.294391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.550 [2024-07-15 16:10:20.294404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.550 qpair failed and we were unable to recover it. 00:28:51.550 [2024-07-15 16:10:20.294615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.550 [2024-07-15 16:10:20.294628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.550 qpair failed and we were unable to recover it. 00:28:51.550 [2024-07-15 16:10:20.294810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.550 [2024-07-15 16:10:20.294822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.550 qpair failed and we were unable to recover it. 00:28:51.550 [2024-07-15 16:10:20.294947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.550 [2024-07-15 16:10:20.294960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.550 qpair failed and we were unable to recover it. 00:28:51.550 [2024-07-15 16:10:20.295066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.550 [2024-07-15 16:10:20.295078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.550 qpair failed and we were unable to recover it. 00:28:51.550 [2024-07-15 16:10:20.295196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.550 [2024-07-15 16:10:20.295208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.550 qpair failed and we were unable to recover it. 00:28:51.550 [2024-07-15 16:10:20.295321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.550 [2024-07-15 16:10:20.295334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.550 qpair failed and we were unable to recover it. 00:28:51.550 [2024-07-15 16:10:20.295445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.550 [2024-07-15 16:10:20.295457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.550 qpair failed and we were unable to recover it. 00:28:51.550 [2024-07-15 16:10:20.295552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.550 [2024-07-15 16:10:20.295564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.550 qpair failed and we were unable to recover it. 
00:28:51.550 [2024-07-15 16:10:20.295669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.550 [2024-07-15 16:10:20.295682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.550 qpair failed and we were unable to recover it. 00:28:51.550 [2024-07-15 16:10:20.295779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.550 [2024-07-15 16:10:20.295791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.550 qpair failed and we were unable to recover it. 00:28:51.550 [2024-07-15 16:10:20.295889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.550 [2024-07-15 16:10:20.295901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.550 qpair failed and we were unable to recover it. 00:28:51.550 [2024-07-15 16:10:20.296077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.550 [2024-07-15 16:10:20.296089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.550 qpair failed and we were unable to recover it. 00:28:51.550 [2024-07-15 16:10:20.296186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.551 [2024-07-15 16:10:20.296199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.551 qpair failed and we were unable to recover it. 00:28:51.551 [2024-07-15 16:10:20.296376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.551 [2024-07-15 16:10:20.296389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.551 qpair failed and we were unable to recover it. 00:28:51.551 [2024-07-15 16:10:20.296564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.551 [2024-07-15 16:10:20.296577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.551 qpair failed and we were unable to recover it. 00:28:51.551 [2024-07-15 16:10:20.296738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.551 [2024-07-15 16:10:20.296750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.551 qpair failed and we were unable to recover it. 00:28:51.551 [2024-07-15 16:10:20.296846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.551 [2024-07-15 16:10:20.296858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.551 qpair failed and we were unable to recover it. 00:28:51.551 [2024-07-15 16:10:20.296960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.551 [2024-07-15 16:10:20.296971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.551 qpair failed and we were unable to recover it. 
00:28:51.551 [2024-07-15 16:10:20.297133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.551 [2024-07-15 16:10:20.297146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.551 qpair failed and we were unable to recover it. 00:28:51.551 [2024-07-15 16:10:20.297254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.551 [2024-07-15 16:10:20.297267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.551 qpair failed and we were unable to recover it. 00:28:51.551 [2024-07-15 16:10:20.297337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.551 [2024-07-15 16:10:20.297349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.551 qpair failed and we were unable to recover it. 00:28:51.551 [2024-07-15 16:10:20.297475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.551 [2024-07-15 16:10:20.297487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.551 qpair failed and we were unable to recover it. 00:28:51.551 [2024-07-15 16:10:20.297650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.551 [2024-07-15 16:10:20.297662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.551 qpair failed and we were unable to recover it. 00:28:51.551 [2024-07-15 16:10:20.297824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.551 [2024-07-15 16:10:20.297836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.551 qpair failed and we were unable to recover it. 00:28:51.551 [2024-07-15 16:10:20.297929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.551 [2024-07-15 16:10:20.297941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.551 qpair failed and we were unable to recover it. 00:28:51.551 [2024-07-15 16:10:20.298107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.551 [2024-07-15 16:10:20.298120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.551 qpair failed and we were unable to recover it. 00:28:51.551 [2024-07-15 16:10:20.298229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.551 [2024-07-15 16:10:20.298241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.551 qpair failed and we were unable to recover it. 00:28:51.551 [2024-07-15 16:10:20.298411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.551 [2024-07-15 16:10:20.298424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.551 qpair failed and we were unable to recover it. 
00:28:51.551 [2024-07-15 16:10:20.298527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.551 [2024-07-15 16:10:20.298542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.551 qpair failed and we were unable to recover it. 00:28:51.551 [2024-07-15 16:10:20.298633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.551 [2024-07-15 16:10:20.298643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.551 qpair failed and we were unable to recover it. 00:28:51.551 [2024-07-15 16:10:20.298800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.551 [2024-07-15 16:10:20.298811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.551 qpair failed and we were unable to recover it. 00:28:51.551 [2024-07-15 16:10:20.298982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.551 [2024-07-15 16:10:20.298994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.551 qpair failed and we were unable to recover it. 00:28:51.551 [2024-07-15 16:10:20.299073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.551 [2024-07-15 16:10:20.299084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.551 qpair failed and we were unable to recover it. 00:28:51.551 [2024-07-15 16:10:20.299204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.551 [2024-07-15 16:10:20.299216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.551 qpair failed and we were unable to recover it. 00:28:51.551 [2024-07-15 16:10:20.299520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.551 [2024-07-15 16:10:20.299532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.551 qpair failed and we were unable to recover it. 00:28:51.551 [2024-07-15 16:10:20.299656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.551 [2024-07-15 16:10:20.299668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.551 qpair failed and we were unable to recover it. 00:28:51.551 [2024-07-15 16:10:20.299831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.551 [2024-07-15 16:10:20.299842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.551 qpair failed and we were unable to recover it. 00:28:51.551 [2024-07-15 16:10:20.299977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.551 [2024-07-15 16:10:20.299989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.551 qpair failed and we were unable to recover it. 
00:28:51.551 [2024-07-15 16:10:20.300158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.551 [2024-07-15 16:10:20.300170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.551 qpair failed and we were unable to recover it. 00:28:51.551 [2024-07-15 16:10:20.300355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.551 [2024-07-15 16:10:20.300367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.551 qpair failed and we were unable to recover it. 00:28:51.551 [2024-07-15 16:10:20.300587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.551 [2024-07-15 16:10:20.300599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.551 qpair failed and we were unable to recover it. 00:28:51.551 [2024-07-15 16:10:20.300794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.551 [2024-07-15 16:10:20.300805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.551 qpair failed and we were unable to recover it. 00:28:51.551 [2024-07-15 16:10:20.300977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.551 [2024-07-15 16:10:20.300989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.551 qpair failed and we were unable to recover it. 00:28:51.552 [2024-07-15 16:10:20.301160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.552 [2024-07-15 16:10:20.301172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.552 qpair failed and we were unable to recover it. 00:28:51.552 [2024-07-15 16:10:20.301279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.552 [2024-07-15 16:10:20.301291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.552 qpair failed and we were unable to recover it. 00:28:51.552 [2024-07-15 16:10:20.301393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.552 [2024-07-15 16:10:20.301405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.552 qpair failed and we were unable to recover it. 00:28:51.552 [2024-07-15 16:10:20.301582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.552 [2024-07-15 16:10:20.301593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.552 qpair failed and we were unable to recover it. 00:28:51.552 [2024-07-15 16:10:20.301784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.552 [2024-07-15 16:10:20.301795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.552 qpair failed and we were unable to recover it. 
00:28:51.552 [2024-07-15 16:10:20.301952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.552 [2024-07-15 16:10:20.301963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.552 qpair failed and we were unable to recover it. 00:28:51.552 [2024-07-15 16:10:20.302130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.552 [2024-07-15 16:10:20.302141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.552 qpair failed and we were unable to recover it. 00:28:51.552 [2024-07-15 16:10:20.302244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.552 [2024-07-15 16:10:20.302256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.552 qpair failed and we were unable to recover it. 00:28:51.552 [2024-07-15 16:10:20.302420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.552 [2024-07-15 16:10:20.302431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.552 qpair failed and we were unable to recover it. 00:28:51.552 [2024-07-15 16:10:20.302604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.552 [2024-07-15 16:10:20.302617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.552 qpair failed and we were unable to recover it. 00:28:51.552 [2024-07-15 16:10:20.302758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.552 [2024-07-15 16:10:20.302769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.552 qpair failed and we were unable to recover it. 00:28:51.552 [2024-07-15 16:10:20.302937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.552 [2024-07-15 16:10:20.302948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.552 qpair failed and we were unable to recover it. 00:28:51.552 [2024-07-15 16:10:20.303028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.552 [2024-07-15 16:10:20.303038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.552 qpair failed and we were unable to recover it. 00:28:51.552 [2024-07-15 16:10:20.303221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.552 [2024-07-15 16:10:20.303236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.552 qpair failed and we were unable to recover it. 00:28:51.552 [2024-07-15 16:10:20.303339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.552 [2024-07-15 16:10:20.303351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.552 qpair failed and we were unable to recover it. 
00:28:51.552 [2024-07-15 16:10:20.303444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.552 [2024-07-15 16:10:20.303455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.552 qpair failed and we were unable to recover it. 00:28:51.552 [2024-07-15 16:10:20.303570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.552 [2024-07-15 16:10:20.303581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.552 qpair failed and we were unable to recover it. 00:28:51.552 [2024-07-15 16:10:20.303743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.552 [2024-07-15 16:10:20.303756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.552 qpair failed and we were unable to recover it. 00:28:51.552 [2024-07-15 16:10:20.303824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.552 [2024-07-15 16:10:20.303834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.552 qpair failed and we were unable to recover it. 00:28:51.552 [2024-07-15 16:10:20.303990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.552 [2024-07-15 16:10:20.304001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.552 qpair failed and we were unable to recover it. 00:28:51.552 [2024-07-15 16:10:20.304178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.552 [2024-07-15 16:10:20.304189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.552 qpair failed and we were unable to recover it. 00:28:51.552 [2024-07-15 16:10:20.304312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.552 [2024-07-15 16:10:20.304323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.552 qpair failed and we were unable to recover it. 00:28:51.552 [2024-07-15 16:10:20.304480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.552 [2024-07-15 16:10:20.304492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.552 qpair failed and we were unable to recover it. 00:28:51.552 [2024-07-15 16:10:20.304713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.552 [2024-07-15 16:10:20.304725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.552 qpair failed and we were unable to recover it. 00:28:51.552 [2024-07-15 16:10:20.304899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.552 [2024-07-15 16:10:20.304910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.552 qpair failed and we were unable to recover it. 
00:28:51.552 [2024-07-15 16:10:20.304997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.552 [2024-07-15 16:10:20.305012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.552 qpair failed and we were unable to recover it. 00:28:51.552 [2024-07-15 16:10:20.305177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.552 [2024-07-15 16:10:20.305189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.552 qpair failed and we were unable to recover it. 00:28:51.552 [2024-07-15 16:10:20.305286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.552 [2024-07-15 16:10:20.305298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.552 qpair failed and we were unable to recover it. 00:28:51.552 [2024-07-15 16:10:20.305461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.552 [2024-07-15 16:10:20.305472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.552 qpair failed and we were unable to recover it. 00:28:51.552 [2024-07-15 16:10:20.305643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.552 [2024-07-15 16:10:20.305654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.552 qpair failed and we were unable to recover it. 00:28:51.552 [2024-07-15 16:10:20.305895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.552 [2024-07-15 16:10:20.305906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.552 qpair failed and we were unable to recover it. 00:28:51.552 [2024-07-15 16:10:20.306002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.552 [2024-07-15 16:10:20.306013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.552 qpair failed and we were unable to recover it. 00:28:51.552 [2024-07-15 16:10:20.306236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.552 [2024-07-15 16:10:20.306247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.552 qpair failed and we were unable to recover it. 00:28:51.552 [2024-07-15 16:10:20.306422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.552 [2024-07-15 16:10:20.306434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.552 qpair failed and we were unable to recover it. 00:28:51.552 [2024-07-15 16:10:20.306547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.552 [2024-07-15 16:10:20.306559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.552 qpair failed and we were unable to recover it. 
00:28:51.552 [2024-07-15 16:10:20.306648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.552 [2024-07-15 16:10:20.306660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.552 qpair failed and we were unable to recover it. 00:28:51.553 [2024-07-15 16:10:20.306939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.553 [2024-07-15 16:10:20.306950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.553 qpair failed and we were unable to recover it. 00:28:51.553 [2024-07-15 16:10:20.307059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.553 [2024-07-15 16:10:20.307071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.553 qpair failed and we were unable to recover it. 00:28:51.553 [2024-07-15 16:10:20.307240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.553 [2024-07-15 16:10:20.307252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.553 qpair failed and we were unable to recover it. 00:28:51.553 [2024-07-15 16:10:20.307410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.553 [2024-07-15 16:10:20.307423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.553 qpair failed and we were unable to recover it. 00:28:51.553 [2024-07-15 16:10:20.307634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.553 [2024-07-15 16:10:20.307645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.553 qpair failed and we were unable to recover it. 00:28:51.553 [2024-07-15 16:10:20.307814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.553 [2024-07-15 16:10:20.307826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.553 qpair failed and we were unable to recover it. 00:28:51.553 [2024-07-15 16:10:20.307948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.553 [2024-07-15 16:10:20.307959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.553 qpair failed and we were unable to recover it. 00:28:51.553 [2024-07-15 16:10:20.308119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.553 [2024-07-15 16:10:20.308131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.553 qpair failed and we were unable to recover it. 00:28:51.553 [2024-07-15 16:10:20.308241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.553 [2024-07-15 16:10:20.308253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.553 qpair failed and we were unable to recover it. 
00:28:51.553 [2024-07-15 16:10:20.308423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.553 [2024-07-15 16:10:20.308434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.553 qpair failed and we were unable to recover it. 00:28:51.553 [2024-07-15 16:10:20.308584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.553 [2024-07-15 16:10:20.308596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.553 qpair failed and we were unable to recover it. 00:28:51.553 [2024-07-15 16:10:20.308754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.553 [2024-07-15 16:10:20.308765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.553 qpair failed and we were unable to recover it. 00:28:51.553 [2024-07-15 16:10:20.308877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.553 [2024-07-15 16:10:20.308888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.553 qpair failed and we were unable to recover it. 00:28:51.553 [2024-07-15 16:10:20.309090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.553 [2024-07-15 16:10:20.309102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.553 qpair failed and we were unable to recover it. 00:28:51.553 [2024-07-15 16:10:20.309207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.553 [2024-07-15 16:10:20.309218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.553 qpair failed and we were unable to recover it. 00:28:51.553 [2024-07-15 16:10:20.309298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.553 [2024-07-15 16:10:20.309308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.553 qpair failed and we were unable to recover it. 00:28:51.553 [2024-07-15 16:10:20.309616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.553 [2024-07-15 16:10:20.309643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:51.553 qpair failed and we were unable to recover it. 00:28:51.553 [2024-07-15 16:10:20.309811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.553 [2024-07-15 16:10:20.309826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:51.553 qpair failed and we were unable to recover it. 00:28:51.553 [2024-07-15 16:10:20.309955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.553 [2024-07-15 16:10:20.309970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:51.553 qpair failed and we were unable to recover it. 
00:28:51.553 [2024-07-15 16:10:20.310082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.553 [2024-07-15 16:10:20.310097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:51.553 qpair failed and we were unable to recover it. 00:28:51.553 [2024-07-15 16:10:20.310231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.553 [2024-07-15 16:10:20.310248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:51.553 qpair failed and we were unable to recover it. 00:28:51.553 [2024-07-15 16:10:20.310440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.553 [2024-07-15 16:10:20.310455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:51.553 qpair failed and we were unable to recover it. 00:28:51.553 [2024-07-15 16:10:20.310618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.553 [2024-07-15 16:10:20.310633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:51.553 qpair failed and we were unable to recover it. 00:28:51.553 [2024-07-15 16:10:20.310796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.553 [2024-07-15 16:10:20.310811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:51.553 qpair failed and we were unable to recover it. 00:28:51.553 [2024-07-15 16:10:20.311053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.553 [2024-07-15 16:10:20.311068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:51.553 qpair failed and we were unable to recover it. 00:28:51.553 [2024-07-15 16:10:20.311179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.553 [2024-07-15 16:10:20.311194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:51.553 qpair failed and we were unable to recover it. 00:28:51.553 [2024-07-15 16:10:20.311306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.553 [2024-07-15 16:10:20.311322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:51.553 qpair failed and we were unable to recover it. 00:28:51.553 [2024-07-15 16:10:20.311465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.553 [2024-07-15 16:10:20.311481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:51.553 qpair failed and we were unable to recover it. 00:28:51.553 [2024-07-15 16:10:20.311578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.553 [2024-07-15 16:10:20.311593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:51.553 qpair failed and we were unable to recover it. 
00:28:51.553 [2024-07-15 16:10:20.311774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.553 [2024-07-15 16:10:20.311788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:51.553 qpair failed and we were unable to recover it. 00:28:51.553 [2024-07-15 16:10:20.311916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.553 [2024-07-15 16:10:20.311931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:51.553 qpair failed and we were unable to recover it. 00:28:51.553 [2024-07-15 16:10:20.312041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.553 [2024-07-15 16:10:20.312056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:51.553 qpair failed and we were unable to recover it. 00:28:51.553 [2024-07-15 16:10:20.312292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.553 [2024-07-15 16:10:20.312308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:51.553 qpair failed and we were unable to recover it. 00:28:51.553 [2024-07-15 16:10:20.312479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.553 [2024-07-15 16:10:20.312494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:51.553 qpair failed and we were unable to recover it. 00:28:51.553 [2024-07-15 16:10:20.312612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.553 [2024-07-15 16:10:20.312627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:51.553 qpair failed and we were unable to recover it. 00:28:51.553 [2024-07-15 16:10:20.312875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.553 [2024-07-15 16:10:20.312890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:51.554 qpair failed and we were unable to recover it. 00:28:51.554 [2024-07-15 16:10:20.313067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.554 [2024-07-15 16:10:20.313081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:51.554 qpair failed and we were unable to recover it. 00:28:51.554 [2024-07-15 16:10:20.313263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.554 [2024-07-15 16:10:20.313278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:51.554 qpair failed and we were unable to recover it. 00:28:51.554 [2024-07-15 16:10:20.313471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.554 [2024-07-15 16:10:20.313487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:51.554 qpair failed and we were unable to recover it. 
00:28:51.554 [2024-07-15 16:10:20.313681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.554 [2024-07-15 16:10:20.313696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:51.554 qpair failed and we were unable to recover it. 00:28:51.554 [2024-07-15 16:10:20.313814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.554 [2024-07-15 16:10:20.313830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:51.554 qpair failed and we were unable to recover it. 00:28:51.554 [2024-07-15 16:10:20.314003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.554 [2024-07-15 16:10:20.314018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:51.554 qpair failed and we were unable to recover it. 00:28:51.554 [2024-07-15 16:10:20.314189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.554 [2024-07-15 16:10:20.314204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:51.554 qpair failed and we were unable to recover it. 00:28:51.554 [2024-07-15 16:10:20.314371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.554 [2024-07-15 16:10:20.314391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:51.554 qpair failed and we were unable to recover it. 00:28:51.554 [2024-07-15 16:10:20.314562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.554 [2024-07-15 16:10:20.314576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:51.554 qpair failed and we were unable to recover it. 00:28:51.554 [2024-07-15 16:10:20.314690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.554 [2024-07-15 16:10:20.314705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:51.554 qpair failed and we were unable to recover it. 00:28:51.554 [2024-07-15 16:10:20.314950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.554 [2024-07-15 16:10:20.314965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:51.554 qpair failed and we were unable to recover it. 00:28:51.554 [2024-07-15 16:10:20.315087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.554 [2024-07-15 16:10:20.315102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:51.554 qpair failed and we were unable to recover it. 00:28:51.554 [2024-07-15 16:10:20.315285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.554 [2024-07-15 16:10:20.315300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:51.554 qpair failed and we were unable to recover it. 
00:28:51.554 [2024-07-15 16:10:20.315472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.554 [2024-07-15 16:10:20.315487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:51.554 qpair failed and we were unable to recover it. 00:28:51.554 [2024-07-15 16:10:20.315590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.554 [2024-07-15 16:10:20.315605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:51.554 qpair failed and we were unable to recover it. 00:28:51.554 [2024-07-15 16:10:20.315814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.554 [2024-07-15 16:10:20.315829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:51.554 qpair failed and we were unable to recover it. 00:28:51.554 [2024-07-15 16:10:20.315942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.554 [2024-07-15 16:10:20.315957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:51.554 qpair failed and we were unable to recover it. 00:28:51.554 [2024-07-15 16:10:20.316078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.554 [2024-07-15 16:10:20.316094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:51.554 qpair failed and we were unable to recover it. 00:28:51.554 [2024-07-15 16:10:20.316283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.554 [2024-07-15 16:10:20.316299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:51.554 qpair failed and we were unable to recover it. 00:28:51.554 [2024-07-15 16:10:20.316408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.554 [2024-07-15 16:10:20.316423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:51.554 qpair failed and we were unable to recover it. 00:28:51.554 [2024-07-15 16:10:20.316624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.554 [2024-07-15 16:10:20.316639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:51.554 qpair failed and we were unable to recover it. 00:28:51.554 [2024-07-15 16:10:20.316770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.554 [2024-07-15 16:10:20.316785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:51.554 qpair failed and we were unable to recover it. 00:28:51.554 [2024-07-15 16:10:20.316888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.554 [2024-07-15 16:10:20.316904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:51.554 qpair failed and we were unable to recover it. 
00:28:51.554 [2024-07-15 16:10:20.317136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.554 [2024-07-15 16:10:20.317151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:51.554 qpair failed and we were unable to recover it. 00:28:51.554 [2024-07-15 16:10:20.317255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.554 [2024-07-15 16:10:20.317270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:51.554 qpair failed and we were unable to recover it. 00:28:51.554 [2024-07-15 16:10:20.317434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.554 [2024-07-15 16:10:20.317449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:51.554 qpair failed and we were unable to recover it. 00:28:51.554 [2024-07-15 16:10:20.317547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.554 [2024-07-15 16:10:20.317562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:51.554 qpair failed and we were unable to recover it. 00:28:51.554 [2024-07-15 16:10:20.317729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.554 [2024-07-15 16:10:20.317744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:51.554 qpair failed and we were unable to recover it. 00:28:51.554 [2024-07-15 16:10:20.317860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.554 [2024-07-15 16:10:20.317874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.554 qpair failed and we were unable to recover it. 00:28:51.554 [2024-07-15 16:10:20.318047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.554 [2024-07-15 16:10:20.318059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.554 qpair failed and we were unable to recover it. 00:28:51.554 [2024-07-15 16:10:20.318281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.554 [2024-07-15 16:10:20.318294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.554 qpair failed and we were unable to recover it. 00:28:51.554 [2024-07-15 16:10:20.318393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.554 [2024-07-15 16:10:20.318405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.554 qpair failed and we were unable to recover it. 00:28:51.554 [2024-07-15 16:10:20.318651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.554 [2024-07-15 16:10:20.318662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.554 qpair failed and we were unable to recover it. 
00:28:51.554 [2024-07-15 16:10:20.318787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.554 [2024-07-15 16:10:20.318799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420
00:28:51.554 qpair failed and we were unable to recover it.
00:28:51.554 [2024-07-15 16:10:20.318974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.554 [2024-07-15 16:10:20.318988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420
00:28:51.554 qpair failed and we were unable to recover it.
00:28:51.554 [2024-07-15 16:10:20.319089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.554 [2024-07-15 16:10:20.319100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420
00:28:51.555 qpair failed and we were unable to recover it.
00:28:51.555 [2024-07-15 16:10:20.319281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.555 [2024-07-15 16:10:20.319293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420
00:28:51.555 qpair failed and we were unable to recover it.
00:28:51.555 [2024-07-15 16:10:20.319452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.555 [2024-07-15 16:10:20.319463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420
00:28:51.555 qpair failed and we were unable to recover it.
00:28:51.555 [2024-07-15 16:10:20.319566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.555 [2024-07-15 16:10:20.319577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420
00:28:51.555 qpair failed and we were unable to recover it.
00:28:51.555 [2024-07-15 16:10:20.319741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.555 [2024-07-15 16:10:20.319754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420
00:28:51.555 qpair failed and we were unable to recover it.
00:28:51.555 [2024-07-15 16:10:20.319859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.555 [2024-07-15 16:10:20.319872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420
00:28:51.555 qpair failed and we were unable to recover it.
00:28:51.555 [2024-07-15 16:10:20.319978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.555 [2024-07-15 16:10:20.319990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420
00:28:51.555 qpair failed and we were unable to recover it.
00:28:51.555 [2024-07-15 16:10:20.320099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.555 [2024-07-15 16:10:20.320110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420
00:28:51.555 qpair failed and we were unable to recover it.
00:28:51.555 [2024-07-15 16:10:20.320197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.555 [2024-07-15 16:10:20.320208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420
00:28:51.555 qpair failed and we were unable to recover it.
00:28:51.555 [2024-07-15 16:10:20.320305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.555 [2024-07-15 16:10:20.320318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420
00:28:51.555 qpair failed and we were unable to recover it.
00:28:51.555 [2024-07-15 16:10:20.320427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.555 [2024-07-15 16:10:20.320438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420
00:28:51.555 qpair failed and we were unable to recover it.
00:28:51.555 [2024-07-15 16:10:20.320540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.555 [2024-07-15 16:10:20.320552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420
00:28:51.555 qpair failed and we were unable to recover it.
00:28:51.555 [2024-07-15 16:10:20.320741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.555 [2024-07-15 16:10:20.320753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420
00:28:51.555 qpair failed and we were unable to recover it.
00:28:51.555 [2024-07-15 16:10:20.320866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.555 [2024-07-15 16:10:20.320878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420
00:28:51.555 qpair failed and we were unable to recover it.
00:28:51.555 [2024-07-15 16:10:20.321041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.555 [2024-07-15 16:10:20.321053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420
00:28:51.555 qpair failed and we were unable to recover it.
00:28:51.555 [2024-07-15 16:10:20.321210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.555 [2024-07-15 16:10:20.321222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420
00:28:51.555 qpair failed and we were unable to recover it.
00:28:51.555 [2024-07-15 16:10:20.321413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.555 [2024-07-15 16:10:20.321425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420
00:28:51.555 qpair failed and we were unable to recover it.
00:28:51.555 [2024-07-15 16:10:20.321590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.555 [2024-07-15 16:10:20.321602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420
00:28:51.555 qpair failed and we were unable to recover it.
00:28:51.555 [2024-07-15 16:10:20.321761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.555 [2024-07-15 16:10:20.321773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420
00:28:51.555 qpair failed and we were unable to recover it.
00:28:51.555 [2024-07-15 16:10:20.322019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.555 [2024-07-15 16:10:20.322031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420
00:28:51.555 qpair failed and we were unable to recover it.
00:28:51.555 [2024-07-15 16:10:20.322196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.555 [2024-07-15 16:10:20.322207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420
00:28:51.555 qpair failed and we were unable to recover it.
00:28:51.555 [2024-07-15 16:10:20.322398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.555 [2024-07-15 16:10:20.322411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420
00:28:51.555 qpair failed and we were unable to recover it.
00:28:51.555 [2024-07-15 16:10:20.322563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.555 [2024-07-15 16:10:20.322575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420
00:28:51.555 qpair failed and we were unable to recover it.
00:28:51.555 [2024-07-15 16:10:20.322666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.555 [2024-07-15 16:10:20.322680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420
00:28:51.555 qpair failed and we were unable to recover it.
00:28:51.555 [2024-07-15 16:10:20.322860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.555 [2024-07-15 16:10:20.322872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420
00:28:51.555 qpair failed and we were unable to recover it.
00:28:51.555 [2024-07-15 16:10:20.323058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.555 [2024-07-15 16:10:20.323070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420
00:28:51.555 qpair failed and we were unable to recover it.
00:28:51.555 [2024-07-15 16:10:20.323245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.555 [2024-07-15 16:10:20.323257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420
00:28:51.555 qpair failed and we were unable to recover it.
00:28:51.555 [2024-07-15 16:10:20.323438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.555 [2024-07-15 16:10:20.323450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420
00:28:51.555 qpair failed and we were unable to recover it.
00:28:51.555 [2024-07-15 16:10:20.323528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.555 [2024-07-15 16:10:20.323538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420
00:28:51.555 qpair failed and we were unable to recover it.
00:28:51.555 [2024-07-15 16:10:20.323648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.555 [2024-07-15 16:10:20.323659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420
00:28:51.555 qpair failed and we were unable to recover it.
00:28:51.555 [2024-07-15 16:10:20.323811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.555 [2024-07-15 16:10:20.323823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420
00:28:51.555 qpair failed and we were unable to recover it.
00:28:51.555 [2024-07-15 16:10:20.323919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.555 [2024-07-15 16:10:20.323930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420
00:28:51.556 qpair failed and we were unable to recover it.
00:28:51.556 [2024-07-15 16:10:20.324028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.556 [2024-07-15 16:10:20.324039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420
00:28:51.556 qpair failed and we were unable to recover it.
00:28:51.556 [2024-07-15 16:10:20.324258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.556 [2024-07-15 16:10:20.324270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420
00:28:51.556 qpair failed and we were unable to recover it.
00:28:51.556 [2024-07-15 16:10:20.324379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.556 [2024-07-15 16:10:20.324392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420
00:28:51.556 qpair failed and we were unable to recover it.
00:28:51.556 [2024-07-15 16:10:20.324496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.556 [2024-07-15 16:10:20.324508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420
00:28:51.556 qpair failed and we were unable to recover it.
00:28:51.556 [2024-07-15 16:10:20.324671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.556 [2024-07-15 16:10:20.324683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420
00:28:51.556 qpair failed and we were unable to recover it.
00:28:51.556 [2024-07-15 16:10:20.324919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.556 [2024-07-15 16:10:20.324931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420
00:28:51.556 qpair failed and we were unable to recover it.
00:28:51.556 [2024-07-15 16:10:20.325057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.556 [2024-07-15 16:10:20.325069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420
00:28:51.556 qpair failed and we were unable to recover it.
00:28:51.556 [2024-07-15 16:10:20.325200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.556 [2024-07-15 16:10:20.325213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420
00:28:51.556 qpair failed and we were unable to recover it.
00:28:51.556 [2024-07-15 16:10:20.325416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.556 [2024-07-15 16:10:20.325427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420
00:28:51.556 qpair failed and we were unable to recover it.
00:28:51.556 [2024-07-15 16:10:20.325591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.556 [2024-07-15 16:10:20.325604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420
00:28:51.556 qpair failed and we were unable to recover it.
00:28:51.556 [2024-07-15 16:10:20.325715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.556 [2024-07-15 16:10:20.325727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420
00:28:51.556 qpair failed and we were unable to recover it.
00:28:51.556 [2024-07-15 16:10:20.325836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.556 [2024-07-15 16:10:20.325848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420
00:28:51.556 qpair failed and we were unable to recover it.
00:28:51.556 [2024-07-15 16:10:20.326016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.556 [2024-07-15 16:10:20.326027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420
00:28:51.556 qpair failed and we were unable to recover it.
00:28:51.556 [2024-07-15 16:10:20.326182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.556 [2024-07-15 16:10:20.326193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420
00:28:51.556 qpair failed and we were unable to recover it.
00:28:51.556 [2024-07-15 16:10:20.326281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.556 [2024-07-15 16:10:20.326293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420
00:28:51.556 qpair failed and we were unable to recover it.
00:28:51.556 [2024-07-15 16:10:20.326409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.556 [2024-07-15 16:10:20.326420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420
00:28:51.556 qpair failed and we were unable to recover it.
00:28:51.556 [2024-07-15 16:10:20.326537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.556 [2024-07-15 16:10:20.326550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420
00:28:51.556 qpair failed and we were unable to recover it.
00:28:51.556 [2024-07-15 16:10:20.326705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.556 [2024-07-15 16:10:20.326716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420
00:28:51.556 qpair failed and we were unable to recover it.
00:28:51.556 [2024-07-15 16:10:20.326870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.556 [2024-07-15 16:10:20.326882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420
00:28:51.556 qpair failed and we were unable to recover it.
00:28:51.556 [2024-07-15 16:10:20.326980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.556 [2024-07-15 16:10:20.326992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420
00:28:51.556 qpair failed and we were unable to recover it.
00:28:51.556 [2024-07-15 16:10:20.327249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.556 [2024-07-15 16:10:20.327262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420
00:28:51.556 qpair failed and we were unable to recover it.
00:28:51.556 [2024-07-15 16:10:20.327435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.556 [2024-07-15 16:10:20.327450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420
00:28:51.556 qpair failed and we were unable to recover it.
00:28:51.556 [2024-07-15 16:10:20.327612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.556 [2024-07-15 16:10:20.327625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420
00:28:51.556 qpair failed and we were unable to recover it.
00:28:51.556 [2024-07-15 16:10:20.327850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.556 [2024-07-15 16:10:20.327861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420
00:28:51.556 qpair failed and we were unable to recover it.
00:28:51.556 [2024-07-15 16:10:20.328112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.556 [2024-07-15 16:10:20.328124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420
00:28:51.556 qpair failed and we were unable to recover it.
00:28:51.556 [2024-07-15 16:10:20.328284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.556 [2024-07-15 16:10:20.328296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420
00:28:51.556 qpair failed and we were unable to recover it.
00:28:51.556 [2024-07-15 16:10:20.328526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.556 [2024-07-15 16:10:20.328538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420
00:28:51.556 qpair failed and we were unable to recover it.
00:28:51.556 [2024-07-15 16:10:20.328642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.556 [2024-07-15 16:10:20.328654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420
00:28:51.556 qpair failed and we were unable to recover it.
00:28:51.556 [2024-07-15 16:10:20.328841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.556 [2024-07-15 16:10:20.328852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420
00:28:51.556 qpair failed and we were unable to recover it.
00:28:51.556 [2024-07-15 16:10:20.328920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.556 [2024-07-15 16:10:20.328932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420
00:28:51.556 qpair failed and we were unable to recover it.
00:28:51.556 [2024-07-15 16:10:20.329086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.556 [2024-07-15 16:10:20.329098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420
00:28:51.556 qpair failed and we were unable to recover it.
00:28:51.556 [2024-07-15 16:10:20.329328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.556 [2024-07-15 16:10:20.329341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420
00:28:51.556 qpair failed and we were unable to recover it.
00:28:51.556 [2024-07-15 16:10:20.329446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.556 [2024-07-15 16:10:20.329461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420
00:28:51.556 qpair failed and we were unable to recover it.
00:28:51.557 [2024-07-15 16:10:20.329622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.557 [2024-07-15 16:10:20.329635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420
00:28:51.557 qpair failed and we were unable to recover it.
00:28:51.557 [2024-07-15 16:10:20.329768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.557 [2024-07-15 16:10:20.329797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420
00:28:51.557 qpair failed and we were unable to recover it.
00:28:51.557 [2024-07-15 16:10:20.329969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.557 [2024-07-15 16:10:20.329985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420
00:28:51.557 qpair failed and we were unable to recover it.
00:28:51.557 [2024-07-15 16:10:20.330083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.557 [2024-07-15 16:10:20.330099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420
00:28:51.557 qpair failed and we were unable to recover it.
00:28:51.557 [2024-07-15 16:10:20.330270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.557 [2024-07-15 16:10:20.330288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420
00:28:51.557 qpair failed and we were unable to recover it.
00:28:51.557 [2024-07-15 16:10:20.330454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.557 [2024-07-15 16:10:20.330470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420
00:28:51.557 qpair failed and we were unable to recover it.
00:28:51.557 [2024-07-15 16:10:20.330562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.557 [2024-07-15 16:10:20.330577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420
00:28:51.557 qpair failed and we were unable to recover it.
00:28:51.557 [2024-07-15 16:10:20.330752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.557 [2024-07-15 16:10:20.330767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420
00:28:51.557 qpair failed and we were unable to recover it.
00:28:51.557 [2024-07-15 16:10:20.330928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.557 [2024-07-15 16:10:20.330943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420
00:28:51.557 qpair failed and we were unable to recover it.
00:28:51.557 [2024-07-15 16:10:20.330961] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:28:51.557 [2024-07-15 16:10:20.330989] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:28:51.557 [2024-07-15 16:10:20.330996] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:28:51.557 [2024-07-15 16:10:20.331003] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running.
00:28:51.557 [2024-07-15 16:10:20.331009] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:28:51.557 [2024-07-15 16:10:20.331106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.557 [2024-07-15 16:10:20.331120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420
00:28:51.557 qpair failed and we were unable to recover it.
00:28:51.557 [2024-07-15 16:10:20.331191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.557 [2024-07-15 16:10:20.331204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420
00:28:51.557 qpair failed and we were unable to recover it.
00:28:51.557 [2024-07-15 16:10:20.331122] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5
00:28:51.557 [2024-07-15 16:10:20.331340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.557 [2024-07-15 16:10:20.331353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420
00:28:51.557 qpair failed and we were unable to recover it.
00:28:51.557 [2024-07-15 16:10:20.331285] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6
00:28:51.557 [2024-07-15 16:10:20.331392] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4
00:28:51.557 [2024-07-15 16:10:20.331394] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7
00:28:51.557 [2024-07-15 16:10:20.331527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.557 [2024-07-15 16:10:20.331544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420
00:28:51.557 qpair failed and we were unable to recover it.
00:28:51.557 [2024-07-15 16:10:20.331747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.557 [2024-07-15 16:10:20.331762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420
00:28:51.557 qpair failed and we were unable to recover it.
00:28:51.557 [2024-07-15 16:10:20.331890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.557 [2024-07-15 16:10:20.331905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420
00:28:51.557 qpair failed and we were unable to recover it.
00:28:51.557 [2024-07-15 16:10:20.332090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.557 [2024-07-15 16:10:20.332105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420
00:28:51.557 qpair failed and we were unable to recover it.
00:28:51.557 [2024-07-15 16:10:20.332287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.557 [2024-07-15 16:10:20.332303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420
00:28:51.557 qpair failed and we were unable to recover it.
00:28:51.557 [2024-07-15 16:10:20.332404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.557 [2024-07-15 16:10:20.332419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420
00:28:51.557 qpair failed and we were unable to recover it.
00:28:51.557 [2024-07-15 16:10:20.332534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.557 [2024-07-15 16:10:20.332550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420
00:28:51.557 qpair failed and we were unable to recover it.
00:28:51.557 [2024-07-15 16:10:20.332723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.557 [2024-07-15 16:10:20.332739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420
00:28:51.557 qpair failed and we were unable to recover it.
00:28:51.557 [2024-07-15 16:10:20.332863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.557 [2024-07-15 16:10:20.332879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420
00:28:51.557 qpair failed and we were unable to recover it.
00:28:51.557 [2024-07-15 16:10:20.333065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.557 [2024-07-15 16:10:20.333081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420
00:28:51.557 qpair failed and we were unable to recover it.
00:28:51.557 [2024-07-15 16:10:20.333266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.557 [2024-07-15 16:10:20.333281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420
00:28:51.557 qpair failed and we were unable to recover it.
00:28:51.557 [2024-07-15 16:10:20.333445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.557 [2024-07-15 16:10:20.333460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420
00:28:51.557 qpair failed and we were unable to recover it.
00:28:51.557 [2024-07-15 16:10:20.333579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.557 [2024-07-15 16:10:20.333593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420
00:28:51.557 qpair failed and we were unable to recover it.
00:28:51.557 [2024-07-15 16:10:20.333762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.557 [2024-07-15 16:10:20.333774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420
00:28:51.557 qpair failed and we were unable to recover it.
00:28:51.557 [2024-07-15 16:10:20.333878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.557 [2024-07-15 16:10:20.333890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420
00:28:51.557 qpair failed and we were unable to recover it.
00:28:51.557 [2024-07-15 16:10:20.333989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.557 [2024-07-15 16:10:20.334001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420
00:28:51.557 qpair failed and we were unable to recover it.
00:28:51.557 [2024-07-15 16:10:20.334091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.557 [2024-07-15 16:10:20.334103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420
00:28:51.557 qpair failed and we were unable to recover it.
00:28:51.557 [2024-07-15 16:10:20.334276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.557 [2024-07-15 16:10:20.334289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420
00:28:51.557 qpair failed and we were unable to recover it.
00:28:51.557 [2024-07-15 16:10:20.334446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.557 [2024-07-15 16:10:20.334458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420
00:28:51.557 qpair failed and we were unable to recover it.
00:28:51.557 [2024-07-15 16:10:20.334556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.558 [2024-07-15 16:10:20.334568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420
00:28:51.558 qpair failed and we were unable to recover it.
00:28:51.558 [2024-07-15 16:10:20.334676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.558 [2024-07-15 16:10:20.334688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420
00:28:51.558 qpair failed and we were unable to recover it.
00:28:51.558 [2024-07-15 16:10:20.334792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.558 [2024-07-15 16:10:20.334803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420
00:28:51.558 qpair failed and we were unable to recover it.
00:28:51.558 [2024-07-15 16:10:20.334959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.558 [2024-07-15 16:10:20.334972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420
00:28:51.558 qpair failed and we were unable to recover it.
00:28:51.558 [2024-07-15 16:10:20.335076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.558 [2024-07-15 16:10:20.335089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420
00:28:51.558 qpair failed and we were unable to recover it.
00:28:51.558 [2024-07-15 16:10:20.335264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.558 [2024-07-15 16:10:20.335276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420
00:28:51.558 qpair failed and we were unable to recover it.
00:28:51.558 [2024-07-15 16:10:20.335448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.558 [2024-07-15 16:10:20.335459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420
00:28:51.558 qpair failed and we were unable to recover it.
00:28:51.558 [2024-07-15 16:10:20.335635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.558 [2024-07-15 16:10:20.335647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420
00:28:51.558 qpair failed and we were unable to recover it.
00:28:51.558 [2024-07-15 16:10:20.335737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.558 [2024-07-15 16:10:20.335750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420
00:28:51.558 qpair failed and we were unable to recover it.
00:28:51.558 [2024-07-15 16:10:20.335829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.558 [2024-07-15 16:10:20.335841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420
00:28:51.558 qpair failed and we were unable to recover it.
00:28:51.558 [2024-07-15 16:10:20.336021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.558 [2024-07-15 16:10:20.336032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420
00:28:51.558 qpair failed and we were unable to recover it.
00:28:51.558 [2024-07-15 16:10:20.336147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.558 [2024-07-15 16:10:20.336159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420
00:28:51.558 qpair failed and we were unable to recover it.
00:28:51.558 [2024-07-15 16:10:20.336271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.558 [2024-07-15 16:10:20.336282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420
00:28:51.558 qpair failed and we were unable to recover it.
00:28:51.558 [2024-07-15 16:10:20.336458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.558 [2024-07-15 16:10:20.336469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420
00:28:51.558 qpair failed and we were unable to recover it.
00:28:51.558 [2024-07-15 16:10:20.336580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.558 [2024-07-15 16:10:20.336591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420
00:28:51.558 qpair failed and we were unable to recover it.
00:28:51.558 [2024-07-15 16:10:20.336685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.558 [2024-07-15 16:10:20.336697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420
00:28:51.558 qpair failed and we were unable to recover it.
00:28:51.558 [2024-07-15 16:10:20.336877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.558 [2024-07-15 16:10:20.336889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420
00:28:51.558 qpair failed and we were unable to recover it.
00:28:51.558 [2024-07-15 16:10:20.337014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.558 [2024-07-15 16:10:20.337025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420
00:28:51.558 qpair failed and we were unable to recover it.
00:28:51.558 [2024-07-15 16:10:20.337247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.558 [2024-07-15 16:10:20.337260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420
00:28:51.558 qpair failed and we were unable to recover it.
00:28:51.558 [2024-07-15 16:10:20.337361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.558 [2024-07-15 16:10:20.337373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420
00:28:51.558 qpair failed and we were unable to recover it.
00:28:51.558 [2024-07-15 16:10:20.337644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.558 [2024-07-15 16:10:20.337658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420
00:28:51.558 qpair failed and we were unable to recover it.
00:28:51.558 [2024-07-15 16:10:20.337814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.558 [2024-07-15 16:10:20.337826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420
00:28:51.558 qpair failed and we were unable to recover it.
00:28:51.558 [2024-07-15 16:10:20.337940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.558 [2024-07-15 16:10:20.337952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420
00:28:51.558 qpair failed and we were unable to recover it.
00:28:51.558 [2024-07-15 16:10:20.338116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.558 [2024-07-15 16:10:20.338128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420
00:28:51.558 qpair failed and we were unable to recover it.
00:28:51.558 [2024-07-15 16:10:20.338397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.558 [2024-07-15 16:10:20.338410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420
00:28:51.558 qpair failed and we were unable to recover it.
00:28:51.558 [2024-07-15 16:10:20.338564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.558 [2024-07-15 16:10:20.338576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420
00:28:51.558 qpair failed and we were unable to recover it.
00:28:51.558 [2024-07-15 16:10:20.338680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.558 [2024-07-15 16:10:20.338693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420
00:28:51.558 qpair failed and we were unable to recover it.
00:28:51.558 [2024-07-15 16:10:20.338913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.558 [2024-07-15 16:10:20.338924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420
00:28:51.558 qpair failed and we were unable to recover it.
00:28:51.558 [2024-07-15 16:10:20.339157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.558 [2024-07-15 16:10:20.339170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420
00:28:51.558 qpair failed and we were unable to recover it.
00:28:51.558 [2024-07-15 16:10:20.339329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.558 [2024-07-15 16:10:20.339342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420
00:28:51.558 qpair failed and we were unable to recover it.
00:28:51.558 [2024-07-15 16:10:20.339575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.558 [2024-07-15 16:10:20.339588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420
00:28:51.558 qpair failed and we were unable to recover it.
00:28:51.558 [2024-07-15 16:10:20.339701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.558 [2024-07-15 16:10:20.339714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420
00:28:51.558 qpair failed and we were unable to recover it.
00:28:51.558 [2024-07-15 16:10:20.339901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.558 [2024-07-15 16:10:20.339913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420
00:28:51.558 qpair failed and we were unable to recover it.
00:28:51.558 [2024-07-15 16:10:20.340029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.558 [2024-07-15 16:10:20.340042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420
00:28:51.558 qpair failed and we were unable to recover it.
00:28:51.558 [2024-07-15 16:10:20.340153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.558 [2024-07-15 16:10:20.340165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420
00:28:51.558 qpair failed and we were unable to recover it.
00:28:51.558 [2024-07-15 16:10:20.340259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.559 [2024-07-15 16:10:20.340271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420
00:28:51.559 qpair failed and we were unable to recover it.
00:28:51.559 [2024-07-15 16:10:20.340504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.559 [2024-07-15 16:10:20.340517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420
00:28:51.559 qpair failed and we were unable to recover it.
00:28:51.559 [2024-07-15 16:10:20.340674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.559 [2024-07-15 16:10:20.340687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420
00:28:51.559 qpair failed and we were unable to recover it.
00:28:51.559 [2024-07-15 16:10:20.340936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.559 [2024-07-15 16:10:20.340948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420
00:28:51.559 qpair failed and we were unable to recover it.
00:28:51.559 [2024-07-15 16:10:20.341170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.559 [2024-07-15 16:10:20.341183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420
00:28:51.559 qpair failed and we were unable to recover it.
00:28:51.559 [2024-07-15 16:10:20.341285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.559 [2024-07-15 16:10:20.341298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420
00:28:51.559 qpair failed and we were unable to recover it.
00:28:51.559 [2024-07-15 16:10:20.341466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.559 [2024-07-15 16:10:20.341478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420
00:28:51.559 qpair failed and we were unable to recover it.
00:28:51.559 [2024-07-15 16:10:20.341702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.559 [2024-07-15 16:10:20.341715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420
00:28:51.559 qpair failed and we were unable to recover it.
00:28:51.559 [2024-07-15 16:10:20.341948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.559 [2024-07-15 16:10:20.341961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420
00:28:51.559 qpair failed and we were unable to recover it.
00:28:51.559 [2024-07-15 16:10:20.342073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.559 [2024-07-15 16:10:20.342085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420
00:28:51.559 qpair failed and we were unable to recover it.
00:28:51.559 [2024-07-15 16:10:20.342196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.559 [2024-07-15 16:10:20.342208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420
00:28:51.559 qpair failed and we were unable to recover it.
00:28:51.559 [2024-07-15 16:10:20.342431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.559 [2024-07-15 16:10:20.342444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420
00:28:51.559 qpair failed and we were unable to recover it.
00:28:51.559 [2024-07-15 16:10:20.342545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.559 [2024-07-15 16:10:20.342558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420
00:28:51.559 qpair failed and we were unable to recover it.
00:28:51.559 [2024-07-15 16:10:20.342656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.559 [2024-07-15 16:10:20.342668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420
00:28:51.559 qpair failed and we were unable to recover it.
00:28:51.559 [2024-07-15 16:10:20.342888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.559 [2024-07-15 16:10:20.342900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420
00:28:51.559 qpair failed and we were unable to recover it.
00:28:51.559 [2024-07-15 16:10:20.343024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.559 [2024-07-15 16:10:20.343035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420
00:28:51.559 qpair failed and we were unable to recover it.
00:28:51.559 [2024-07-15 16:10:20.343256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.559 [2024-07-15 16:10:20.343269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420
00:28:51.559 qpair failed and we were unable to recover it.
00:28:51.559 [2024-07-15 16:10:20.343427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.559 [2024-07-15 16:10:20.343440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420
00:28:51.559 qpair failed and we were unable to recover it.
00:28:51.559 [2024-07-15 16:10:20.343608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.559 [2024-07-15 16:10:20.343620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420
00:28:51.559 qpair failed and we were unable to recover it.
00:28:51.559 [2024-07-15 16:10:20.343866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.559 [2024-07-15 16:10:20.343880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420
00:28:51.559 qpair failed and we were unable to recover it.
00:28:51.559 [2024-07-15 16:10:20.344045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.559 [2024-07-15 16:10:20.344058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420
00:28:51.559 qpair failed and we were unable to recover it.
00:28:51.559 [2024-07-15 16:10:20.344232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.559 [2024-07-15 16:10:20.344247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420
00:28:51.559 qpair failed and we were unable to recover it.
00:28:51.559 [2024-07-15 16:10:20.344456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.559 [2024-07-15 16:10:20.344470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420
00:28:51.559 qpair failed and we were unable to recover it.
00:28:51.559 [2024-07-15 16:10:20.344549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.559 [2024-07-15 16:10:20.344561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420
00:28:51.559 qpair failed and we were unable to recover it.
00:28:51.559 [2024-07-15 16:10:20.344670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.559 [2024-07-15 16:10:20.344682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420
00:28:51.559 qpair failed and we were unable to recover it.
00:28:51.559 [2024-07-15 16:10:20.344777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.559 [2024-07-15 16:10:20.344794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420
00:28:51.559 qpair failed and we were unable to recover it.
00:28:51.559 [2024-07-15 16:10:20.344971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.559 [2024-07-15 16:10:20.344984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420
00:28:51.559 qpair failed and we were unable to recover it.
00:28:51.559 [2024-07-15 16:10:20.345137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.559 [2024-07-15 16:10:20.345150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420
00:28:51.559 qpair failed and we were unable to recover it.
00:28:51.559 [2024-07-15 16:10:20.345260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.559 [2024-07-15 16:10:20.345273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420
00:28:51.559 qpair failed and we were unable to recover it.
00:28:51.559 [2024-07-15 16:10:20.345425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.559 [2024-07-15 16:10:20.345438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420
00:28:51.559 qpair failed and we were unable to recover it.
00:28:51.559 [2024-07-15 16:10:20.345535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.559 [2024-07-15 16:10:20.345546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420
00:28:51.559 qpair failed and we were unable to recover it.
00:28:51.559 [2024-07-15 16:10:20.345637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.559 [2024-07-15 16:10:20.345650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420
00:28:51.559 qpair failed and we were unable to recover it.
00:28:51.559 [2024-07-15 16:10:20.345743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.559 [2024-07-15 16:10:20.345756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420
00:28:51.559 qpair failed and we were unable to recover it.
00:28:51.559 [2024-07-15 16:10:20.345857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.559 [2024-07-15 16:10:20.345869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420
00:28:51.559 qpair failed and we were unable to recover it.
00:28:51.559 [2024-07-15 16:10:20.346094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.559 [2024-07-15 16:10:20.346106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.559 qpair failed and we were unable to recover it. 00:28:51.559 [2024-07-15 16:10:20.346218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.559 [2024-07-15 16:10:20.346233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.559 qpair failed and we were unable to recover it. 00:28:51.559 [2024-07-15 16:10:20.346456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.559 [2024-07-15 16:10:20.346469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.559 qpair failed and we were unable to recover it. 00:28:51.559 [2024-07-15 16:10:20.346575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.559 [2024-07-15 16:10:20.346587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.559 qpair failed and we were unable to recover it. 00:28:51.559 [2024-07-15 16:10:20.346705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.559 [2024-07-15 16:10:20.346718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.559 qpair failed and we were unable to recover it. 00:28:51.559 [2024-07-15 16:10:20.346821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.559 [2024-07-15 16:10:20.346833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.560 qpair failed and we were unable to recover it. 00:28:51.560 [2024-07-15 16:10:20.347008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.560 [2024-07-15 16:10:20.347021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.560 qpair failed and we were unable to recover it. 00:28:51.560 [2024-07-15 16:10:20.347234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.560 [2024-07-15 16:10:20.347248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.560 qpair failed and we were unable to recover it. 00:28:51.560 [2024-07-15 16:10:20.347467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.560 [2024-07-15 16:10:20.347481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.560 qpair failed and we were unable to recover it. 00:28:51.560 [2024-07-15 16:10:20.347582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.560 [2024-07-15 16:10:20.347595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.560 qpair failed and we were unable to recover it. 
00:28:51.560 [2024-07-15 16:10:20.347775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.560 [2024-07-15 16:10:20.347789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.560 qpair failed and we were unable to recover it. 00:28:51.560 [2024-07-15 16:10:20.347891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.560 [2024-07-15 16:10:20.347904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.560 qpair failed and we were unable to recover it. 00:28:51.560 [2024-07-15 16:10:20.348020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.560 [2024-07-15 16:10:20.348032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.560 qpair failed and we were unable to recover it. 00:28:51.560 [2024-07-15 16:10:20.348193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.560 [2024-07-15 16:10:20.348206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.560 qpair failed and we were unable to recover it. 00:28:51.560 [2024-07-15 16:10:20.348383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.560 [2024-07-15 16:10:20.348396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.560 qpair failed and we were unable to recover it. 00:28:51.560 [2024-07-15 16:10:20.348568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.560 [2024-07-15 16:10:20.348584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.560 qpair failed and we were unable to recover it. 00:28:51.560 [2024-07-15 16:10:20.348760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.560 [2024-07-15 16:10:20.348773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.560 qpair failed and we were unable to recover it. 00:28:51.560 [2024-07-15 16:10:20.348881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.560 [2024-07-15 16:10:20.348893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.560 qpair failed and we were unable to recover it. 00:28:51.560 [2024-07-15 16:10:20.349097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.560 [2024-07-15 16:10:20.349125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:51.560 qpair failed and we were unable to recover it. 00:28:51.560 [2024-07-15 16:10:20.349356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.560 [2024-07-15 16:10:20.349373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:51.560 qpair failed and we were unable to recover it. 
00:28:51.560 [2024-07-15 16:10:20.349484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.560 [2024-07-15 16:10:20.349500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:51.560 qpair failed and we were unable to recover it. 00:28:51.560 [2024-07-15 16:10:20.349669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.560 [2024-07-15 16:10:20.349686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:51.560 qpair failed and we were unable to recover it. 00:28:51.560 [2024-07-15 16:10:20.349851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.560 [2024-07-15 16:10:20.349868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:51.560 qpair failed and we were unable to recover it. 00:28:51.560 [2024-07-15 16:10:20.349979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.560 [2024-07-15 16:10:20.349995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:51.560 qpair failed and we were unable to recover it. 00:28:51.560 [2024-07-15 16:10:20.350242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.560 [2024-07-15 16:10:20.350260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:51.560 qpair failed and we were unable to recover it. 00:28:51.560 [2024-07-15 16:10:20.350381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.560 [2024-07-15 16:10:20.350397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:51.560 qpair failed and we were unable to recover it. 00:28:51.560 [2024-07-15 16:10:20.350561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.560 [2024-07-15 16:10:20.350577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:51.560 qpair failed and we were unable to recover it. 00:28:51.560 [2024-07-15 16:10:20.350756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.560 [2024-07-15 16:10:20.350771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:51.560 qpair failed and we were unable to recover it. 00:28:51.560 [2024-07-15 16:10:20.350883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.560 [2024-07-15 16:10:20.350898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:51.560 qpair failed and we were unable to recover it. 00:28:51.560 [2024-07-15 16:10:20.351155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.560 [2024-07-15 16:10:20.351172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:51.560 qpair failed and we were unable to recover it. 
00:28:51.560 [2024-07-15 16:10:20.351304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.560 [2024-07-15 16:10:20.351321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:51.560 qpair failed and we were unable to recover it. 00:28:51.560 [2024-07-15 16:10:20.351443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.560 [2024-07-15 16:10:20.351464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:51.560 qpair failed and we were unable to recover it. 00:28:51.560 [2024-07-15 16:10:20.351571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.560 [2024-07-15 16:10:20.351587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:51.560 qpair failed and we were unable to recover it. 00:28:51.560 [2024-07-15 16:10:20.351785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.560 [2024-07-15 16:10:20.351802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:51.560 qpair failed and we were unable to recover it. 00:28:51.560 [2024-07-15 16:10:20.352081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.560 [2024-07-15 16:10:20.352097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:51.560 qpair failed and we were unable to recover it. 00:28:51.560 [2024-07-15 16:10:20.352230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.560 [2024-07-15 16:10:20.352247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:51.560 qpair failed and we were unable to recover it. 00:28:51.560 [2024-07-15 16:10:20.352365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.560 [2024-07-15 16:10:20.352381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:51.560 qpair failed and we were unable to recover it. 00:28:51.560 [2024-07-15 16:10:20.352476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.560 [2024-07-15 16:10:20.352493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:51.560 qpair failed and we were unable to recover it. 00:28:51.560 [2024-07-15 16:10:20.352597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.561 [2024-07-15 16:10:20.352613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:51.561 qpair failed and we were unable to recover it. 00:28:51.561 [2024-07-15 16:10:20.352736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.561 [2024-07-15 16:10:20.352752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:51.561 qpair failed and we were unable to recover it. 
00:28:51.561 [2024-07-15 16:10:20.352939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.561 [2024-07-15 16:10:20.352956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:51.561 qpair failed and we were unable to recover it. 00:28:51.561 [2024-07-15 16:10:20.353088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.561 [2024-07-15 16:10:20.353105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:51.561 qpair failed and we were unable to recover it. 00:28:51.561 [2024-07-15 16:10:20.353360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.561 [2024-07-15 16:10:20.353378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:51.561 qpair failed and we were unable to recover it. 00:28:51.561 [2024-07-15 16:10:20.353553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.561 [2024-07-15 16:10:20.353568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:51.561 qpair failed and we were unable to recover it. 00:28:51.561 [2024-07-15 16:10:20.353683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.561 [2024-07-15 16:10:20.353700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:51.561 qpair failed and we were unable to recover it. 00:28:51.561 [2024-07-15 16:10:20.353877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.561 [2024-07-15 16:10:20.353894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:51.561 qpair failed and we were unable to recover it. 00:28:51.561 [2024-07-15 16:10:20.354004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.561 [2024-07-15 16:10:20.354020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:51.561 qpair failed and we were unable to recover it. 00:28:51.561 [2024-07-15 16:10:20.354278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.561 [2024-07-15 16:10:20.354295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:51.561 qpair failed and we were unable to recover it. 00:28:51.561 [2024-07-15 16:10:20.354487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.561 [2024-07-15 16:10:20.354504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:51.561 qpair failed and we were unable to recover it. 00:28:51.561 [2024-07-15 16:10:20.354681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.561 [2024-07-15 16:10:20.354699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:51.561 qpair failed and we were unable to recover it. 
00:28:51.561 [2024-07-15 16:10:20.354810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.561 [2024-07-15 16:10:20.354827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:51.561 qpair failed and we were unable to recover it. 00:28:51.561 [2024-07-15 16:10:20.354997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.561 [2024-07-15 16:10:20.355016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:51.561 qpair failed and we were unable to recover it. 00:28:51.561 [2024-07-15 16:10:20.355138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.561 [2024-07-15 16:10:20.355156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:51.561 qpair failed and we were unable to recover it. 00:28:51.561 [2024-07-15 16:10:20.355267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.561 [2024-07-15 16:10:20.355286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:51.561 qpair failed and we were unable to recover it. 00:28:51.561 [2024-07-15 16:10:20.355473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.561 [2024-07-15 16:10:20.355492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:51.561 qpair failed and we were unable to recover it. 00:28:51.561 [2024-07-15 16:10:20.355670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.561 [2024-07-15 16:10:20.355688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:51.561 qpair failed and we were unable to recover it. 00:28:51.561 [2024-07-15 16:10:20.355940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.561 [2024-07-15 16:10:20.355959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:51.561 qpair failed and we were unable to recover it. 00:28:51.561 [2024-07-15 16:10:20.356125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.561 [2024-07-15 16:10:20.356143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:51.561 qpair failed and we were unable to recover it. 00:28:51.561 [2024-07-15 16:10:20.356341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.561 [2024-07-15 16:10:20.356377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420 00:28:51.561 qpair failed and we were unable to recover it. 00:28:51.561 [2024-07-15 16:10:20.356508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.561 [2024-07-15 16:10:20.356524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420 00:28:51.561 qpair failed and we were unable to recover it. 
00:28:51.561 [2024-07-15 16:10:20.356686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.561 [2024-07-15 16:10:20.356702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420 00:28:51.561 qpair failed and we were unable to recover it. 00:28:51.561 [2024-07-15 16:10:20.356810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.561 [2024-07-15 16:10:20.356827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420 00:28:51.561 qpair failed and we were unable to recover it. 00:28:51.561 [2024-07-15 16:10:20.356999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.561 [2024-07-15 16:10:20.357014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420 00:28:51.561 qpair failed and we were unable to recover it. 00:28:51.561 [2024-07-15 16:10:20.357112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.561 [2024-07-15 16:10:20.357127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420 00:28:51.561 qpair failed and we were unable to recover it. 00:28:51.561 [2024-07-15 16:10:20.357385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.561 [2024-07-15 16:10:20.357402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420 00:28:51.561 qpair failed and we were unable to recover it. 00:28:51.561 [2024-07-15 16:10:20.357503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.561 [2024-07-15 16:10:20.357519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420 00:28:51.561 qpair failed and we were unable to recover it. 00:28:51.561 [2024-07-15 16:10:20.357754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.561 [2024-07-15 16:10:20.357769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420 00:28:51.561 qpair failed and we were unable to recover it. 00:28:51.561 [2024-07-15 16:10:20.357883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.561 [2024-07-15 16:10:20.357899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420 00:28:51.561 qpair failed and we were unable to recover it. 00:28:51.561 [2024-07-15 16:10:20.358000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.561 [2024-07-15 16:10:20.358017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420 00:28:51.561 qpair failed and we were unable to recover it. 00:28:51.561 [2024-07-15 16:10:20.358185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.561 [2024-07-15 16:10:20.358199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420 00:28:51.561 qpair failed and we were unable to recover it. 
00:28:51.561 [2024-07-15 16:10:20.358312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.561 [2024-07-15 16:10:20.358327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420 00:28:51.561 qpair failed and we were unable to recover it. 00:28:51.561 [2024-07-15 16:10:20.358555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.561 [2024-07-15 16:10:20.358576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420 00:28:51.561 qpair failed and we were unable to recover it. 00:28:51.561 [2024-07-15 16:10:20.358824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.561 [2024-07-15 16:10:20.358839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420 00:28:51.561 qpair failed and we were unable to recover it. 00:28:51.561 [2024-07-15 16:10:20.358942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.561 [2024-07-15 16:10:20.358957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420 00:28:51.561 qpair failed and we were unable to recover it. 00:28:51.561 [2024-07-15 16:10:20.359064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.561 [2024-07-15 16:10:20.359080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420 00:28:51.561 qpair failed and we were unable to recover it. 00:28:51.561 [2024-07-15 16:10:20.359245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.561 [2024-07-15 16:10:20.359261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420 00:28:51.561 qpair failed and we were unable to recover it. 00:28:51.561 [2024-07-15 16:10:20.359368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.561 [2024-07-15 16:10:20.359383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420 00:28:51.561 qpair failed and we were unable to recover it. 00:28:51.561 [2024-07-15 16:10:20.359559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.561 [2024-07-15 16:10:20.359575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420 00:28:51.561 qpair failed and we were unable to recover it. 00:28:51.561 [2024-07-15 16:10:20.359673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.561 [2024-07-15 16:10:20.359687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420 00:28:51.562 qpair failed and we were unable to recover it. 00:28:51.562 [2024-07-15 16:10:20.359789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.562 [2024-07-15 16:10:20.359805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420 00:28:51.562 qpair failed and we were unable to recover it. 
00:28:51.562 [2024-07-15 16:10:20.360045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.562 [2024-07-15 16:10:20.360060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420 00:28:51.562 qpair failed and we were unable to recover it. 00:28:51.562 [2024-07-15 16:10:20.360158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.562 [2024-07-15 16:10:20.360173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420 00:28:51.562 qpair failed and we were unable to recover it. 00:28:51.562 [2024-07-15 16:10:20.360401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.562 [2024-07-15 16:10:20.360418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420 00:28:51.562 qpair failed and we were unable to recover it. 00:28:51.562 [2024-07-15 16:10:20.360607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.562 [2024-07-15 16:10:20.360622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420 00:28:51.562 qpair failed and we were unable to recover it. 00:28:51.562 [2024-07-15 16:10:20.360754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.562 [2024-07-15 16:10:20.360769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420 00:28:51.562 qpair failed and we were unable to recover it. 00:28:51.562 [2024-07-15 16:10:20.360919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.562 [2024-07-15 16:10:20.360935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420 00:28:51.562 qpair failed and we were unable to recover it. 00:28:51.562 [2024-07-15 16:10:20.361097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.562 [2024-07-15 16:10:20.361113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420 00:28:51.562 qpair failed and we were unable to recover it. 00:28:51.562 [2024-07-15 16:10:20.361234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.562 [2024-07-15 16:10:20.361251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420 00:28:51.562 qpair failed and we were unable to recover it. 00:28:51.562 [2024-07-15 16:10:20.361368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.562 [2024-07-15 16:10:20.361386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420 00:28:51.562 qpair failed and we were unable to recover it. 00:28:51.562 [2024-07-15 16:10:20.361579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.562 [2024-07-15 16:10:20.361597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420 00:28:51.562 qpair failed and we were unable to recover it. 
00:28:51.562 [2024-07-15 16:10:20.361778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.562 [2024-07-15 16:10:20.361797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420 00:28:51.562 qpair failed and we were unable to recover it. 00:28:51.562 [2024-07-15 16:10:20.361974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.562 [2024-07-15 16:10:20.361991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420 00:28:51.562 qpair failed and we were unable to recover it. 00:28:51.562 [2024-07-15 16:10:20.362160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.562 [2024-07-15 16:10:20.362178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420 00:28:51.562 qpair failed and we were unable to recover it. 00:28:51.562 [2024-07-15 16:10:20.362366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.562 [2024-07-15 16:10:20.362383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420 00:28:51.562 qpair failed and we were unable to recover it. 00:28:51.562 [2024-07-15 16:10:20.362612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.562 [2024-07-15 16:10:20.362629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420 00:28:51.562 qpair failed and we were unable to recover it. 00:28:51.562 [2024-07-15 16:10:20.362830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.562 [2024-07-15 16:10:20.362846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420 00:28:51.562 qpair failed and we were unable to recover it. 00:28:51.562 [2024-07-15 16:10:20.362969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.562 [2024-07-15 16:10:20.362985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420 00:28:51.562 qpair failed and we were unable to recover it. 00:28:51.562 [2024-07-15 16:10:20.363170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.562 [2024-07-15 16:10:20.363187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420 00:28:51.562 qpair failed and we were unable to recover it. 00:28:51.562 [2024-07-15 16:10:20.363392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.562 [2024-07-15 16:10:20.363424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:51.562 qpair failed and we were unable to recover it. 00:28:51.562 [2024-07-15 16:10:20.363752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.562 [2024-07-15 16:10:20.363770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.562 qpair failed and we were unable to recover it. 
00:28:51.562 [2024-07-15 16:10:20.364002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.562 [2024-07-15 16:10:20.364013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.562 qpair failed and we were unable to recover it. 00:28:51.562 [2024-07-15 16:10:20.364178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.562 [2024-07-15 16:10:20.364189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.562 qpair failed and we were unable to recover it. 00:28:51.562 [2024-07-15 16:10:20.364305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.562 [2024-07-15 16:10:20.364317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.562 qpair failed and we were unable to recover it. 00:28:51.562 [2024-07-15 16:10:20.364534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.562 [2024-07-15 16:10:20.364546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.562 qpair failed and we were unable to recover it. 00:28:51.562 [2024-07-15 16:10:20.364702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.562 [2024-07-15 16:10:20.364713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.562 qpair failed and we were unable to recover it. 00:28:51.562 [2024-07-15 16:10:20.364899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.562 [2024-07-15 16:10:20.364911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.562 qpair failed and we were unable to recover it. 00:28:51.562 [2024-07-15 16:10:20.365017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.562 [2024-07-15 16:10:20.365029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.562 qpair failed and we were unable to recover it. 00:28:51.562 [2024-07-15 16:10:20.365142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.562 [2024-07-15 16:10:20.365153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.562 qpair failed and we were unable to recover it. 00:28:51.562 [2024-07-15 16:10:20.365255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.562 [2024-07-15 16:10:20.365267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.562 qpair failed and we were unable to recover it. 00:28:51.562 [2024-07-15 16:10:20.365432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.562 [2024-07-15 16:10:20.365444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.562 qpair failed and we were unable to recover it. 
00:28:51.562 [2024-07-15 16:10:20.365639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.562 [2024-07-15 16:10:20.365651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.562 qpair failed and we were unable to recover it. 00:28:51.562 [2024-07-15 16:10:20.365739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.562 [2024-07-15 16:10:20.365751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.562 qpair failed and we were unable to recover it. 00:28:51.562 [2024-07-15 16:10:20.365855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.562 [2024-07-15 16:10:20.365867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.562 qpair failed and we were unable to recover it. 00:28:51.562 [2024-07-15 16:10:20.366030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.562 [2024-07-15 16:10:20.366041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.562 qpair failed and we were unable to recover it. 00:28:51.562 [2024-07-15 16:10:20.366140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.562 [2024-07-15 16:10:20.366152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.562 qpair failed and we were unable to recover it. 00:28:51.562 [2024-07-15 16:10:20.366306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.562 [2024-07-15 16:10:20.366318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.562 qpair failed and we were unable to recover it. 00:28:51.562 [2024-07-15 16:10:20.366391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.562 [2024-07-15 16:10:20.366403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.562 qpair failed and we were unable to recover it. 00:28:51.562 [2024-07-15 16:10:20.366565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.562 [2024-07-15 16:10:20.366578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.562 qpair failed and we were unable to recover it. 00:28:51.562 [2024-07-15 16:10:20.366667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.562 [2024-07-15 16:10:20.366678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.562 qpair failed and we were unable to recover it. 00:28:51.563 [2024-07-15 16:10:20.366851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.563 [2024-07-15 16:10:20.366862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.563 qpair failed and we were unable to recover it. 
00:28:51.563 [2024-07-15 16:10:20.367043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.563 [2024-07-15 16:10:20.367055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.563 qpair failed and we were unable to recover it. 00:28:51.563 [2024-07-15 16:10:20.367211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.563 [2024-07-15 16:10:20.367223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.563 qpair failed and we were unable to recover it. 00:28:51.563 [2024-07-15 16:10:20.367394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.563 [2024-07-15 16:10:20.367406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.563 qpair failed and we were unable to recover it. 00:28:51.563 [2024-07-15 16:10:20.367509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.563 [2024-07-15 16:10:20.367522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.563 qpair failed and we were unable to recover it. 00:28:51.563 [2024-07-15 16:10:20.367712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.563 [2024-07-15 16:10:20.367723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.563 qpair failed and we were unable to recover it. 00:28:51.563 [2024-07-15 16:10:20.367842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.563 [2024-07-15 16:10:20.367854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.563 qpair failed and we were unable to recover it. 00:28:51.563 [2024-07-15 16:10:20.368039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.563 [2024-07-15 16:10:20.368050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.563 qpair failed and we were unable to recover it. 00:28:51.563 [2024-07-15 16:10:20.368278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.563 [2024-07-15 16:10:20.368290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.563 qpair failed and we were unable to recover it. 00:28:51.563 [2024-07-15 16:10:20.368475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.563 [2024-07-15 16:10:20.368486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.563 qpair failed and we were unable to recover it. 00:28:51.563 [2024-07-15 16:10:20.368657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.563 [2024-07-15 16:10:20.368669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.563 qpair failed and we were unable to recover it. 
00:28:51.563 [2024-07-15 16:10:20.368835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.563 [2024-07-15 16:10:20.368846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.563 qpair failed and we were unable to recover it. 00:28:51.563 [2024-07-15 16:10:20.369024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.563 [2024-07-15 16:10:20.369036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.563 qpair failed and we were unable to recover it. 00:28:51.563 [2024-07-15 16:10:20.369147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.563 [2024-07-15 16:10:20.369158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.563 qpair failed and we were unable to recover it. 00:28:51.563 [2024-07-15 16:10:20.369261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.563 [2024-07-15 16:10:20.369274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.563 qpair failed and we were unable to recover it. 00:28:51.563 [2024-07-15 16:10:20.369431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.563 [2024-07-15 16:10:20.369442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.563 qpair failed and we were unable to recover it. 00:28:51.563 [2024-07-15 16:10:20.369592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.563 [2024-07-15 16:10:20.369604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.563 qpair failed and we were unable to recover it. 00:28:51.563 [2024-07-15 16:10:20.369719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.563 [2024-07-15 16:10:20.369731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.563 qpair failed and we were unable to recover it. 00:28:51.563 [2024-07-15 16:10:20.369894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.563 [2024-07-15 16:10:20.369907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.563 qpair failed and we were unable to recover it. 00:28:51.563 [2024-07-15 16:10:20.370130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.563 [2024-07-15 16:10:20.370143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.563 qpair failed and we were unable to recover it. 00:28:51.563 [2024-07-15 16:10:20.370336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.563 [2024-07-15 16:10:20.370348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.563 qpair failed and we were unable to recover it. 
00:28:51.563 [2024-07-15 16:10:20.370498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.563 [2024-07-15 16:10:20.370509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.563 qpair failed and we were unable to recover it. 00:28:51.563 [2024-07-15 16:10:20.370676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.563 [2024-07-15 16:10:20.370688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.563 qpair failed and we were unable to recover it. 00:28:51.563 [2024-07-15 16:10:20.370937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.563 [2024-07-15 16:10:20.370949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.563 qpair failed and we were unable to recover it. 00:28:51.563 [2024-07-15 16:10:20.371119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.563 [2024-07-15 16:10:20.371131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.563 qpair failed and we were unable to recover it. 00:28:51.563 [2024-07-15 16:10:20.371376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.563 [2024-07-15 16:10:20.371388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.563 qpair failed and we were unable to recover it. 00:28:51.563 [2024-07-15 16:10:20.371554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.563 [2024-07-15 16:10:20.371566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.563 qpair failed and we were unable to recover it. 00:28:51.563 [2024-07-15 16:10:20.371746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.563 [2024-07-15 16:10:20.371759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.563 qpair failed and we were unable to recover it. 00:28:51.563 [2024-07-15 16:10:20.371979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.563 [2024-07-15 16:10:20.371990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.563 qpair failed and we were unable to recover it. 00:28:51.563 [2024-07-15 16:10:20.372216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.563 [2024-07-15 16:10:20.372241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.563 qpair failed and we were unable to recover it. 00:28:51.563 [2024-07-15 16:10:20.372423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.563 [2024-07-15 16:10:20.372435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.563 qpair failed and we were unable to recover it. 
00:28:51.563 [2024-07-15 16:10:20.372602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.563 [2024-07-15 16:10:20.372614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420
00:28:51.563 qpair failed and we were unable to recover it.
00:28:51.563 [... 2024-07-15 16:10:20.372807 through 16:10:20.405630: the same three-line sequence (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it.) repeats, with advancing timestamps only, for each remaining connection attempt in this window; duplicate records elided ...]
00:28:51.569 [2024-07-15 16:10:20.405728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.569 [2024-07-15 16:10:20.405739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.569 qpair failed and we were unable to recover it. 00:28:51.569 [2024-07-15 16:10:20.405854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.569 [2024-07-15 16:10:20.405866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.569 qpair failed and we were unable to recover it. 00:28:51.569 [2024-07-15 16:10:20.405975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.569 [2024-07-15 16:10:20.405987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.569 qpair failed and we were unable to recover it. 00:28:51.569 [2024-07-15 16:10:20.406169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.569 [2024-07-15 16:10:20.406181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.569 qpair failed and we were unable to recover it. 00:28:51.569 [2024-07-15 16:10:20.406406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.569 [2024-07-15 16:10:20.406418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.569 qpair failed and we were unable to recover it. 00:28:51.569 [2024-07-15 16:10:20.406573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.569 [2024-07-15 16:10:20.406584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.569 qpair failed and we were unable to recover it. 00:28:51.569 [2024-07-15 16:10:20.406702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.569 [2024-07-15 16:10:20.406713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.569 qpair failed and we were unable to recover it. 00:28:51.569 [2024-07-15 16:10:20.406958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.569 [2024-07-15 16:10:20.406970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.569 qpair failed and we were unable to recover it. 00:28:51.569 [2024-07-15 16:10:20.407140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.569 [2024-07-15 16:10:20.407152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.569 qpair failed and we were unable to recover it. 00:28:51.569 [2024-07-15 16:10:20.407428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.569 [2024-07-15 16:10:20.407439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.570 qpair failed and we were unable to recover it. 
00:28:51.570 [2024-07-15 16:10:20.407534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.570 [2024-07-15 16:10:20.407546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.570 qpair failed and we were unable to recover it. 00:28:51.570 [2024-07-15 16:10:20.407703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.570 [2024-07-15 16:10:20.407715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.570 qpair failed and we were unable to recover it. 00:28:51.570 [2024-07-15 16:10:20.407878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.570 [2024-07-15 16:10:20.407890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.570 qpair failed and we were unable to recover it. 00:28:51.570 [2024-07-15 16:10:20.407996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.570 [2024-07-15 16:10:20.408008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.570 qpair failed and we were unable to recover it. 00:28:51.570 [2024-07-15 16:10:20.408172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.570 [2024-07-15 16:10:20.408183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.570 qpair failed and we were unable to recover it. 00:28:51.570 [2024-07-15 16:10:20.408295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.570 [2024-07-15 16:10:20.408307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.570 qpair failed and we were unable to recover it. 00:28:51.570 [2024-07-15 16:10:20.408462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.570 [2024-07-15 16:10:20.408474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.570 qpair failed and we were unable to recover it. 00:28:51.570 [2024-07-15 16:10:20.408642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.570 [2024-07-15 16:10:20.408653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.570 qpair failed and we were unable to recover it. 00:28:51.570 [2024-07-15 16:10:20.408821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.570 [2024-07-15 16:10:20.408834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.570 qpair failed and we were unable to recover it. 00:28:51.570 [2024-07-15 16:10:20.408937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.570 [2024-07-15 16:10:20.408948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.570 qpair failed and we were unable to recover it. 
00:28:51.570 [2024-07-15 16:10:20.409065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.570 [2024-07-15 16:10:20.409077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.570 qpair failed and we were unable to recover it. 00:28:51.570 [2024-07-15 16:10:20.409296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.570 [2024-07-15 16:10:20.409308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.570 qpair failed and we were unable to recover it. 00:28:51.570 [2024-07-15 16:10:20.409467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.570 [2024-07-15 16:10:20.409479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.570 qpair failed and we were unable to recover it. 00:28:51.570 [2024-07-15 16:10:20.409581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.570 [2024-07-15 16:10:20.409593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.570 qpair failed and we were unable to recover it. 00:28:51.570 [2024-07-15 16:10:20.409759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.570 [2024-07-15 16:10:20.409770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.570 qpair failed and we were unable to recover it. 00:28:51.570 [2024-07-15 16:10:20.409871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.570 [2024-07-15 16:10:20.409883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.570 qpair failed and we were unable to recover it. 00:28:51.570 [2024-07-15 16:10:20.410048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.570 [2024-07-15 16:10:20.410059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.570 qpair failed and we were unable to recover it. 00:28:51.570 [2024-07-15 16:10:20.410159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.570 [2024-07-15 16:10:20.410171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.570 qpair failed and we were unable to recover it. 00:28:51.570 [2024-07-15 16:10:20.410262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.570 [2024-07-15 16:10:20.410274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.570 qpair failed and we were unable to recover it. 00:28:51.570 [2024-07-15 16:10:20.410378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.570 [2024-07-15 16:10:20.410389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.570 qpair failed and we were unable to recover it. 
00:28:51.570 [2024-07-15 16:10:20.410485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.570 [2024-07-15 16:10:20.410497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.570 qpair failed and we were unable to recover it. 00:28:51.570 [2024-07-15 16:10:20.410602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.570 [2024-07-15 16:10:20.410614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.570 qpair failed and we were unable to recover it. 00:28:51.570 [2024-07-15 16:10:20.410791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.570 [2024-07-15 16:10:20.410802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.570 qpair failed and we were unable to recover it. 00:28:51.570 [2024-07-15 16:10:20.410888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.570 [2024-07-15 16:10:20.410899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.570 qpair failed and we were unable to recover it. 00:28:51.570 [2024-07-15 16:10:20.411061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.570 [2024-07-15 16:10:20.411073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.570 qpair failed and we were unable to recover it. 00:28:51.570 [2024-07-15 16:10:20.411162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.570 [2024-07-15 16:10:20.411173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.570 qpair failed and we were unable to recover it. 00:28:51.570 [2024-07-15 16:10:20.411328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.570 [2024-07-15 16:10:20.411340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.570 qpair failed and we were unable to recover it. 00:28:51.570 [2024-07-15 16:10:20.411505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.571 [2024-07-15 16:10:20.411518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.571 qpair failed and we were unable to recover it. 00:28:51.571 [2024-07-15 16:10:20.411616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.571 [2024-07-15 16:10:20.411628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.571 qpair failed and we were unable to recover it. 00:28:51.571 [2024-07-15 16:10:20.411801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.571 [2024-07-15 16:10:20.411812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.571 qpair failed and we were unable to recover it. 
00:28:51.571 [2024-07-15 16:10:20.411933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.571 [2024-07-15 16:10:20.411945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.571 qpair failed and we were unable to recover it. 00:28:51.571 [2024-07-15 16:10:20.412045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.571 [2024-07-15 16:10:20.412057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.571 qpair failed and we were unable to recover it. 00:28:51.571 [2024-07-15 16:10:20.412289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.571 [2024-07-15 16:10:20.412301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.571 qpair failed and we were unable to recover it. 00:28:51.571 [2024-07-15 16:10:20.412471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.571 [2024-07-15 16:10:20.412483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.571 qpair failed and we were unable to recover it. 00:28:51.571 [2024-07-15 16:10:20.412583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.571 [2024-07-15 16:10:20.412594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.571 qpair failed and we were unable to recover it. 00:28:51.571 [2024-07-15 16:10:20.412754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.571 [2024-07-15 16:10:20.412765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.571 qpair failed and we were unable to recover it. 00:28:51.571 [2024-07-15 16:10:20.412868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.571 [2024-07-15 16:10:20.412880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.571 qpair failed and we were unable to recover it. 00:28:51.571 [2024-07-15 16:10:20.413075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.571 [2024-07-15 16:10:20.413086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.571 qpair failed and we were unable to recover it. 00:28:51.571 [2024-07-15 16:10:20.413314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.571 [2024-07-15 16:10:20.413326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.571 qpair failed and we were unable to recover it. 00:28:51.571 [2024-07-15 16:10:20.413556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.571 [2024-07-15 16:10:20.413569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.571 qpair failed and we were unable to recover it. 
00:28:51.571 [2024-07-15 16:10:20.413662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.571 [2024-07-15 16:10:20.413674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.571 qpair failed and we were unable to recover it. 00:28:51.571 [2024-07-15 16:10:20.413844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.571 [2024-07-15 16:10:20.413856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.571 qpair failed and we were unable to recover it. 00:28:51.571 [2024-07-15 16:10:20.413946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.571 [2024-07-15 16:10:20.413957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.571 qpair failed and we were unable to recover it. 00:28:51.571 [2024-07-15 16:10:20.414119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.571 [2024-07-15 16:10:20.414132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.571 qpair failed and we were unable to recover it. 00:28:51.571 [2024-07-15 16:10:20.414233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.571 [2024-07-15 16:10:20.414245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.571 qpair failed and we were unable to recover it. 00:28:51.571 [2024-07-15 16:10:20.414350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.571 [2024-07-15 16:10:20.414362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.571 qpair failed and we were unable to recover it. 00:28:51.571 [2024-07-15 16:10:20.414482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.571 [2024-07-15 16:10:20.414494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.571 qpair failed and we were unable to recover it. 00:28:51.571 [2024-07-15 16:10:20.414592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.571 [2024-07-15 16:10:20.414603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.571 qpair failed and we were unable to recover it. 00:28:51.571 [2024-07-15 16:10:20.414760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.571 [2024-07-15 16:10:20.414773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.571 qpair failed and we were unable to recover it. 00:28:51.571 [2024-07-15 16:10:20.414938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.571 [2024-07-15 16:10:20.414950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.571 qpair failed and we were unable to recover it. 
00:28:51.571 [2024-07-15 16:10:20.415107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.571 [2024-07-15 16:10:20.415119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.571 qpair failed and we were unable to recover it. 00:28:51.571 [2024-07-15 16:10:20.415350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.571 [2024-07-15 16:10:20.415362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.571 qpair failed and we were unable to recover it. 00:28:51.571 [2024-07-15 16:10:20.415612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.571 [2024-07-15 16:10:20.415624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.571 qpair failed and we were unable to recover it. 00:28:51.571 [2024-07-15 16:10:20.415695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.572 [2024-07-15 16:10:20.415706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.572 qpair failed and we were unable to recover it. 00:28:51.572 [2024-07-15 16:10:20.415907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.572 [2024-07-15 16:10:20.415918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.572 qpair failed and we were unable to recover it. 00:28:51.572 [2024-07-15 16:10:20.416088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.572 [2024-07-15 16:10:20.416099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.572 qpair failed and we were unable to recover it. 00:28:51.572 [2024-07-15 16:10:20.416218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.572 [2024-07-15 16:10:20.416235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.572 qpair failed and we were unable to recover it. 00:28:51.572 [2024-07-15 16:10:20.416388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.572 [2024-07-15 16:10:20.416400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.572 qpair failed and we were unable to recover it. 00:28:51.572 [2024-07-15 16:10:20.416569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.572 [2024-07-15 16:10:20.416581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.572 qpair failed and we were unable to recover it. 00:28:51.572 [2024-07-15 16:10:20.416825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.572 [2024-07-15 16:10:20.416837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.572 qpair failed and we were unable to recover it. 
00:28:51.572 [2024-07-15 16:10:20.417009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.572 [2024-07-15 16:10:20.417021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.572 qpair failed and we were unable to recover it. 00:28:51.572 [2024-07-15 16:10:20.417125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.572 [2024-07-15 16:10:20.417137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.572 qpair failed and we were unable to recover it. 00:28:51.572 [2024-07-15 16:10:20.417307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.572 [2024-07-15 16:10:20.417319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.572 qpair failed and we were unable to recover it. 00:28:51.572 [2024-07-15 16:10:20.417425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.572 [2024-07-15 16:10:20.417436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.572 qpair failed and we were unable to recover it. 00:28:51.572 [2024-07-15 16:10:20.417542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.572 [2024-07-15 16:10:20.417554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.573 qpair failed and we were unable to recover it. 00:28:51.573 [2024-07-15 16:10:20.417749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.573 [2024-07-15 16:10:20.417760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.573 qpair failed and we were unable to recover it. 00:28:51.573 [2024-07-15 16:10:20.417941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.573 [2024-07-15 16:10:20.417952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.573 qpair failed and we were unable to recover it. 00:28:51.573 [2024-07-15 16:10:20.418050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.573 [2024-07-15 16:10:20.418062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.573 qpair failed and we were unable to recover it. 00:28:51.573 [2024-07-15 16:10:20.418295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.573 [2024-07-15 16:10:20.418307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.573 qpair failed and we were unable to recover it. 00:28:51.573 [2024-07-15 16:10:20.418480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.573 [2024-07-15 16:10:20.418491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.573 qpair failed and we were unable to recover it. 
00:28:51.573 [2024-07-15 16:10:20.418663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.573 [2024-07-15 16:10:20.418674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.573 qpair failed and we were unable to recover it. 00:28:51.573 [2024-07-15 16:10:20.418877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.573 [2024-07-15 16:10:20.418889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.573 qpair failed and we were unable to recover it. 00:28:51.573 [2024-07-15 16:10:20.419131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.573 [2024-07-15 16:10:20.419144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.573 qpair failed and we were unable to recover it. 00:28:51.573 [2024-07-15 16:10:20.419246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.573 [2024-07-15 16:10:20.419257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.573 qpair failed and we were unable to recover it. 00:28:51.573 [2024-07-15 16:10:20.419374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.573 [2024-07-15 16:10:20.419387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.573 qpair failed and we were unable to recover it. 00:28:51.573 [2024-07-15 16:10:20.419610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.573 [2024-07-15 16:10:20.419621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.573 qpair failed and we were unable to recover it. 00:28:51.573 [2024-07-15 16:10:20.419794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.573 [2024-07-15 16:10:20.419806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.573 qpair failed and we were unable to recover it. 00:28:51.573 [2024-07-15 16:10:20.419965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.573 [2024-07-15 16:10:20.419976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.573 qpair failed and we were unable to recover it. 00:28:51.573 [2024-07-15 16:10:20.420165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.573 [2024-07-15 16:10:20.420177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.573 qpair failed and we were unable to recover it. 00:28:51.573 [2024-07-15 16:10:20.420401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.573 [2024-07-15 16:10:20.420413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.573 qpair failed and we were unable to recover it. 
00:28:51.573 [2024-07-15 16:10:20.420584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.573 [2024-07-15 16:10:20.420595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.573 qpair failed and we were unable to recover it. 00:28:51.573 [2024-07-15 16:10:20.420817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.574 [2024-07-15 16:10:20.420828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.574 qpair failed and we were unable to recover it. 00:28:51.574 [2024-07-15 16:10:20.420947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.574 [2024-07-15 16:10:20.420958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.574 qpair failed and we were unable to recover it. 00:28:51.574 [2024-07-15 16:10:20.421200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.574 [2024-07-15 16:10:20.421212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.574 qpair failed and we were unable to recover it. 00:28:51.574 [2024-07-15 16:10:20.421457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.574 [2024-07-15 16:10:20.421469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.574 qpair failed and we were unable to recover it. 00:28:51.574 [2024-07-15 16:10:20.421581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.574 [2024-07-15 16:10:20.421592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.574 qpair failed and we were unable to recover it. 00:28:51.574 [2024-07-15 16:10:20.421754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.574 [2024-07-15 16:10:20.421766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.574 qpair failed and we were unable to recover it. 00:28:51.574 [2024-07-15 16:10:20.421865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.574 [2024-07-15 16:10:20.421877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.574 qpair failed and we were unable to recover it. 00:28:51.574 [2024-07-15 16:10:20.422035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.574 [2024-07-15 16:10:20.422048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.574 qpair failed and we were unable to recover it. 00:28:51.574 [2024-07-15 16:10:20.422149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.574 [2024-07-15 16:10:20.422160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.574 qpair failed and we were unable to recover it. 
00:28:51.574 [2024-07-15 16:10:20.422315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.574 [2024-07-15 16:10:20.422327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.574 qpair failed and we were unable to recover it. 00:28:51.574 [2024-07-15 16:10:20.422495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.574 [2024-07-15 16:10:20.422507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.574 qpair failed and we were unable to recover it. 00:28:51.871 [2024-07-15 16:10:20.422599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.872 [2024-07-15 16:10:20.422611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.872 qpair failed and we were unable to recover it. 00:28:51.872 [2024-07-15 16:10:20.422715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.872 [2024-07-15 16:10:20.422727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.872 qpair failed and we were unable to recover it. 00:28:51.872 [2024-07-15 16:10:20.422892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.872 [2024-07-15 16:10:20.422904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.872 qpair failed and we were unable to recover it. 00:28:51.872 [2024-07-15 16:10:20.423072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.872 [2024-07-15 16:10:20.423083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.872 qpair failed and we were unable to recover it. 00:28:51.872 [2024-07-15 16:10:20.423306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.872 [2024-07-15 16:10:20.423319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.872 qpair failed and we were unable to recover it. 00:28:51.872 [2024-07-15 16:10:20.423472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.872 [2024-07-15 16:10:20.423485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.872 qpair failed and we were unable to recover it. 00:28:51.872 [2024-07-15 16:10:20.423666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.872 [2024-07-15 16:10:20.423678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.872 qpair failed and we were unable to recover it. 00:28:51.872 [2024-07-15 16:10:20.423844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.872 [2024-07-15 16:10:20.423856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.872 qpair failed and we were unable to recover it. 
00:28:51.872 [2024-07-15 16:10:20.423940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.872 [2024-07-15 16:10:20.423951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.872 qpair failed and we were unable to recover it. 00:28:51.872 [2024-07-15 16:10:20.424122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.872 [2024-07-15 16:10:20.424133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.872 qpair failed and we were unable to recover it. 00:28:51.872 [2024-07-15 16:10:20.424239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.872 [2024-07-15 16:10:20.424252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.872 qpair failed and we were unable to recover it. 00:28:51.872 [2024-07-15 16:10:20.424414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.872 [2024-07-15 16:10:20.424425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.872 qpair failed and we were unable to recover it. 00:28:51.872 [2024-07-15 16:10:20.424557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.872 [2024-07-15 16:10:20.424569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.872 qpair failed and we were unable to recover it. 00:28:51.872 [2024-07-15 16:10:20.424680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.872 [2024-07-15 16:10:20.424691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.872 qpair failed and we were unable to recover it. 00:28:51.872 [2024-07-15 16:10:20.424803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.872 [2024-07-15 16:10:20.424814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.872 qpair failed and we were unable to recover it. 00:28:51.872 [2024-07-15 16:10:20.424907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.872 [2024-07-15 16:10:20.424918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.872 qpair failed and we were unable to recover it. 00:28:51.872 [2024-07-15 16:10:20.425091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.872 [2024-07-15 16:10:20.425102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.872 qpair failed and we were unable to recover it. 00:28:51.872 [2024-07-15 16:10:20.425328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.872 [2024-07-15 16:10:20.425339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.872 qpair failed and we were unable to recover it. 
00:28:51.872 [2024-07-15 16:10:20.425429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.872 [2024-07-15 16:10:20.425440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.872 qpair failed and we were unable to recover it. 00:28:51.872 [2024-07-15 16:10:20.425521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.872 [2024-07-15 16:10:20.425532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.872 qpair failed and we were unable to recover it. 00:28:51.872 [2024-07-15 16:10:20.425650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.872 [2024-07-15 16:10:20.425661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.872 qpair failed and we were unable to recover it. 00:28:51.872 [2024-07-15 16:10:20.425766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.872 [2024-07-15 16:10:20.425777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.872 qpair failed and we were unable to recover it. 00:28:51.872 [2024-07-15 16:10:20.425940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.872 [2024-07-15 16:10:20.425951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.872 qpair failed and we were unable to recover it. 00:28:51.872 [2024-07-15 16:10:20.426069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.872 [2024-07-15 16:10:20.426081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.872 qpair failed and we were unable to recover it. 00:28:51.872 [2024-07-15 16:10:20.426182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.872 [2024-07-15 16:10:20.426195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.872 qpair failed and we were unable to recover it. 00:28:51.872 [2024-07-15 16:10:20.426368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.872 [2024-07-15 16:10:20.426380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.872 qpair failed and we were unable to recover it. 00:28:51.872 [2024-07-15 16:10:20.426479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.872 [2024-07-15 16:10:20.426490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.872 qpair failed and we were unable to recover it. 00:28:51.872 [2024-07-15 16:10:20.426593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.872 [2024-07-15 16:10:20.426604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.872 qpair failed and we were unable to recover it. 
00:28:51.872 [2024-07-15 16:10:20.426719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.872 [2024-07-15 16:10:20.426731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.872 qpair failed and we were unable to recover it.
00:28:51.872-00:28:51.878 [the three-line error above repeats continuously from 2024-07-15 16:10:20.426892 through 16:10:20.460929, differing only in timestamp: every connect() attempt to tqpair=0x7ffa54000b90 at 10.0.0.2, port=4420 fails with errno = 111 and the qpair is never recovered]
00:28:51.878 [2024-07-15 16:10:20.461039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.878 [2024-07-15 16:10:20.461049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.878 qpair failed and we were unable to recover it. 00:28:51.878 [2024-07-15 16:10:20.461139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.878 [2024-07-15 16:10:20.461148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.878 qpair failed and we were unable to recover it. 00:28:51.878 [2024-07-15 16:10:20.461309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.878 [2024-07-15 16:10:20.461320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.878 qpair failed and we were unable to recover it. 00:28:51.878 [2024-07-15 16:10:20.461423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.878 [2024-07-15 16:10:20.461433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.878 qpair failed and we were unable to recover it. 00:28:51.878 [2024-07-15 16:10:20.461601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.878 [2024-07-15 16:10:20.461610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.878 qpair failed and we were unable to recover it. 00:28:51.878 [2024-07-15 16:10:20.461704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.878 [2024-07-15 16:10:20.461714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.878 qpair failed and we were unable to recover it. 00:28:51.878 [2024-07-15 16:10:20.461832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.879 [2024-07-15 16:10:20.461842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.879 qpair failed and we were unable to recover it. 00:28:51.879 [2024-07-15 16:10:20.462022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.879 [2024-07-15 16:10:20.462032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.879 qpair failed and we were unable to recover it. 00:28:51.879 [2024-07-15 16:10:20.462135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.879 [2024-07-15 16:10:20.462145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.879 qpair failed and we were unable to recover it. 00:28:51.879 [2024-07-15 16:10:20.462365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.879 [2024-07-15 16:10:20.462376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.879 qpair failed and we were unable to recover it. 
00:28:51.879 [2024-07-15 16:10:20.462535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.879 [2024-07-15 16:10:20.462545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.879 qpair failed and we were unable to recover it. 00:28:51.879 [2024-07-15 16:10:20.462701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.879 [2024-07-15 16:10:20.462711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.879 qpair failed and we were unable to recover it. 00:28:51.879 [2024-07-15 16:10:20.462884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.879 [2024-07-15 16:10:20.462894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.879 qpair failed and we were unable to recover it. 00:28:51.879 [2024-07-15 16:10:20.463152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.879 [2024-07-15 16:10:20.463162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.879 qpair failed and we were unable to recover it. 00:28:51.879 [2024-07-15 16:10:20.463274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.879 [2024-07-15 16:10:20.463285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.879 qpair failed and we were unable to recover it. 00:28:51.879 [2024-07-15 16:10:20.463444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.879 [2024-07-15 16:10:20.463454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.879 qpair failed and we were unable to recover it. 00:28:51.879 [2024-07-15 16:10:20.463626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.879 [2024-07-15 16:10:20.463636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.879 qpair failed and we were unable to recover it. 00:28:51.879 [2024-07-15 16:10:20.463738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.879 [2024-07-15 16:10:20.463749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.879 qpair failed and we were unable to recover it. 00:28:51.879 [2024-07-15 16:10:20.463851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.879 [2024-07-15 16:10:20.463861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.879 qpair failed and we were unable to recover it. 00:28:51.879 [2024-07-15 16:10:20.463974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.879 [2024-07-15 16:10:20.463984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.879 qpair failed and we were unable to recover it. 
00:28:51.879 [2024-07-15 16:10:20.464150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.879 [2024-07-15 16:10:20.464160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.879 qpair failed and we were unable to recover it. 00:28:51.879 [2024-07-15 16:10:20.464270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.879 [2024-07-15 16:10:20.464281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.879 qpair failed and we were unable to recover it. 00:28:51.879 [2024-07-15 16:10:20.464398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.879 [2024-07-15 16:10:20.464408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.879 qpair failed and we were unable to recover it. 00:28:51.879 [2024-07-15 16:10:20.464541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.879 [2024-07-15 16:10:20.464551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.879 qpair failed and we were unable to recover it. 00:28:51.879 [2024-07-15 16:10:20.464750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.879 [2024-07-15 16:10:20.464760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.879 qpair failed and we were unable to recover it. 00:28:51.879 [2024-07-15 16:10:20.464874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.879 [2024-07-15 16:10:20.464886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.879 qpair failed and we were unable to recover it. 00:28:51.879 [2024-07-15 16:10:20.465081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.879 [2024-07-15 16:10:20.465091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.879 qpair failed and we were unable to recover it. 00:28:51.879 [2024-07-15 16:10:20.465185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.879 [2024-07-15 16:10:20.465195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.879 qpair failed and we were unable to recover it. 00:28:51.879 [2024-07-15 16:10:20.465290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.879 [2024-07-15 16:10:20.465301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.879 qpair failed and we were unable to recover it. 00:28:51.879 [2024-07-15 16:10:20.465507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.879 [2024-07-15 16:10:20.465517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.879 qpair failed and we were unable to recover it. 
00:28:51.879 [2024-07-15 16:10:20.465636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.879 [2024-07-15 16:10:20.465646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.879 qpair failed and we were unable to recover it. 00:28:51.879 [2024-07-15 16:10:20.465755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.879 [2024-07-15 16:10:20.465765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.879 qpair failed and we were unable to recover it. 00:28:51.879 [2024-07-15 16:10:20.465949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.879 [2024-07-15 16:10:20.465959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.879 qpair failed and we were unable to recover it. 00:28:51.879 [2024-07-15 16:10:20.466082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.879 [2024-07-15 16:10:20.466092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.879 qpair failed and we were unable to recover it. 00:28:51.879 [2024-07-15 16:10:20.466192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.879 [2024-07-15 16:10:20.466202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.879 qpair failed and we were unable to recover it. 00:28:51.879 [2024-07-15 16:10:20.466338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.879 [2024-07-15 16:10:20.466349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.879 qpair failed and we were unable to recover it. 00:28:51.879 [2024-07-15 16:10:20.466605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.879 [2024-07-15 16:10:20.466614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.879 qpair failed and we were unable to recover it. 00:28:51.879 [2024-07-15 16:10:20.466771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.879 [2024-07-15 16:10:20.466781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.879 qpair failed and we were unable to recover it. 00:28:51.879 [2024-07-15 16:10:20.466878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.879 [2024-07-15 16:10:20.466888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.879 qpair failed and we were unable to recover it. 00:28:51.879 [2024-07-15 16:10:20.467081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.879 [2024-07-15 16:10:20.467091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.879 qpair failed and we were unable to recover it. 
00:28:51.879 [2024-07-15 16:10:20.467155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.879 [2024-07-15 16:10:20.467165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.879 qpair failed and we were unable to recover it. 00:28:51.879 [2024-07-15 16:10:20.467317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.879 [2024-07-15 16:10:20.467327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.879 qpair failed and we were unable to recover it. 00:28:51.879 [2024-07-15 16:10:20.467514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.879 [2024-07-15 16:10:20.467524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.879 qpair failed and we were unable to recover it. 00:28:51.879 [2024-07-15 16:10:20.467637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.879 [2024-07-15 16:10:20.467647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.879 qpair failed and we were unable to recover it. 00:28:51.879 [2024-07-15 16:10:20.467760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.879 [2024-07-15 16:10:20.467769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.879 qpair failed and we were unable to recover it. 00:28:51.879 [2024-07-15 16:10:20.467990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.879 [2024-07-15 16:10:20.468000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.879 qpair failed and we were unable to recover it. 00:28:51.879 [2024-07-15 16:10:20.468117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.879 [2024-07-15 16:10:20.468127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.880 qpair failed and we were unable to recover it. 00:28:51.880 [2024-07-15 16:10:20.468216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.880 [2024-07-15 16:10:20.468230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.880 qpair failed and we were unable to recover it. 00:28:51.880 [2024-07-15 16:10:20.468403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.880 [2024-07-15 16:10:20.468413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.880 qpair failed and we were unable to recover it. 00:28:51.880 [2024-07-15 16:10:20.468515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.880 [2024-07-15 16:10:20.468525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.880 qpair failed and we were unable to recover it. 
00:28:51.880 [2024-07-15 16:10:20.468693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.880 [2024-07-15 16:10:20.468703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.880 qpair failed and we were unable to recover it. 00:28:51.880 [2024-07-15 16:10:20.468803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.880 [2024-07-15 16:10:20.468814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.880 qpair failed and we were unable to recover it. 00:28:51.880 [2024-07-15 16:10:20.469008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.880 [2024-07-15 16:10:20.469017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.880 qpair failed and we were unable to recover it. 00:28:51.880 [2024-07-15 16:10:20.469180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.880 [2024-07-15 16:10:20.469189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.880 qpair failed and we were unable to recover it. 00:28:51.880 [2024-07-15 16:10:20.469291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.880 [2024-07-15 16:10:20.469302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.880 qpair failed and we were unable to recover it. 00:28:51.880 [2024-07-15 16:10:20.469455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.880 [2024-07-15 16:10:20.469465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.880 qpair failed and we were unable to recover it. 00:28:51.880 [2024-07-15 16:10:20.469552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.880 [2024-07-15 16:10:20.469562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.880 qpair failed and we were unable to recover it. 00:28:51.880 [2024-07-15 16:10:20.469667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.880 [2024-07-15 16:10:20.469677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.880 qpair failed and we were unable to recover it. 00:28:51.880 [2024-07-15 16:10:20.469793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.880 [2024-07-15 16:10:20.469803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.880 qpair failed and we were unable to recover it. 00:28:51.880 [2024-07-15 16:10:20.469973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.880 [2024-07-15 16:10:20.469982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.880 qpair failed and we were unable to recover it. 
00:28:51.880 [2024-07-15 16:10:20.470085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.880 [2024-07-15 16:10:20.470095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.880 qpair failed and we were unable to recover it. 00:28:51.880 [2024-07-15 16:10:20.470281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.880 [2024-07-15 16:10:20.470291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.880 qpair failed and we were unable to recover it. 00:28:51.880 [2024-07-15 16:10:20.470449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.880 [2024-07-15 16:10:20.470459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.880 qpair failed and we were unable to recover it. 00:28:51.880 [2024-07-15 16:10:20.470569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.880 [2024-07-15 16:10:20.470579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.880 qpair failed and we were unable to recover it. 00:28:51.880 [2024-07-15 16:10:20.470684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.880 [2024-07-15 16:10:20.470694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.880 qpair failed and we were unable to recover it. 00:28:51.880 [2024-07-15 16:10:20.470791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.880 [2024-07-15 16:10:20.470808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.880 qpair failed and we were unable to recover it. 00:28:51.880 [2024-07-15 16:10:20.470909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.880 [2024-07-15 16:10:20.470919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.880 qpair failed and we were unable to recover it. 00:28:51.880 [2024-07-15 16:10:20.471078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.880 [2024-07-15 16:10:20.471088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.880 qpair failed and we were unable to recover it. 00:28:51.880 [2024-07-15 16:10:20.471197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.880 [2024-07-15 16:10:20.471207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.880 qpair failed and we were unable to recover it. 00:28:51.880 [2024-07-15 16:10:20.471436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.880 [2024-07-15 16:10:20.471447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.880 qpair failed and we were unable to recover it. 
00:28:51.880 [2024-07-15 16:10:20.471620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.880 [2024-07-15 16:10:20.471630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.880 qpair failed and we were unable to recover it. 00:28:51.880 [2024-07-15 16:10:20.471878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.880 [2024-07-15 16:10:20.471888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.880 qpair failed and we were unable to recover it. 00:28:51.880 [2024-07-15 16:10:20.472043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.880 [2024-07-15 16:10:20.472053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.880 qpair failed and we were unable to recover it. 00:28:51.880 [2024-07-15 16:10:20.472241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.880 [2024-07-15 16:10:20.472254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.880 qpair failed and we were unable to recover it. 00:28:51.880 [2024-07-15 16:10:20.472364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.880 [2024-07-15 16:10:20.472375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.880 qpair failed and we were unable to recover it. 00:28:51.880 [2024-07-15 16:10:20.472582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.880 [2024-07-15 16:10:20.472592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.880 qpair failed and we were unable to recover it. 00:28:51.880 [2024-07-15 16:10:20.472769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.880 [2024-07-15 16:10:20.472779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.880 qpair failed and we were unable to recover it. 00:28:51.880 [2024-07-15 16:10:20.472949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.880 [2024-07-15 16:10:20.472959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.880 qpair failed and we were unable to recover it. 00:28:51.880 [2024-07-15 16:10:20.473079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.880 [2024-07-15 16:10:20.473088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.880 qpair failed and we were unable to recover it. 00:28:51.880 [2024-07-15 16:10:20.473195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.880 [2024-07-15 16:10:20.473206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.880 qpair failed and we were unable to recover it. 
00:28:51.880 [2024-07-15 16:10:20.473454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.880 [2024-07-15 16:10:20.473465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.880 qpair failed and we were unable to recover it. 00:28:51.880 [2024-07-15 16:10:20.473545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.880 [2024-07-15 16:10:20.473555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.880 qpair failed and we were unable to recover it. 00:28:51.880 [2024-07-15 16:10:20.473711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.880 [2024-07-15 16:10:20.473721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.880 qpair failed and we were unable to recover it. 00:28:51.880 [2024-07-15 16:10:20.473821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.880 [2024-07-15 16:10:20.473831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.880 qpair failed and we were unable to recover it. 00:28:51.880 [2024-07-15 16:10:20.473986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.880 [2024-07-15 16:10:20.473995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.880 qpair failed and we were unable to recover it. 00:28:51.880 [2024-07-15 16:10:20.474160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.880 [2024-07-15 16:10:20.474169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.880 qpair failed and we were unable to recover it. 00:28:51.880 [2024-07-15 16:10:20.474273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.880 [2024-07-15 16:10:20.474284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.880 qpair failed and we were unable to recover it. 00:28:51.881 [2024-07-15 16:10:20.474383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.881 [2024-07-15 16:10:20.474394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.881 qpair failed and we were unable to recover it. 00:28:51.881 [2024-07-15 16:10:20.474491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.881 [2024-07-15 16:10:20.474500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.881 qpair failed and we were unable to recover it. 00:28:51.881 [2024-07-15 16:10:20.474766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.881 [2024-07-15 16:10:20.474776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.881 qpair failed and we were unable to recover it. 
00:28:51.881 [2024-07-15 16:10:20.474892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.881 [2024-07-15 16:10:20.474902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.881 qpair failed and we were unable to recover it. 00:28:51.881 [2024-07-15 16:10:20.475059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.881 [2024-07-15 16:10:20.475068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.881 qpair failed and we were unable to recover it. 00:28:51.881 [2024-07-15 16:10:20.475162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.881 [2024-07-15 16:10:20.475171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.881 qpair failed and we were unable to recover it. 00:28:51.881 [2024-07-15 16:10:20.475295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.881 [2024-07-15 16:10:20.475309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.881 qpair failed and we were unable to recover it. 00:28:51.881 [2024-07-15 16:10:20.475416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.881 [2024-07-15 16:10:20.475431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.881 qpair failed and we were unable to recover it. 00:28:51.881 [2024-07-15 16:10:20.475635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.881 [2024-07-15 16:10:20.475649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.881 qpair failed and we were unable to recover it. 00:28:51.881 [2024-07-15 16:10:20.475744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.881 [2024-07-15 16:10:20.475758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.881 qpair failed and we were unable to recover it. 00:28:51.881 [2024-07-15 16:10:20.475983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.881 [2024-07-15 16:10:20.475992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.881 qpair failed and we were unable to recover it. 00:28:51.881 [2024-07-15 16:10:20.476180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.881 [2024-07-15 16:10:20.476191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.881 qpair failed and we were unable to recover it. 00:28:51.881 [2024-07-15 16:10:20.476378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.881 [2024-07-15 16:10:20.476393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.881 qpair failed and we were unable to recover it. 
00:28:51.881 [2024-07-15 16:10:20.476616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.881 [2024-07-15 16:10:20.476626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.881 qpair failed and we were unable to recover it. 00:28:51.881 [2024-07-15 16:10:20.476782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.881 [2024-07-15 16:10:20.476792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.881 qpair failed and we were unable to recover it. 00:28:51.881 [2024-07-15 16:10:20.476910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.881 [2024-07-15 16:10:20.476921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.881 qpair failed and we were unable to recover it. 00:28:51.881 [2024-07-15 16:10:20.477101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.881 [2024-07-15 16:10:20.477111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.881 qpair failed and we were unable to recover it. 00:28:51.881 [2024-07-15 16:10:20.477278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.881 [2024-07-15 16:10:20.477288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.881 qpair failed and we were unable to recover it. 00:28:51.881 [2024-07-15 16:10:20.477534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.881 [2024-07-15 16:10:20.477546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.881 qpair failed and we were unable to recover it. 00:28:51.881 [2024-07-15 16:10:20.477640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.881 [2024-07-15 16:10:20.477650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.881 qpair failed and we were unable to recover it. 00:28:51.881 [2024-07-15 16:10:20.477764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.881 [2024-07-15 16:10:20.477774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.881 qpair failed and we were unable to recover it. 00:28:51.881 [2024-07-15 16:10:20.477950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.881 [2024-07-15 16:10:20.477960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.881 qpair failed and we were unable to recover it. 00:28:51.881 [2024-07-15 16:10:20.478122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.881 [2024-07-15 16:10:20.478132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.881 qpair failed and we were unable to recover it. 
00:28:51.881 [2024-07-15 16:10:20.478238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.881 [2024-07-15 16:10:20.478248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.881 qpair failed and we were unable to recover it. 00:28:51.881 [2024-07-15 16:10:20.478370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.881 [2024-07-15 16:10:20.478380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.881 qpair failed and we were unable to recover it. 00:28:51.881 [2024-07-15 16:10:20.478558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.881 [2024-07-15 16:10:20.478569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.881 qpair failed and we were unable to recover it. 00:28:51.881 [2024-07-15 16:10:20.478656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.881 [2024-07-15 16:10:20.478666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.881 qpair failed and we were unable to recover it. 00:28:51.881 [2024-07-15 16:10:20.478771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.881 [2024-07-15 16:10:20.478781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.881 qpair failed and we were unable to recover it. 00:28:51.881 [2024-07-15 16:10:20.478884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.881 [2024-07-15 16:10:20.478894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.881 qpair failed and we were unable to recover it. 00:28:51.881 [2024-07-15 16:10:20.478997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.881 [2024-07-15 16:10:20.479007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.881 qpair failed and we were unable to recover it. 00:28:51.881 [2024-07-15 16:10:20.479105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.881 [2024-07-15 16:10:20.479115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.881 qpair failed and we were unable to recover it. 00:28:51.881 [2024-07-15 16:10:20.479269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.881 [2024-07-15 16:10:20.479282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.881 qpair failed and we were unable to recover it. 00:28:51.881 [2024-07-15 16:10:20.479461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.881 [2024-07-15 16:10:20.479471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.881 qpair failed and we were unable to recover it. 
00:28:51.881 [2024-07-15 16:10:20.479651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.881 [2024-07-15 16:10:20.479661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.881 qpair failed and we were unable to recover it. 00:28:51.881 [2024-07-15 16:10:20.479818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.881 [2024-07-15 16:10:20.479828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.882 qpair failed and we were unable to recover it. 00:28:51.882 [2024-07-15 16:10:20.480001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.882 [2024-07-15 16:10:20.480012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.882 qpair failed and we were unable to recover it. 00:28:51.882 [2024-07-15 16:10:20.480240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.882 [2024-07-15 16:10:20.480251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.882 qpair failed and we were unable to recover it. 00:28:51.882 [2024-07-15 16:10:20.480397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.882 [2024-07-15 16:10:20.480409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.882 qpair failed and we were unable to recover it. 00:28:51.882 [2024-07-15 16:10:20.480562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.882 [2024-07-15 16:10:20.480572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.882 qpair failed and we were unable to recover it. 00:28:51.882 [2024-07-15 16:10:20.480673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.882 [2024-07-15 16:10:20.480683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.882 qpair failed and we were unable to recover it. 00:28:51.882 [2024-07-15 16:10:20.480787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.882 [2024-07-15 16:10:20.480797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.882 qpair failed and we were unable to recover it. 00:28:51.882 [2024-07-15 16:10:20.481041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.882 [2024-07-15 16:10:20.481051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.882 qpair failed and we were unable to recover it. 00:28:51.882 [2024-07-15 16:10:20.481149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.882 [2024-07-15 16:10:20.481159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.882 qpair failed and we were unable to recover it. 
00:28:51.882 [2024-07-15 16:10:20.481268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.882 [2024-07-15 16:10:20.481279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420
00:28:51.882 qpair failed and we were unable to recover it.
[... the same three-message sequence -- posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it." -- repeats back-to-back with only the timestamps advancing, from 16:10:20.481268 through 16:10:20.510646 (console time 00:28:51.882 through 00:28:51.887) ...]
00:28:51.887 [2024-07-15 16:10:20.510735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.887 [2024-07-15 16:10:20.510745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.887 qpair failed and we were unable to recover it. 00:28:51.887 [2024-07-15 16:10:20.510840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.887 [2024-07-15 16:10:20.510849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.887 qpair failed and we were unable to recover it. 00:28:51.887 [2024-07-15 16:10:20.510948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.887 [2024-07-15 16:10:20.510958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.887 qpair failed and we were unable to recover it. 00:28:51.887 [2024-07-15 16:10:20.511040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.887 [2024-07-15 16:10:20.511049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.887 qpair failed and we were unable to recover it. 00:28:51.887 [2024-07-15 16:10:20.511134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.887 [2024-07-15 16:10:20.511144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.887 qpair failed and we were unable to recover it. 00:28:51.887 [2024-07-15 16:10:20.511240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.887 [2024-07-15 16:10:20.511253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.887 qpair failed and we were unable to recover it. 00:28:51.887 [2024-07-15 16:10:20.511355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.887 [2024-07-15 16:10:20.511365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.887 qpair failed and we were unable to recover it. 00:28:51.887 [2024-07-15 16:10:20.511544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.887 [2024-07-15 16:10:20.511554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.887 qpair failed and we were unable to recover it. 00:28:51.887 [2024-07-15 16:10:20.511646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.887 [2024-07-15 16:10:20.511655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.887 qpair failed and we were unable to recover it. 00:28:51.887 [2024-07-15 16:10:20.511762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.887 [2024-07-15 16:10:20.511772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.887 qpair failed and we were unable to recover it. 
00:28:51.887 [2024-07-15 16:10:20.511861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.887 [2024-07-15 16:10:20.511871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.887 qpair failed and we were unable to recover it. 00:28:51.887 [2024-07-15 16:10:20.512089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.887 [2024-07-15 16:10:20.512099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.887 qpair failed and we were unable to recover it. 00:28:51.887 [2024-07-15 16:10:20.512266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.887 [2024-07-15 16:10:20.512281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.888 qpair failed and we were unable to recover it. 00:28:51.888 [2024-07-15 16:10:20.512481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.888 [2024-07-15 16:10:20.512492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.888 qpair failed and we were unable to recover it. 00:28:51.888 [2024-07-15 16:10:20.512595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.888 [2024-07-15 16:10:20.512605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.888 qpair failed and we were unable to recover it. 00:28:51.888 [2024-07-15 16:10:20.512771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.888 [2024-07-15 16:10:20.512781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.888 qpair failed and we were unable to recover it. 00:28:51.888 [2024-07-15 16:10:20.512948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.888 [2024-07-15 16:10:20.512958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.888 qpair failed and we were unable to recover it. 00:28:51.888 [2024-07-15 16:10:20.513064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.888 [2024-07-15 16:10:20.513074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.888 qpair failed and we were unable to recover it. 00:28:51.888 [2024-07-15 16:10:20.513195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.888 [2024-07-15 16:10:20.513206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.888 qpair failed and we were unable to recover it. 00:28:51.888 [2024-07-15 16:10:20.513319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.888 [2024-07-15 16:10:20.513330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.888 qpair failed and we were unable to recover it. 
00:28:51.888 [2024-07-15 16:10:20.513429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.888 [2024-07-15 16:10:20.513456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.888 qpair failed and we were unable to recover it. 00:28:51.888 [2024-07-15 16:10:20.513619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.888 [2024-07-15 16:10:20.513629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.888 qpair failed and we were unable to recover it. 00:28:51.888 [2024-07-15 16:10:20.513787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.888 [2024-07-15 16:10:20.513797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.888 qpair failed and we were unable to recover it. 00:28:51.888 [2024-07-15 16:10:20.514022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.888 [2024-07-15 16:10:20.514032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.888 qpair failed and we were unable to recover it. 00:28:51.888 [2024-07-15 16:10:20.514190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.888 [2024-07-15 16:10:20.514200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.888 qpair failed and we were unable to recover it. 00:28:51.888 [2024-07-15 16:10:20.514309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.888 [2024-07-15 16:10:20.514320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.888 qpair failed and we were unable to recover it. 00:28:51.888 [2024-07-15 16:10:20.514423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.888 [2024-07-15 16:10:20.514434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.888 qpair failed and we were unable to recover it. 00:28:51.888 [2024-07-15 16:10:20.514599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.888 [2024-07-15 16:10:20.514609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.888 qpair failed and we were unable to recover it. 00:28:51.888 [2024-07-15 16:10:20.514772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.888 [2024-07-15 16:10:20.514782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.888 qpair failed and we were unable to recover it. 00:28:51.888 [2024-07-15 16:10:20.514954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.888 [2024-07-15 16:10:20.514964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.888 qpair failed and we were unable to recover it. 
00:28:51.888 [2024-07-15 16:10:20.515126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.888 [2024-07-15 16:10:20.515136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.888 qpair failed and we were unable to recover it. 00:28:51.888 [2024-07-15 16:10:20.515241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.888 [2024-07-15 16:10:20.515252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.888 qpair failed and we were unable to recover it. 00:28:51.888 [2024-07-15 16:10:20.515373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.888 [2024-07-15 16:10:20.515405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:51.888 qpair failed and we were unable to recover it. 00:28:51.888 [2024-07-15 16:10:20.515582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.888 [2024-07-15 16:10:20.515597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:51.888 qpair failed and we were unable to recover it. 00:28:51.888 [2024-07-15 16:10:20.515714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.888 [2024-07-15 16:10:20.515729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:51.888 qpair failed and we were unable to recover it. 00:28:51.888 [2024-07-15 16:10:20.515911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.888 [2024-07-15 16:10:20.515925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:51.888 qpair failed and we were unable to recover it. 00:28:51.888 [2024-07-15 16:10:20.516033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.888 [2024-07-15 16:10:20.516047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:51.888 qpair failed and we were unable to recover it. 00:28:51.888 [2024-07-15 16:10:20.516168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.888 [2024-07-15 16:10:20.516181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:51.888 qpair failed and we were unable to recover it. 00:28:51.888 [2024-07-15 16:10:20.516289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.888 [2024-07-15 16:10:20.516304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:51.888 qpair failed and we were unable to recover it. 00:28:51.888 [2024-07-15 16:10:20.516419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.888 [2024-07-15 16:10:20.516433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:51.888 qpair failed and we were unable to recover it. 
00:28:51.888 [2024-07-15 16:10:20.516615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.888 [2024-07-15 16:10:20.516629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:51.888 qpair failed and we were unable to recover it. 00:28:51.888 [2024-07-15 16:10:20.516732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.888 [2024-07-15 16:10:20.516750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:51.888 qpair failed and we were unable to recover it. 00:28:51.888 [2024-07-15 16:10:20.516852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.888 [2024-07-15 16:10:20.516870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:51.888 qpair failed and we were unable to recover it. 00:28:51.888 [2024-07-15 16:10:20.517031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.888 [2024-07-15 16:10:20.517045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:51.888 qpair failed and we were unable to recover it. 00:28:51.888 [2024-07-15 16:10:20.517207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.888 [2024-07-15 16:10:20.517221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:51.888 qpair failed and we were unable to recover it. 00:28:51.888 [2024-07-15 16:10:20.517337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.888 [2024-07-15 16:10:20.517352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:51.888 qpair failed and we were unable to recover it. 00:28:51.888 [2024-07-15 16:10:20.517476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.888 [2024-07-15 16:10:20.517490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:51.888 qpair failed and we were unable to recover it. 00:28:51.888 [2024-07-15 16:10:20.517654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.888 [2024-07-15 16:10:20.517668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:51.888 qpair failed and we were unable to recover it. 00:28:51.888 [2024-07-15 16:10:20.517830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.888 [2024-07-15 16:10:20.517844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:51.888 qpair failed and we were unable to recover it. 00:28:51.888 [2024-07-15 16:10:20.518024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.888 [2024-07-15 16:10:20.518038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:51.888 qpair failed and we were unable to recover it. 
00:28:51.888 [2024-07-15 16:10:20.518215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.888 [2024-07-15 16:10:20.518234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:51.888 qpair failed and we were unable to recover it. 00:28:51.888 [2024-07-15 16:10:20.518332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.888 [2024-07-15 16:10:20.518346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:51.888 qpair failed and we were unable to recover it. 00:28:51.888 [2024-07-15 16:10:20.518459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.888 [2024-07-15 16:10:20.518473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:51.888 qpair failed and we were unable to recover it. 00:28:51.889 [2024-07-15 16:10:20.518576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.889 [2024-07-15 16:10:20.518590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:51.889 qpair failed and we were unable to recover it. 00:28:51.889 [2024-07-15 16:10:20.518751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.889 [2024-07-15 16:10:20.518764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:51.889 qpair failed and we were unable to recover it. 00:28:51.889 [2024-07-15 16:10:20.518861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.889 [2024-07-15 16:10:20.518875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:51.889 qpair failed and we were unable to recover it. 00:28:51.889 [2024-07-15 16:10:20.519106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.889 [2024-07-15 16:10:20.519120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:51.889 qpair failed and we were unable to recover it. 00:28:51.889 [2024-07-15 16:10:20.519236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.889 [2024-07-15 16:10:20.519250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:51.889 qpair failed and we were unable to recover it. 00:28:51.889 [2024-07-15 16:10:20.519417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.889 [2024-07-15 16:10:20.519430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:51.889 qpair failed and we were unable to recover it. 00:28:51.889 [2024-07-15 16:10:20.519541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.889 [2024-07-15 16:10:20.519556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:51.889 qpair failed and we were unable to recover it. 
00:28:51.889 [2024-07-15 16:10:20.519681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.889 [2024-07-15 16:10:20.519694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:51.889 qpair failed and we were unable to recover it. 00:28:51.889 [2024-07-15 16:10:20.519860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.889 [2024-07-15 16:10:20.519874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:51.889 qpair failed and we were unable to recover it. 00:28:51.889 [2024-07-15 16:10:20.520038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.889 [2024-07-15 16:10:20.520051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:51.889 qpair failed and we were unable to recover it. 00:28:51.889 [2024-07-15 16:10:20.520170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.889 [2024-07-15 16:10:20.520184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:51.889 qpair failed and we were unable to recover it. 00:28:51.889 [2024-07-15 16:10:20.520282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.889 [2024-07-15 16:10:20.520296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:51.889 qpair failed and we were unable to recover it. 00:28:51.889 [2024-07-15 16:10:20.520400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.889 [2024-07-15 16:10:20.520414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:51.889 qpair failed and we were unable to recover it. 00:28:51.889 [2024-07-15 16:10:20.520589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.889 [2024-07-15 16:10:20.520602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:51.889 qpair failed and we were unable to recover it. 00:28:51.889 [2024-07-15 16:10:20.520775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.889 [2024-07-15 16:10:20.520789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:51.889 qpair failed and we were unable to recover it. 00:28:51.889 [2024-07-15 16:10:20.520893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.889 [2024-07-15 16:10:20.520906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:51.889 qpair failed and we were unable to recover it. 00:28:51.889 [2024-07-15 16:10:20.521048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.889 [2024-07-15 16:10:20.521061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:51.889 qpair failed and we were unable to recover it. 
00:28:51.889 [2024-07-15 16:10:20.521176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.889 [2024-07-15 16:10:20.521190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:51.889 qpair failed and we were unable to recover it. 00:28:51.889 [2024-07-15 16:10:20.521355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.889 [2024-07-15 16:10:20.521368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:51.889 qpair failed and we were unable to recover it. 00:28:51.889 [2024-07-15 16:10:20.521511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.889 [2024-07-15 16:10:20.521525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:51.889 qpair failed and we were unable to recover it. 00:28:51.889 [2024-07-15 16:10:20.521685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.889 [2024-07-15 16:10:20.521699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:51.889 qpair failed and we were unable to recover it. 00:28:51.889 [2024-07-15 16:10:20.521861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.889 [2024-07-15 16:10:20.521874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:51.889 qpair failed and we were unable to recover it. 00:28:51.889 [2024-07-15 16:10:20.522107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.889 [2024-07-15 16:10:20.522121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:51.889 qpair failed and we were unable to recover it. 00:28:51.889 [2024-07-15 16:10:20.522256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.889 [2024-07-15 16:10:20.522270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:51.889 qpair failed and we were unable to recover it. 00:28:51.889 [2024-07-15 16:10:20.522381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.889 [2024-07-15 16:10:20.522394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:51.889 qpair failed and we were unable to recover it. 00:28:51.889 [2024-07-15 16:10:20.522506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.889 [2024-07-15 16:10:20.522521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:51.889 qpair failed and we were unable to recover it. 00:28:51.889 [2024-07-15 16:10:20.522726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.889 [2024-07-15 16:10:20.522740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:51.889 qpair failed and we were unable to recover it. 
00:28:51.889 [2024-07-15 16:10:20.522846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.889 [2024-07-15 16:10:20.522859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:51.889 qpair failed and we were unable to recover it. 00:28:51.889 [2024-07-15 16:10:20.523092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.889 [2024-07-15 16:10:20.523106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:51.889 qpair failed and we were unable to recover it. 00:28:51.889 [2024-07-15 16:10:20.523292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.889 [2024-07-15 16:10:20.523307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:51.889 qpair failed and we were unable to recover it. 00:28:51.889 [2024-07-15 16:10:20.523419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.889 [2024-07-15 16:10:20.523432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:51.889 qpair failed and we were unable to recover it. 00:28:51.889 [2024-07-15 16:10:20.523540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.889 [2024-07-15 16:10:20.523553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:51.889 qpair failed and we were unable to recover it. 00:28:51.889 [2024-07-15 16:10:20.523656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.889 [2024-07-15 16:10:20.523670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:51.889 qpair failed and we were unable to recover it. 00:28:51.889 [2024-07-15 16:10:20.523776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.889 [2024-07-15 16:10:20.523793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:51.889 qpair failed and we were unable to recover it. 00:28:51.889 [2024-07-15 16:10:20.523908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.889 [2024-07-15 16:10:20.523921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:51.889 qpair failed and we were unable to recover it. 00:28:51.889 [2024-07-15 16:10:20.524034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.889 [2024-07-15 16:10:20.524047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:51.889 qpair failed and we were unable to recover it. 00:28:51.889 [2024-07-15 16:10:20.524144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.889 [2024-07-15 16:10:20.524158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:51.889 qpair failed and we were unable to recover it. 
00:28:51.889 [2024-07-15 16:10:20.524279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.889 [2024-07-15 16:10:20.524294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:51.889 qpair failed and we were unable to recover it. 00:28:51.889 [2024-07-15 16:10:20.524388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.889 [2024-07-15 16:10:20.524402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420 00:28:51.890 qpair failed and we were unable to recover it. 00:28:51.890 [2024-07-15 16:10:20.524509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.890 [2024-07-15 16:10:20.524522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.890 qpair failed and we were unable to recover it. 00:28:51.890 [2024-07-15 16:10:20.524761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.890 [2024-07-15 16:10:20.524771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.890 qpair failed and we were unable to recover it. 00:28:51.890 [2024-07-15 16:10:20.524933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.890 [2024-07-15 16:10:20.524942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.890 qpair failed and we were unable to recover it. 00:28:51.890 [2024-07-15 16:10:20.525191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.890 [2024-07-15 16:10:20.525201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.890 qpair failed and we were unable to recover it. 00:28:51.890 [2024-07-15 16:10:20.525315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.890 [2024-07-15 16:10:20.525325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.890 qpair failed and we were unable to recover it. 00:28:51.890 [2024-07-15 16:10:20.525570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.890 [2024-07-15 16:10:20.525580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.890 qpair failed and we were unable to recover it. 00:28:51.890 [2024-07-15 16:10:20.525681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.890 [2024-07-15 16:10:20.525691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.890 qpair failed and we were unable to recover it. 00:28:51.890 [2024-07-15 16:10:20.525824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.890 [2024-07-15 16:10:20.525833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.890 qpair failed and we were unable to recover it. 
00:28:51.890 [2024-07-15 16:10:20.525932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.890 [2024-07-15 16:10:20.525942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.890 qpair failed and we were unable to recover it. 00:28:51.890 [2024-07-15 16:10:20.526104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.890 [2024-07-15 16:10:20.526114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.890 qpair failed and we were unable to recover it. 00:28:51.890 [2024-07-15 16:10:20.526217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.890 [2024-07-15 16:10:20.526231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.890 qpair failed and we were unable to recover it. 00:28:51.890 [2024-07-15 16:10:20.526397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.890 [2024-07-15 16:10:20.526407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.890 qpair failed and we were unable to recover it. 00:28:51.890 [2024-07-15 16:10:20.526523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.890 [2024-07-15 16:10:20.526533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.890 qpair failed and we were unable to recover it. 00:28:51.890 [2024-07-15 16:10:20.526694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.890 [2024-07-15 16:10:20.526704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.890 qpair failed and we were unable to recover it. 00:28:51.890 [2024-07-15 16:10:20.526809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.890 [2024-07-15 16:10:20.526819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.890 qpair failed and we were unable to recover it. 00:28:51.890 [2024-07-15 16:10:20.526915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.890 [2024-07-15 16:10:20.526924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.890 qpair failed and we were unable to recover it. 00:28:51.890 [2024-07-15 16:10:20.527011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.890 [2024-07-15 16:10:20.527020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.890 qpair failed and we were unable to recover it. 00:28:51.890 [2024-07-15 16:10:20.527114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.890 [2024-07-15 16:10:20.527124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.890 qpair failed and we were unable to recover it. 
00:28:51.890 [2024-07-15 16:10:20.527236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.890 [2024-07-15 16:10:20.527247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.890 qpair failed and we were unable to recover it. 00:28:51.890 [2024-07-15 16:10:20.527358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.890 [2024-07-15 16:10:20.527368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.890 qpair failed and we were unable to recover it. 00:28:51.890 [2024-07-15 16:10:20.527471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.890 [2024-07-15 16:10:20.527480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.890 qpair failed and we were unable to recover it. 00:28:51.890 [2024-07-15 16:10:20.527577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.890 [2024-07-15 16:10:20.527587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.890 qpair failed and we were unable to recover it. 00:28:51.890 [2024-07-15 16:10:20.527691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.890 [2024-07-15 16:10:20.527702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.890 qpair failed and we were unable to recover it. 00:28:51.890 [2024-07-15 16:10:20.527791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.890 [2024-07-15 16:10:20.527802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.890 qpair failed and we were unable to recover it. 00:28:51.890 [2024-07-15 16:10:20.527982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.890 [2024-07-15 16:10:20.527992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.890 qpair failed and we were unable to recover it. 00:28:51.890 [2024-07-15 16:10:20.528080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.890 [2024-07-15 16:10:20.528089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.890 qpair failed and we were unable to recover it. 00:28:51.890 [2024-07-15 16:10:20.528184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.890 [2024-07-15 16:10:20.528194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.890 qpair failed and we were unable to recover it. 00:28:51.890 [2024-07-15 16:10:20.528361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.890 [2024-07-15 16:10:20.528371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.890 qpair failed and we were unable to recover it. 
00:28:51.890 [2024-07-15 16:10:20.528476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.890 [2024-07-15 16:10:20.528486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.890 qpair failed and we were unable to recover it. 00:28:51.890 [2024-07-15 16:10:20.528696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.890 [2024-07-15 16:10:20.528705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.890 qpair failed and we were unable to recover it. 00:28:51.890 [2024-07-15 16:10:20.528808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.890 [2024-07-15 16:10:20.528817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.890 qpair failed and we were unable to recover it. 00:28:51.890 [2024-07-15 16:10:20.528957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.890 [2024-07-15 16:10:20.528966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.890 qpair failed and we were unable to recover it. 00:28:51.891 [2024-07-15 16:10:20.529061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.891 [2024-07-15 16:10:20.529071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.891 qpair failed and we were unable to recover it. 00:28:51.891 [2024-07-15 16:10:20.529158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.891 [2024-07-15 16:10:20.529168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.891 qpair failed and we were unable to recover it. 00:28:51.891 [2024-07-15 16:10:20.529336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.891 [2024-07-15 16:10:20.529346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.891 qpair failed and we were unable to recover it. 00:28:51.891 [2024-07-15 16:10:20.529439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.891 [2024-07-15 16:10:20.529449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.891 qpair failed and we were unable to recover it. 00:28:51.891 [2024-07-15 16:10:20.529625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.891 [2024-07-15 16:10:20.529634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.891 qpair failed and we were unable to recover it. 00:28:51.891 [2024-07-15 16:10:20.529744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.891 [2024-07-15 16:10:20.529754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.891 qpair failed and we were unable to recover it. 
00:28:51.891 [2024-07-15 16:10:20.529859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.891 [2024-07-15 16:10:20.529869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420
00:28:51.891 qpair failed and we were unable to recover it.
00:28:51.891 [... identical three-message retry pattern elided: connect() to 10.0.0.2:4420 keeps failing with errno = 111 for tqpair=0x7ffa54000b90, and every attempt ends with "qpair failed and we were unable to recover it.", from 16:10:20.529973 through 16:10:20.558504 ...]
00:28:51.896 [2024-07-15 16:10:20.558601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.896 [2024-07-15 16:10:20.558610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420
00:28:51.896 qpair failed and we were unable to recover it.
00:28:51.896 [2024-07-15 16:10:20.558715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.896 [2024-07-15 16:10:20.558725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.896 qpair failed and we were unable to recover it. 00:28:51.896 [2024-07-15 16:10:20.558812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.896 [2024-07-15 16:10:20.558823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.896 qpair failed and we were unable to recover it. 00:28:51.896 [2024-07-15 16:10:20.559066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.896 [2024-07-15 16:10:20.559076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.896 qpair failed and we were unable to recover it. 00:28:51.896 [2024-07-15 16:10:20.559155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.896 [2024-07-15 16:10:20.559164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.896 qpair failed and we were unable to recover it. 00:28:51.896 [2024-07-15 16:10:20.559270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.896 [2024-07-15 16:10:20.559281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.896 qpair failed and we were unable to recover it. 00:28:51.896 [2024-07-15 16:10:20.559379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.896 [2024-07-15 16:10:20.559389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.896 qpair failed and we were unable to recover it. 00:28:51.896 [2024-07-15 16:10:20.559496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.896 [2024-07-15 16:10:20.559506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.896 qpair failed and we were unable to recover it. 00:28:51.896 [2024-07-15 16:10:20.559609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.896 [2024-07-15 16:10:20.559619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.896 qpair failed and we were unable to recover it. 00:28:51.896 [2024-07-15 16:10:20.559684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.896 [2024-07-15 16:10:20.559693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.896 qpair failed and we were unable to recover it. 00:28:51.897 [2024-07-15 16:10:20.559847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.897 [2024-07-15 16:10:20.559857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.897 qpair failed and we were unable to recover it. 
00:28:51.897 [2024-07-15 16:10:20.560038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.897 [2024-07-15 16:10:20.560048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.897 qpair failed and we were unable to recover it. 00:28:51.897 [2024-07-15 16:10:20.560207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.897 [2024-07-15 16:10:20.560216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.897 qpair failed and we were unable to recover it. 00:28:51.897 [2024-07-15 16:10:20.560321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.897 [2024-07-15 16:10:20.560331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.897 qpair failed and we were unable to recover it. 00:28:51.897 [2024-07-15 16:10:20.560425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.897 [2024-07-15 16:10:20.560436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.897 qpair failed and we were unable to recover it. 00:28:51.897 [2024-07-15 16:10:20.560530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.897 [2024-07-15 16:10:20.560540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.897 qpair failed and we were unable to recover it. 00:28:51.897 [2024-07-15 16:10:20.560711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.897 [2024-07-15 16:10:20.560721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.897 qpair failed and we were unable to recover it. 00:28:51.897 [2024-07-15 16:10:20.560889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.897 [2024-07-15 16:10:20.560899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.897 qpair failed and we were unable to recover it. 00:28:51.897 [2024-07-15 16:10:20.561072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.897 [2024-07-15 16:10:20.561082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.897 qpair failed and we were unable to recover it. 00:28:51.897 [2024-07-15 16:10:20.561258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.897 [2024-07-15 16:10:20.561268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.897 qpair failed and we were unable to recover it. 00:28:51.897 [2024-07-15 16:10:20.561423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.897 [2024-07-15 16:10:20.561432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.897 qpair failed and we were unable to recover it. 
00:28:51.897 [2024-07-15 16:10:20.561616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.897 [2024-07-15 16:10:20.561625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.897 qpair failed and we were unable to recover it. 00:28:51.897 [2024-07-15 16:10:20.561798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.897 [2024-07-15 16:10:20.561807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.897 qpair failed and we were unable to recover it. 00:28:51.897 [2024-07-15 16:10:20.561926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.897 [2024-07-15 16:10:20.561935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.897 qpair failed and we were unable to recover it. 00:28:51.897 [2024-07-15 16:10:20.562037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.897 [2024-07-15 16:10:20.562046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.897 qpair failed and we were unable to recover it. 00:28:51.897 [2024-07-15 16:10:20.562197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.897 [2024-07-15 16:10:20.562206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.897 qpair failed and we were unable to recover it. 00:28:51.897 [2024-07-15 16:10:20.562300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.897 [2024-07-15 16:10:20.562310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.897 qpair failed and we were unable to recover it. 00:28:51.897 [2024-07-15 16:10:20.562549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.897 [2024-07-15 16:10:20.562560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.897 qpair failed and we were unable to recover it. 00:28:51.897 [2024-07-15 16:10:20.562663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.897 [2024-07-15 16:10:20.562672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.897 qpair failed and we were unable to recover it. 00:28:51.897 [2024-07-15 16:10:20.562769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.897 [2024-07-15 16:10:20.562778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.897 qpair failed and we were unable to recover it. 00:28:51.897 [2024-07-15 16:10:20.562931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.897 [2024-07-15 16:10:20.562941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.897 qpair failed and we were unable to recover it. 
00:28:51.897 [2024-07-15 16:10:20.563102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.897 [2024-07-15 16:10:20.563112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.897 qpair failed and we were unable to recover it. 00:28:51.897 [2024-07-15 16:10:20.563272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.897 [2024-07-15 16:10:20.563282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.897 qpair failed and we were unable to recover it. 00:28:51.897 [2024-07-15 16:10:20.563370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.897 [2024-07-15 16:10:20.563379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.897 qpair failed and we were unable to recover it. 00:28:51.897 [2024-07-15 16:10:20.563494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.897 [2024-07-15 16:10:20.563504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.897 qpair failed and we were unable to recover it. 00:28:51.897 [2024-07-15 16:10:20.563596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.897 [2024-07-15 16:10:20.563606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.897 qpair failed and we were unable to recover it. 00:28:51.897 [2024-07-15 16:10:20.563702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.897 [2024-07-15 16:10:20.563711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.897 qpair failed and we were unable to recover it. 00:28:51.897 [2024-07-15 16:10:20.563801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.897 [2024-07-15 16:10:20.563811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.897 qpair failed and we were unable to recover it. 00:28:51.897 [2024-07-15 16:10:20.563890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.897 [2024-07-15 16:10:20.563899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.897 qpair failed and we were unable to recover it. 00:28:51.897 [2024-07-15 16:10:20.564012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.897 [2024-07-15 16:10:20.564022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.897 qpair failed and we were unable to recover it. 00:28:51.897 [2024-07-15 16:10:20.564216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.897 [2024-07-15 16:10:20.564238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.897 qpair failed and we were unable to recover it. 
00:28:51.897 [2024-07-15 16:10:20.564332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.897 [2024-07-15 16:10:20.564342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.897 qpair failed and we were unable to recover it. 00:28:51.897 [2024-07-15 16:10:20.564501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.897 [2024-07-15 16:10:20.564510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.897 qpair failed and we were unable to recover it. 00:28:51.897 [2024-07-15 16:10:20.564598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.897 [2024-07-15 16:10:20.564608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.897 qpair failed and we were unable to recover it. 00:28:51.897 [2024-07-15 16:10:20.564696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.897 [2024-07-15 16:10:20.564705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.897 qpair failed and we were unable to recover it. 00:28:51.897 [2024-07-15 16:10:20.564865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.897 [2024-07-15 16:10:20.564874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.897 qpair failed and we were unable to recover it. 00:28:51.897 [2024-07-15 16:10:20.565055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.897 [2024-07-15 16:10:20.565064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.897 qpair failed and we were unable to recover it. 00:28:51.897 [2024-07-15 16:10:20.565151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.897 [2024-07-15 16:10:20.565160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.898 qpair failed and we were unable to recover it. 00:28:51.898 [2024-07-15 16:10:20.565269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.898 [2024-07-15 16:10:20.565279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.898 qpair failed and we were unable to recover it. 00:28:51.898 [2024-07-15 16:10:20.565468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.898 [2024-07-15 16:10:20.565478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.898 qpair failed and we were unable to recover it. 00:28:51.898 [2024-07-15 16:10:20.565655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.898 [2024-07-15 16:10:20.565665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.898 qpair failed and we were unable to recover it. 
00:28:51.898 [2024-07-15 16:10:20.565755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.898 [2024-07-15 16:10:20.565765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.898 qpair failed and we were unable to recover it. 00:28:51.898 [2024-07-15 16:10:20.565940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.898 [2024-07-15 16:10:20.565949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.898 qpair failed and we were unable to recover it. 00:28:51.898 [2024-07-15 16:10:20.566042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.898 [2024-07-15 16:10:20.566051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.898 qpair failed and we were unable to recover it. 00:28:51.898 [2024-07-15 16:10:20.566136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.898 [2024-07-15 16:10:20.566148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.898 qpair failed and we were unable to recover it. 00:28:51.898 [2024-07-15 16:10:20.566261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.898 [2024-07-15 16:10:20.566271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.898 qpair failed and we were unable to recover it. 00:28:51.898 [2024-07-15 16:10:20.566432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.898 [2024-07-15 16:10:20.566441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.898 qpair failed and we were unable to recover it. 00:28:51.898 [2024-07-15 16:10:20.566595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.898 [2024-07-15 16:10:20.566604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.898 qpair failed and we were unable to recover it. 00:28:51.898 [2024-07-15 16:10:20.566694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.898 [2024-07-15 16:10:20.566703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.898 qpair failed and we were unable to recover it. 00:28:51.898 [2024-07-15 16:10:20.566805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.898 [2024-07-15 16:10:20.566814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.898 qpair failed and we were unable to recover it. 00:28:51.898 [2024-07-15 16:10:20.566971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.898 [2024-07-15 16:10:20.566980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.898 qpair failed and we were unable to recover it. 
00:28:51.898 [2024-07-15 16:10:20.567150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.898 [2024-07-15 16:10:20.567160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.898 qpair failed and we were unable to recover it. 00:28:51.898 [2024-07-15 16:10:20.567270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.898 [2024-07-15 16:10:20.567280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.898 qpair failed and we were unable to recover it. 00:28:51.898 [2024-07-15 16:10:20.567440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.898 [2024-07-15 16:10:20.567450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.898 qpair failed and we were unable to recover it. 00:28:51.898 [2024-07-15 16:10:20.567569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.898 [2024-07-15 16:10:20.567579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.898 qpair failed and we were unable to recover it. 00:28:51.898 [2024-07-15 16:10:20.567682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.898 [2024-07-15 16:10:20.567691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.898 qpair failed and we were unable to recover it. 00:28:51.898 [2024-07-15 16:10:20.567784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.898 [2024-07-15 16:10:20.567794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.898 qpair failed and we were unable to recover it. 00:28:51.898 [2024-07-15 16:10:20.567886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.898 [2024-07-15 16:10:20.567895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.898 qpair failed and we were unable to recover it. 00:28:51.898 [2024-07-15 16:10:20.567963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.898 [2024-07-15 16:10:20.567973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.898 qpair failed and we were unable to recover it. 00:28:51.898 [2024-07-15 16:10:20.568127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.898 [2024-07-15 16:10:20.568137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.898 qpair failed and we were unable to recover it. 00:28:51.898 [2024-07-15 16:10:20.568236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.898 [2024-07-15 16:10:20.568246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.898 qpair failed and we were unable to recover it. 
00:28:51.898 [2024-07-15 16:10:20.568347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.898 [2024-07-15 16:10:20.568356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.898 qpair failed and we were unable to recover it. 00:28:51.898 [2024-07-15 16:10:20.568457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.898 [2024-07-15 16:10:20.568467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.898 qpair failed and we were unable to recover it. 00:28:51.898 [2024-07-15 16:10:20.568639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.898 [2024-07-15 16:10:20.568648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.898 qpair failed and we were unable to recover it. 00:28:51.898 [2024-07-15 16:10:20.568728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.898 [2024-07-15 16:10:20.568737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.898 qpair failed and we were unable to recover it. 00:28:51.898 [2024-07-15 16:10:20.568834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.898 [2024-07-15 16:10:20.568844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.898 qpair failed and we were unable to recover it. 00:28:51.898 [2024-07-15 16:10:20.569014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.898 [2024-07-15 16:10:20.569023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.898 qpair failed and we were unable to recover it. 00:28:51.898 [2024-07-15 16:10:20.569118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.898 [2024-07-15 16:10:20.569127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.898 qpair failed and we were unable to recover it. 00:28:51.898 [2024-07-15 16:10:20.569221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.898 [2024-07-15 16:10:20.569234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.898 qpair failed and we were unable to recover it. 00:28:51.898 [2024-07-15 16:10:20.569338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.898 [2024-07-15 16:10:20.569348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.898 qpair failed and we were unable to recover it. 00:28:51.898 [2024-07-15 16:10:20.569438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.898 [2024-07-15 16:10:20.569448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.898 qpair failed and we were unable to recover it. 
00:28:51.898 [2024-07-15 16:10:20.569609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.898 [2024-07-15 16:10:20.569619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.898 qpair failed and we were unable to recover it. 00:28:51.898 [2024-07-15 16:10:20.569778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.898 [2024-07-15 16:10:20.569788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.898 qpair failed and we were unable to recover it. 00:28:51.898 [2024-07-15 16:10:20.569886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.898 [2024-07-15 16:10:20.569896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.898 qpair failed and we were unable to recover it. 00:28:51.898 [2024-07-15 16:10:20.569992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.898 [2024-07-15 16:10:20.570001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.898 qpair failed and we were unable to recover it. 00:28:51.898 [2024-07-15 16:10:20.570090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.898 [2024-07-15 16:10:20.570099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.898 qpair failed and we were unable to recover it. 00:28:51.898 [2024-07-15 16:10:20.570255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.899 [2024-07-15 16:10:20.570266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.899 qpair failed and we were unable to recover it. 00:28:51.899 [2024-07-15 16:10:20.570369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.899 [2024-07-15 16:10:20.570379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.899 qpair failed and we were unable to recover it. 00:28:51.899 [2024-07-15 16:10:20.570545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.899 [2024-07-15 16:10:20.570555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.899 qpair failed and we were unable to recover it. 00:28:51.899 [2024-07-15 16:10:20.570662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.899 [2024-07-15 16:10:20.570671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.899 qpair failed and we were unable to recover it. 00:28:51.899 [2024-07-15 16:10:20.570855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.899 [2024-07-15 16:10:20.570865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.899 qpair failed and we were unable to recover it. 
00:28:51.899 [2024-07-15 16:10:20.570959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.899 [2024-07-15 16:10:20.570969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.899 qpair failed and we were unable to recover it. 00:28:51.899 [2024-07-15 16:10:20.571121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.899 [2024-07-15 16:10:20.571130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.899 qpair failed and we were unable to recover it. 00:28:51.899 [2024-07-15 16:10:20.571222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.899 [2024-07-15 16:10:20.571235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.899 qpair failed and we were unable to recover it. 00:28:51.899 [2024-07-15 16:10:20.571320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.899 [2024-07-15 16:10:20.571331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.899 qpair failed and we were unable to recover it. 00:28:51.899 [2024-07-15 16:10:20.571485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.899 [2024-07-15 16:10:20.571495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.899 qpair failed and we were unable to recover it. 00:28:51.899 [2024-07-15 16:10:20.571577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.899 [2024-07-15 16:10:20.571587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.899 qpair failed and we were unable to recover it. 00:28:51.899 [2024-07-15 16:10:20.571674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.899 [2024-07-15 16:10:20.571684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.899 qpair failed and we were unable to recover it. 00:28:51.899 [2024-07-15 16:10:20.571784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.899 [2024-07-15 16:10:20.571794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.899 qpair failed and we were unable to recover it. 00:28:51.899 [2024-07-15 16:10:20.571913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.899 [2024-07-15 16:10:20.571922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.899 qpair failed and we were unable to recover it. 00:28:51.899 [2024-07-15 16:10:20.572027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.899 [2024-07-15 16:10:20.572036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.899 qpair failed and we were unable to recover it. 
00:28:51.899 [2024-07-15 16:10:20.572163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.899 [2024-07-15 16:10:20.572173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.899 qpair failed and we were unable to recover it. 00:28:51.899 [2024-07-15 16:10:20.572291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.899 [2024-07-15 16:10:20.572301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.899 qpair failed and we were unable to recover it. 00:28:51.899 [2024-07-15 16:10:20.572464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.899 [2024-07-15 16:10:20.572473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.899 qpair failed and we were unable to recover it. 00:28:51.899 [2024-07-15 16:10:20.572574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.899 [2024-07-15 16:10:20.572583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.899 qpair failed and we were unable to recover it. 00:28:51.899 [2024-07-15 16:10:20.572696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.899 [2024-07-15 16:10:20.572705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.899 qpair failed and we were unable to recover it. 00:28:51.899 [2024-07-15 16:10:20.572802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.899 [2024-07-15 16:10:20.572811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.899 qpair failed and we were unable to recover it. 00:28:51.899 [2024-07-15 16:10:20.572900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.899 [2024-07-15 16:10:20.572909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.899 qpair failed and we were unable to recover it. 00:28:51.899 [2024-07-15 16:10:20.573005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.899 [2024-07-15 16:10:20.573015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.899 qpair failed and we were unable to recover it. 00:28:51.899 [2024-07-15 16:10:20.573099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.899 [2024-07-15 16:10:20.573108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.899 qpair failed and we were unable to recover it. 00:28:51.899 [2024-07-15 16:10:20.573199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.899 [2024-07-15 16:10:20.573209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.899 qpair failed and we were unable to recover it. 
00:28:51.899 [2024-07-15 16:10:20.573294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.899 [2024-07-15 16:10:20.573304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.899 qpair failed and we were unable to recover it. 00:28:51.899 [2024-07-15 16:10:20.573491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.899 [2024-07-15 16:10:20.573500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.899 qpair failed and we were unable to recover it. 00:28:51.899 [2024-07-15 16:10:20.573615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.899 [2024-07-15 16:10:20.573625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.899 qpair failed and we were unable to recover it. 00:28:51.899 [2024-07-15 16:10:20.573783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.899 [2024-07-15 16:10:20.573793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.899 qpair failed and we were unable to recover it. 00:28:51.899 [2024-07-15 16:10:20.573959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.899 [2024-07-15 16:10:20.573969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.899 qpair failed and we were unable to recover it. 00:28:51.899 [2024-07-15 16:10:20.574079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.899 [2024-07-15 16:10:20.574089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.899 qpair failed and we were unable to recover it. 00:28:51.899 [2024-07-15 16:10:20.574248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.899 [2024-07-15 16:10:20.574258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.900 qpair failed and we were unable to recover it. 00:28:51.900 [2024-07-15 16:10:20.574353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.900 [2024-07-15 16:10:20.574362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.900 qpair failed and we were unable to recover it. 00:28:51.900 [2024-07-15 16:10:20.574437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.900 [2024-07-15 16:10:20.574447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.900 qpair failed and we were unable to recover it. 00:28:51.900 [2024-07-15 16:10:20.574547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.900 [2024-07-15 16:10:20.574557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.900 qpair failed and we were unable to recover it. 
00:28:51.900 [2024-07-15 16:10:20.574652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.900 [2024-07-15 16:10:20.574661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.900 qpair failed and we were unable to recover it. 00:28:51.900 [2024-07-15 16:10:20.574754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.900 [2024-07-15 16:10:20.574763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.900 qpair failed and we were unable to recover it. 00:28:51.900 [2024-07-15 16:10:20.574859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.900 [2024-07-15 16:10:20.574869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.900 qpair failed and we were unable to recover it. 00:28:51.900 [2024-07-15 16:10:20.574995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.900 [2024-07-15 16:10:20.575004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.900 qpair failed and we were unable to recover it. 00:28:51.900 [2024-07-15 16:10:20.575093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.900 [2024-07-15 16:10:20.575103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.900 qpair failed and we were unable to recover it. 00:28:51.900 [2024-07-15 16:10:20.575196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.900 [2024-07-15 16:10:20.575206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.900 qpair failed and we were unable to recover it. 00:28:51.900 [2024-07-15 16:10:20.575321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.900 [2024-07-15 16:10:20.575330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.900 qpair failed and we were unable to recover it. 00:28:51.900 [2024-07-15 16:10:20.575426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.900 [2024-07-15 16:10:20.575435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.900 qpair failed and we were unable to recover it. 00:28:51.900 [2024-07-15 16:10:20.575600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.900 [2024-07-15 16:10:20.575610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.900 qpair failed and we were unable to recover it. 00:28:51.900 [2024-07-15 16:10:20.575721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.900 [2024-07-15 16:10:20.575730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.900 qpair failed and we were unable to recover it. 
00:28:51.900 [2024-07-15 16:10:20.575823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.900 [2024-07-15 16:10:20.575834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420
00:28:51.900 qpair failed and we were unable to recover it.
00:28:51.905 [... the same three-line sequence — connect() failed (errno = 111), sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420, "qpair failed and we were unable to recover it." — repeats unchanged except for timestamps, from 2024-07-15 16:10:20.575823 through 16:10:20.606261 ...]
00:28:51.905 [2024-07-15 16:10:20.606338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.905 [2024-07-15 16:10:20.606347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.905 qpair failed and we were unable to recover it. 00:28:51.905 [2024-07-15 16:10:20.606449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.905 [2024-07-15 16:10:20.606458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.905 qpair failed and we were unable to recover it. 00:28:51.905 [2024-07-15 16:10:20.606640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.905 [2024-07-15 16:10:20.606649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.905 qpair failed and we were unable to recover it. 00:28:51.905 [2024-07-15 16:10:20.606756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.905 [2024-07-15 16:10:20.606766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.905 qpair failed and we were unable to recover it. 00:28:51.905 [2024-07-15 16:10:20.606857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.905 [2024-07-15 16:10:20.606867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.905 qpair failed and we were unable to recover it. 00:28:51.906 [2024-07-15 16:10:20.606957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.906 [2024-07-15 16:10:20.606967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.906 qpair failed and we were unable to recover it. 00:28:51.906 [2024-07-15 16:10:20.607055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.906 [2024-07-15 16:10:20.607064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.906 qpair failed and we were unable to recover it. 00:28:51.906 [2024-07-15 16:10:20.607176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.906 [2024-07-15 16:10:20.607186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.906 qpair failed and we were unable to recover it. 00:28:51.906 [2024-07-15 16:10:20.607369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.906 [2024-07-15 16:10:20.607379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.906 qpair failed and we were unable to recover it. 00:28:51.906 [2024-07-15 16:10:20.607533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.906 [2024-07-15 16:10:20.607543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.906 qpair failed and we were unable to recover it. 
00:28:51.906 [2024-07-15 16:10:20.607633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.906 [2024-07-15 16:10:20.607643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.906 qpair failed and we were unable to recover it. 00:28:51.906 [2024-07-15 16:10:20.607744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.906 [2024-07-15 16:10:20.607754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.906 qpair failed and we were unable to recover it. 00:28:51.906 [2024-07-15 16:10:20.607845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.906 [2024-07-15 16:10:20.607855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.906 qpair failed and we were unable to recover it. 00:28:51.906 [2024-07-15 16:10:20.608015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.906 [2024-07-15 16:10:20.608024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.906 qpair failed and we were unable to recover it. 00:28:51.906 [2024-07-15 16:10:20.608248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.906 [2024-07-15 16:10:20.608258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.906 qpair failed and we were unable to recover it. 00:28:51.906 [2024-07-15 16:10:20.608363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.906 [2024-07-15 16:10:20.608373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.906 qpair failed and we were unable to recover it. 00:28:51.906 [2024-07-15 16:10:20.608535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.906 [2024-07-15 16:10:20.608544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.906 qpair failed and we were unable to recover it. 00:28:51.906 [2024-07-15 16:10:20.608710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.906 [2024-07-15 16:10:20.608720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.906 qpair failed and we were unable to recover it. 00:28:51.906 [2024-07-15 16:10:20.608899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.906 [2024-07-15 16:10:20.608909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.906 qpair failed and we were unable to recover it. 00:28:51.906 [2024-07-15 16:10:20.609027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.906 [2024-07-15 16:10:20.609037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.906 qpair failed and we were unable to recover it. 
00:28:51.906 [2024-07-15 16:10:20.609194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.906 [2024-07-15 16:10:20.609206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.906 qpair failed and we were unable to recover it. 00:28:51.906 [2024-07-15 16:10:20.609403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.906 [2024-07-15 16:10:20.609413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.906 qpair failed and we were unable to recover it. 00:28:51.906 [2024-07-15 16:10:20.609516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.906 [2024-07-15 16:10:20.609527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.906 qpair failed and we were unable to recover it. 00:28:51.906 [2024-07-15 16:10:20.609684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.906 [2024-07-15 16:10:20.609694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.906 qpair failed and we were unable to recover it. 00:28:51.906 [2024-07-15 16:10:20.609871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.906 [2024-07-15 16:10:20.609881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.906 qpair failed and we were unable to recover it. 00:28:51.906 [2024-07-15 16:10:20.609988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.906 [2024-07-15 16:10:20.609998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.906 qpair failed and we were unable to recover it. 00:28:51.906 [2024-07-15 16:10:20.610090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.906 [2024-07-15 16:10:20.610100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.906 qpair failed and we were unable to recover it. 00:28:51.906 [2024-07-15 16:10:20.610330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.906 [2024-07-15 16:10:20.610339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.906 qpair failed and we were unable to recover it. 00:28:51.906 [2024-07-15 16:10:20.610437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.906 [2024-07-15 16:10:20.610447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.906 qpair failed and we were unable to recover it. 00:28:51.906 [2024-07-15 16:10:20.610543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.906 [2024-07-15 16:10:20.610553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.906 qpair failed and we were unable to recover it. 
00:28:51.906 [2024-07-15 16:10:20.610644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.906 [2024-07-15 16:10:20.610653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.906 qpair failed and we were unable to recover it. 00:28:51.906 [2024-07-15 16:10:20.610750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.906 [2024-07-15 16:10:20.610759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.906 qpair failed and we were unable to recover it. 00:28:51.906 [2024-07-15 16:10:20.610925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.906 [2024-07-15 16:10:20.610935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.906 qpair failed and we were unable to recover it. 00:28:51.906 [2024-07-15 16:10:20.611027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.906 [2024-07-15 16:10:20.611036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.906 qpair failed and we were unable to recover it. 00:28:51.906 [2024-07-15 16:10:20.611130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.906 [2024-07-15 16:10:20.611140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.906 qpair failed and we were unable to recover it. 00:28:51.906 [2024-07-15 16:10:20.611232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.906 [2024-07-15 16:10:20.611242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.906 qpair failed and we were unable to recover it. 00:28:51.906 [2024-07-15 16:10:20.611393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.906 [2024-07-15 16:10:20.611403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.906 qpair failed and we were unable to recover it. 00:28:51.906 [2024-07-15 16:10:20.611480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.906 [2024-07-15 16:10:20.611489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.906 qpair failed and we were unable to recover it. 00:28:51.906 [2024-07-15 16:10:20.611577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.906 [2024-07-15 16:10:20.611588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.906 qpair failed and we were unable to recover it. 00:28:51.906 [2024-07-15 16:10:20.611677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.906 [2024-07-15 16:10:20.611686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.906 qpair failed and we were unable to recover it. 
00:28:51.906 [2024-07-15 16:10:20.611843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.906 [2024-07-15 16:10:20.611854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.906 qpair failed and we were unable to recover it. 00:28:51.906 [2024-07-15 16:10:20.611962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.906 [2024-07-15 16:10:20.611971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.906 qpair failed and we were unable to recover it. 00:28:51.906 [2024-07-15 16:10:20.612140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.906 [2024-07-15 16:10:20.612151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.906 qpair failed and we were unable to recover it. 00:28:51.906 [2024-07-15 16:10:20.612254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.906 [2024-07-15 16:10:20.612263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.906 qpair failed and we were unable to recover it. 00:28:51.907 [2024-07-15 16:10:20.612370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.907 [2024-07-15 16:10:20.612380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.907 qpair failed and we were unable to recover it. 00:28:51.907 [2024-07-15 16:10:20.612487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.907 [2024-07-15 16:10:20.612497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.907 qpair failed and we were unable to recover it. 00:28:51.907 [2024-07-15 16:10:20.612592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.907 [2024-07-15 16:10:20.612602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.907 qpair failed and we were unable to recover it. 00:28:51.907 [2024-07-15 16:10:20.612713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.907 [2024-07-15 16:10:20.612722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.907 qpair failed and we were unable to recover it. 00:28:51.907 [2024-07-15 16:10:20.612820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.907 [2024-07-15 16:10:20.612830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.907 qpair failed and we were unable to recover it. 00:28:51.907 [2024-07-15 16:10:20.612914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.907 [2024-07-15 16:10:20.612923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.907 qpair failed and we were unable to recover it. 
00:28:51.907 [2024-07-15 16:10:20.613034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.907 [2024-07-15 16:10:20.613046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.907 qpair failed and we were unable to recover it. 00:28:51.907 [2024-07-15 16:10:20.613137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.907 [2024-07-15 16:10:20.613146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.907 qpair failed and we were unable to recover it. 00:28:51.907 [2024-07-15 16:10:20.613299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.907 [2024-07-15 16:10:20.613310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.907 qpair failed and we were unable to recover it. 00:28:51.907 [2024-07-15 16:10:20.613466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.907 [2024-07-15 16:10:20.613475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.907 qpair failed and we were unable to recover it. 00:28:51.907 [2024-07-15 16:10:20.613686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.907 [2024-07-15 16:10:20.613696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.907 qpair failed and we were unable to recover it. 00:28:51.907 [2024-07-15 16:10:20.613799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.907 [2024-07-15 16:10:20.613809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.907 qpair failed and we were unable to recover it. 00:28:51.907 [2024-07-15 16:10:20.613897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.907 [2024-07-15 16:10:20.613906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.907 qpair failed and we were unable to recover it. 00:28:51.907 [2024-07-15 16:10:20.614076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.907 [2024-07-15 16:10:20.614087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.907 qpair failed and we were unable to recover it. 00:28:51.907 [2024-07-15 16:10:20.614272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.907 [2024-07-15 16:10:20.614283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.907 qpair failed and we were unable to recover it. 00:28:51.907 [2024-07-15 16:10:20.614437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.907 [2024-07-15 16:10:20.614446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.907 qpair failed and we were unable to recover it. 
00:28:51.907 [2024-07-15 16:10:20.614565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.907 [2024-07-15 16:10:20.614577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.907 qpair failed and we were unable to recover it. 00:28:51.907 [2024-07-15 16:10:20.614689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.907 [2024-07-15 16:10:20.614698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.907 qpair failed and we were unable to recover it. 00:28:51.907 [2024-07-15 16:10:20.614823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.907 [2024-07-15 16:10:20.614833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.907 qpair failed and we were unable to recover it. 00:28:51.907 [2024-07-15 16:10:20.614998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.907 [2024-07-15 16:10:20.615007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.907 qpair failed and we were unable to recover it. 00:28:51.907 [2024-07-15 16:10:20.615175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.907 [2024-07-15 16:10:20.615185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.907 qpair failed and we were unable to recover it. 00:28:51.907 [2024-07-15 16:10:20.615352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.907 [2024-07-15 16:10:20.615362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.907 qpair failed and we were unable to recover it. 00:28:51.907 [2024-07-15 16:10:20.615485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.907 [2024-07-15 16:10:20.615495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.907 qpair failed and we were unable to recover it. 00:28:51.907 [2024-07-15 16:10:20.615670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.907 [2024-07-15 16:10:20.615680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.907 qpair failed and we were unable to recover it. 00:28:51.907 [2024-07-15 16:10:20.615946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.907 [2024-07-15 16:10:20.615956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.907 qpair failed and we were unable to recover it. 00:28:51.907 [2024-07-15 16:10:20.616060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.907 [2024-07-15 16:10:20.616070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.907 qpair failed and we were unable to recover it. 
00:28:51.907 [2024-07-15 16:10:20.616243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.907 [2024-07-15 16:10:20.616253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.907 qpair failed and we were unable to recover it. 00:28:51.907 [2024-07-15 16:10:20.616447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.907 [2024-07-15 16:10:20.616456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.907 qpair failed and we were unable to recover it. 00:28:51.907 [2024-07-15 16:10:20.616575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.907 [2024-07-15 16:10:20.616585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.907 qpair failed and we were unable to recover it. 00:28:51.907 [2024-07-15 16:10:20.616695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.907 [2024-07-15 16:10:20.616705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.907 qpair failed and we were unable to recover it. 00:28:51.907 [2024-07-15 16:10:20.616949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.907 [2024-07-15 16:10:20.616959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.907 qpair failed and we were unable to recover it. 00:28:51.907 [2024-07-15 16:10:20.617150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.907 [2024-07-15 16:10:20.617161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.907 qpair failed and we were unable to recover it. 00:28:51.907 [2024-07-15 16:10:20.617306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.907 [2024-07-15 16:10:20.617316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.907 qpair failed and we were unable to recover it. 00:28:51.907 [2024-07-15 16:10:20.617550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.907 [2024-07-15 16:10:20.617560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.907 qpair failed and we were unable to recover it. 00:28:51.907 [2024-07-15 16:10:20.617661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.907 [2024-07-15 16:10:20.617672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.907 qpair failed and we were unable to recover it. 00:28:51.907 [2024-07-15 16:10:20.617828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.907 [2024-07-15 16:10:20.617838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.907 qpair failed and we were unable to recover it. 
00:28:51.907 [2024-07-15 16:10:20.617942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.907 [2024-07-15 16:10:20.617952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.907 qpair failed and we were unable to recover it. 00:28:51.907 [2024-07-15 16:10:20.618203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.907 [2024-07-15 16:10:20.618213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.907 qpair failed and we were unable to recover it. 00:28:51.907 [2024-07-15 16:10:20.618404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.907 [2024-07-15 16:10:20.618414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.907 qpair failed and we were unable to recover it. 00:28:51.907 [2024-07-15 16:10:20.618516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.908 [2024-07-15 16:10:20.618526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.908 qpair failed and we were unable to recover it. 00:28:51.908 [2024-07-15 16:10:20.618770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.908 [2024-07-15 16:10:20.618779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.908 qpair failed and we were unable to recover it. 00:28:51.908 [2024-07-15 16:10:20.618893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.908 [2024-07-15 16:10:20.618903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.908 qpair failed and we were unable to recover it. 00:28:51.908 [2024-07-15 16:10:20.619162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.908 [2024-07-15 16:10:20.619173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.908 qpair failed and we were unable to recover it. 00:28:51.908 [2024-07-15 16:10:20.619296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.908 [2024-07-15 16:10:20.619307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.908 qpair failed and we were unable to recover it. 00:28:51.908 [2024-07-15 16:10:20.619492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.908 [2024-07-15 16:10:20.619502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.908 qpair failed and we were unable to recover it. 00:28:51.908 [2024-07-15 16:10:20.619613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.908 [2024-07-15 16:10:20.619623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.908 qpair failed and we were unable to recover it. 
00:28:51.908 [2024-07-15 16:10:20.619815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.908 [2024-07-15 16:10:20.619825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.908 qpair failed and we were unable to recover it. 00:28:51.908 [2024-07-15 16:10:20.619918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.908 [2024-07-15 16:10:20.619928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.908 qpair failed and we were unable to recover it. 00:28:51.908 [2024-07-15 16:10:20.620096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.908 [2024-07-15 16:10:20.620106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.908 qpair failed and we were unable to recover it. 00:28:51.908 [2024-07-15 16:10:20.620271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.908 [2024-07-15 16:10:20.620281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.908 qpair failed and we were unable to recover it. 00:28:51.908 [2024-07-15 16:10:20.620371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.908 [2024-07-15 16:10:20.620380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.908 qpair failed and we were unable to recover it. 00:28:51.908 [2024-07-15 16:10:20.620533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.908 [2024-07-15 16:10:20.620544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.908 qpair failed and we were unable to recover it. 00:28:51.908 [2024-07-15 16:10:20.620653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.908 [2024-07-15 16:10:20.620662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.908 qpair failed and we were unable to recover it. 00:28:51.908 [2024-07-15 16:10:20.620855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.908 [2024-07-15 16:10:20.620865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.908 qpair failed and we were unable to recover it. 00:28:51.908 [2024-07-15 16:10:20.621034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.908 [2024-07-15 16:10:20.621044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.908 qpair failed and we were unable to recover it. 00:28:51.908 [2024-07-15 16:10:20.621141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.908 [2024-07-15 16:10:20.621151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.908 qpair failed and we were unable to recover it. 
00:28:51.908 [2024-07-15 16:10:20.621371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.908 [2024-07-15 16:10:20.621383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.908 qpair failed and we were unable to recover it. 00:28:51.908 [2024-07-15 16:10:20.621537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.908 [2024-07-15 16:10:20.621547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.908 qpair failed and we were unable to recover it. 00:28:51.908 [2024-07-15 16:10:20.621712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.908 [2024-07-15 16:10:20.621722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.908 qpair failed and we were unable to recover it. 00:28:51.908 [2024-07-15 16:10:20.621834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.908 [2024-07-15 16:10:20.621844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.908 qpair failed and we were unable to recover it. 00:28:51.908 [2024-07-15 16:10:20.622004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.908 [2024-07-15 16:10:20.622014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.908 qpair failed and we were unable to recover it. 00:28:51.908 [2024-07-15 16:10:20.622147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.908 [2024-07-15 16:10:20.622156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.908 qpair failed and we were unable to recover it. 00:28:51.908 [2024-07-15 16:10:20.622281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.908 [2024-07-15 16:10:20.622292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.908 qpair failed and we were unable to recover it. 00:28:51.908 [2024-07-15 16:10:20.622364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.908 [2024-07-15 16:10:20.622374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.908 qpair failed and we were unable to recover it. 00:28:51.908 [2024-07-15 16:10:20.622501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.908 [2024-07-15 16:10:20.622511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.908 qpair failed and we were unable to recover it. 00:28:51.908 [2024-07-15 16:10:20.622614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.908 [2024-07-15 16:10:20.622624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.908 qpair failed and we were unable to recover it. 
00:28:51.908 [2024-07-15 16:10:20.622728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.908 [2024-07-15 16:10:20.622738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.908 qpair failed and we were unable to recover it. 00:28:51.908 [2024-07-15 16:10:20.622816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.908 [2024-07-15 16:10:20.622825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.908 qpair failed and we were unable to recover it. 00:28:51.908 [2024-07-15 16:10:20.622925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.908 [2024-07-15 16:10:20.622935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.908 qpair failed and we were unable to recover it. 00:28:51.908 [2024-07-15 16:10:20.623059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.908 [2024-07-15 16:10:20.623069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.908 qpair failed and we were unable to recover it. 00:28:51.908 [2024-07-15 16:10:20.623185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.908 [2024-07-15 16:10:20.623195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.908 qpair failed and we were unable to recover it. 00:28:51.908 [2024-07-15 16:10:20.623298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.908 [2024-07-15 16:10:20.623308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.908 qpair failed and we were unable to recover it. 00:28:51.909 [2024-07-15 16:10:20.623404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.909 [2024-07-15 16:10:20.623413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.909 qpair failed and we were unable to recover it. 00:28:51.909 [2024-07-15 16:10:20.623571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.909 [2024-07-15 16:10:20.623580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.909 qpair failed and we were unable to recover it. 00:28:51.909 [2024-07-15 16:10:20.623686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.909 [2024-07-15 16:10:20.623696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.909 qpair failed and we were unable to recover it. 00:28:51.909 [2024-07-15 16:10:20.623787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.909 [2024-07-15 16:10:20.623796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.909 qpair failed and we were unable to recover it. 
00:28:51.909 [2024-07-15 16:10:20.623898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.909 [2024-07-15 16:10:20.623907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.909 qpair failed and we were unable to recover it. 00:28:51.909 [2024-07-15 16:10:20.624009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.909 [2024-07-15 16:10:20.624019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.909 qpair failed and we were unable to recover it. 00:28:51.909 [2024-07-15 16:10:20.624141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.909 [2024-07-15 16:10:20.624151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.909 qpair failed and we were unable to recover it. 00:28:51.909 [2024-07-15 16:10:20.624262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.909 [2024-07-15 16:10:20.624272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.909 qpair failed and we were unable to recover it. 00:28:51.909 [2024-07-15 16:10:20.624387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.909 [2024-07-15 16:10:20.624397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.909 qpair failed and we were unable to recover it. 00:28:51.909 [2024-07-15 16:10:20.624574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.909 [2024-07-15 16:10:20.624584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.909 qpair failed and we were unable to recover it. 00:28:51.909 [2024-07-15 16:10:20.624703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.909 [2024-07-15 16:10:20.624713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.909 qpair failed and we were unable to recover it. 00:28:51.909 [2024-07-15 16:10:20.624935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.909 [2024-07-15 16:10:20.624946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.909 qpair failed and we were unable to recover it. 00:28:51.909 [2024-07-15 16:10:20.625136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.909 [2024-07-15 16:10:20.625146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.909 qpair failed and we were unable to recover it. 00:28:51.909 [2024-07-15 16:10:20.625328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.909 [2024-07-15 16:10:20.625338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.909 qpair failed and we were unable to recover it. 
00:28:51.909 [2024-07-15 16:10:20.625439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.909 [2024-07-15 16:10:20.625448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420
00:28:51.909 qpair failed and we were unable to recover it.
00:28:51.909 [... the identical three-entry sequence (posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111; nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it.) repeats continuously from 16:10:20.625619 through 16:10:20.665498 ...]
00:28:51.915 [2024-07-15 16:10:20.665664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.915 [2024-07-15 16:10:20.665673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420
00:28:51.915 qpair failed and we were unable to recover it.
00:28:51.915 [2024-07-15 16:10:20.665790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.916 [2024-07-15 16:10:20.665802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.916 qpair failed and we were unable to recover it. 00:28:51.916 [2024-07-15 16:10:20.665914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.916 [2024-07-15 16:10:20.665923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.916 qpair failed and we were unable to recover it. 00:28:51.916 [2024-07-15 16:10:20.666041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.916 [2024-07-15 16:10:20.666051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.916 qpair failed and we were unable to recover it. 00:28:51.916 [2024-07-15 16:10:20.666296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.916 [2024-07-15 16:10:20.666306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.916 qpair failed and we were unable to recover it. 00:28:51.916 [2024-07-15 16:10:20.666399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.916 [2024-07-15 16:10:20.666409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.916 qpair failed and we were unable to recover it. 00:28:51.916 [2024-07-15 16:10:20.666577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.916 [2024-07-15 16:10:20.666587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.916 qpair failed and we were unable to recover it. 00:28:51.916 [2024-07-15 16:10:20.666764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.916 [2024-07-15 16:10:20.666774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.916 qpair failed and we were unable to recover it. 00:28:51.916 [2024-07-15 16:10:20.666970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.916 [2024-07-15 16:10:20.666979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.916 qpair failed and we were unable to recover it. 00:28:51.916 [2024-07-15 16:10:20.667079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.916 [2024-07-15 16:10:20.667088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.916 qpair failed and we were unable to recover it. 00:28:51.916 [2024-07-15 16:10:20.667272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.916 [2024-07-15 16:10:20.667282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.916 qpair failed and we were unable to recover it. 
00:28:51.916 [2024-07-15 16:10:20.667441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.916 [2024-07-15 16:10:20.667451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.916 qpair failed and we were unable to recover it. 00:28:51.916 [2024-07-15 16:10:20.667576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.916 [2024-07-15 16:10:20.667586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.916 qpair failed and we were unable to recover it. 00:28:51.916 [2024-07-15 16:10:20.667698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.916 [2024-07-15 16:10:20.667707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.916 qpair failed and we were unable to recover it. 00:28:51.916 [2024-07-15 16:10:20.667976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.916 [2024-07-15 16:10:20.667987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.916 qpair failed and we were unable to recover it. 00:28:51.916 [2024-07-15 16:10:20.668229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.916 [2024-07-15 16:10:20.668240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.916 qpair failed and we were unable to recover it. 00:28:51.916 [2024-07-15 16:10:20.668371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.916 [2024-07-15 16:10:20.668381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.916 qpair failed and we were unable to recover it. 00:28:51.916 [2024-07-15 16:10:20.668536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.916 [2024-07-15 16:10:20.668545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.916 qpair failed and we were unable to recover it. 00:28:51.916 [2024-07-15 16:10:20.668642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.916 [2024-07-15 16:10:20.668652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.916 qpair failed and we were unable to recover it. 00:28:51.916 [2024-07-15 16:10:20.668766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.916 [2024-07-15 16:10:20.668776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.916 qpair failed and we were unable to recover it. 00:28:51.916 [2024-07-15 16:10:20.669021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.916 [2024-07-15 16:10:20.669031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.916 qpair failed and we were unable to recover it. 
00:28:51.916 [2024-07-15 16:10:20.669212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.916 [2024-07-15 16:10:20.669221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.916 qpair failed and we were unable to recover it. 00:28:51.916 [2024-07-15 16:10:20.669392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.916 [2024-07-15 16:10:20.669402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.916 qpair failed and we were unable to recover it. 00:28:51.916 [2024-07-15 16:10:20.669529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.916 [2024-07-15 16:10:20.669540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.916 qpair failed and we were unable to recover it. 00:28:51.916 [2024-07-15 16:10:20.669703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.916 [2024-07-15 16:10:20.669713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.916 qpair failed and we were unable to recover it. 00:28:51.916 [2024-07-15 16:10:20.670014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.916 [2024-07-15 16:10:20.670025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.916 qpair failed and we were unable to recover it. 00:28:51.916 [2024-07-15 16:10:20.670132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.916 [2024-07-15 16:10:20.670142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.916 qpair failed and we were unable to recover it. 00:28:51.916 [2024-07-15 16:10:20.670271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.916 [2024-07-15 16:10:20.670282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.916 qpair failed and we were unable to recover it. 00:28:51.916 [2024-07-15 16:10:20.670444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.916 [2024-07-15 16:10:20.670454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.916 qpair failed and we were unable to recover it. 00:28:51.917 [2024-07-15 16:10:20.670553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.917 [2024-07-15 16:10:20.670563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.917 qpair failed and we were unable to recover it. 00:28:51.917 [2024-07-15 16:10:20.670690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.917 [2024-07-15 16:10:20.670699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.917 qpair failed and we were unable to recover it. 
00:28:51.917 [2024-07-15 16:10:20.670869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.917 [2024-07-15 16:10:20.670881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.917 qpair failed and we were unable to recover it. 00:28:51.917 [2024-07-15 16:10:20.670993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.917 [2024-07-15 16:10:20.671004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.917 qpair failed and we were unable to recover it. 00:28:51.917 [2024-07-15 16:10:20.671122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.917 [2024-07-15 16:10:20.671134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.917 qpair failed and we were unable to recover it. 00:28:51.917 [2024-07-15 16:10:20.671324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.917 [2024-07-15 16:10:20.671334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.917 qpair failed and we were unable to recover it. 00:28:51.917 [2024-07-15 16:10:20.671435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.917 [2024-07-15 16:10:20.671446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.917 qpair failed and we were unable to recover it. 00:28:51.917 [2024-07-15 16:10:20.671618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.917 [2024-07-15 16:10:20.671628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.917 qpair failed and we were unable to recover it. 00:28:51.917 [2024-07-15 16:10:20.671801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.917 [2024-07-15 16:10:20.671810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.917 qpair failed and we were unable to recover it. 00:28:51.917 [2024-07-15 16:10:20.672061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.917 [2024-07-15 16:10:20.672071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.917 qpair failed and we were unable to recover it. 00:28:51.917 [2024-07-15 16:10:20.672234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.917 [2024-07-15 16:10:20.672244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.917 qpair failed and we were unable to recover it. 00:28:51.917 [2024-07-15 16:10:20.672366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.917 [2024-07-15 16:10:20.672376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.917 qpair failed and we were unable to recover it. 
00:28:51.917 [2024-07-15 16:10:20.672545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.917 [2024-07-15 16:10:20.672556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.917 qpair failed and we were unable to recover it. 00:28:51.917 [2024-07-15 16:10:20.672668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.917 [2024-07-15 16:10:20.672678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.917 qpair failed and we were unable to recover it. 00:28:51.917 [2024-07-15 16:10:20.672857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.917 [2024-07-15 16:10:20.672867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.917 qpair failed and we were unable to recover it. 00:28:51.917 [2024-07-15 16:10:20.673066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.917 [2024-07-15 16:10:20.673076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.917 qpair failed and we were unable to recover it. 00:28:51.917 [2024-07-15 16:10:20.673276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.917 [2024-07-15 16:10:20.673287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.917 qpair failed and we were unable to recover it. 00:28:51.917 [2024-07-15 16:10:20.673422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.917 [2024-07-15 16:10:20.673432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.917 qpair failed and we were unable to recover it. 00:28:51.917 [2024-07-15 16:10:20.673656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.917 [2024-07-15 16:10:20.673667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.917 qpair failed and we were unable to recover it. 00:28:51.917 [2024-07-15 16:10:20.673776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.917 [2024-07-15 16:10:20.673786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.917 qpair failed and we were unable to recover it. 00:28:51.917 [2024-07-15 16:10:20.673983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.917 [2024-07-15 16:10:20.673993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.917 qpair failed and we were unable to recover it. 00:28:51.917 [2024-07-15 16:10:20.674247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.917 [2024-07-15 16:10:20.674257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.917 qpair failed and we were unable to recover it. 
00:28:51.917 [2024-07-15 16:10:20.674483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.917 [2024-07-15 16:10:20.674493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.917 qpair failed and we were unable to recover it. 00:28:51.917 [2024-07-15 16:10:20.674668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.917 [2024-07-15 16:10:20.674678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.917 qpair failed and we were unable to recover it. 00:28:51.917 [2024-07-15 16:10:20.674860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.917 [2024-07-15 16:10:20.674870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.917 qpair failed and we were unable to recover it. 00:28:51.917 [2024-07-15 16:10:20.675096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.917 [2024-07-15 16:10:20.675105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.917 qpair failed and we were unable to recover it. 00:28:51.917 [2024-07-15 16:10:20.675311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.917 [2024-07-15 16:10:20.675322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.917 qpair failed and we were unable to recover it. 00:28:51.917 [2024-07-15 16:10:20.675496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.917 [2024-07-15 16:10:20.675506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.917 qpair failed and we were unable to recover it. 00:28:51.917 [2024-07-15 16:10:20.675662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.917 [2024-07-15 16:10:20.675672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.918 qpair failed and we were unable to recover it. 00:28:51.918 [2024-07-15 16:10:20.675801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.918 [2024-07-15 16:10:20.675810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.918 qpair failed and we were unable to recover it. 00:28:51.918 [2024-07-15 16:10:20.676064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.918 [2024-07-15 16:10:20.676074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.918 qpair failed and we were unable to recover it. 00:28:51.918 [2024-07-15 16:10:20.676193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.918 [2024-07-15 16:10:20.676203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.918 qpair failed and we were unable to recover it. 
[... further identical failures on tqpair=0x7ffa54000b90 from 16:10:20.676496 through 16:10:20.677776 ...]
00:28:51.918 [2024-07-15 16:10:20.678124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.918 [2024-07-15 16:10:20.678159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa5c000b90 with addr=10.0.0.2, port=4420
00:28:51.918 qpair failed and we were unable to recover it.
00:28:51.918 [2024-07-15 16:10:20.678415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.918 [2024-07-15 16:10:20.678443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1398ed0 with addr=10.0.0.2, port=4420
00:28:51.918 qpair failed and we were unable to recover it.
[... the tqpair=0x1398ed0 failure repeats from 16:10:20.678630 through 16:10:20.679541; from 16:10:20.679741 the failures resume on tqpair=0x7ffa54000b90 ...]
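errno = 111 in the posix_sock_create messages above is Linux's ECONNREFUSED: the target at 10.0.0.2 actively refused the TCP connection on port 4420 (the standard NVMe/TCP listen port), i.e. nothing was accepting connections there at that point in the run. A minimal stand-alone C check of the constant, independent of SPDK:

#include <errno.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
    /* On Linux, ECONNREFUSED is errno 111 ("Connection refused"),
     * matching the "connect() failed, errno = 111" lines in this log. */
    printf("ECONNREFUSED = %d (%s)\n", ECONNREFUSED, strerror(ECONNREFUSED));
    return 0;
}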
[... identical tqpair=0x7ffa54000b90 failures continue from 16:10:20.680511 through 16:10:20.696474 ...]
00:28:51.922 [2024-07-15 16:10:20.696571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.922 [2024-07-15 16:10:20.696581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420
00:28:51.922 qpair failed and we were unable to recover it.
00:28:51.922 [2024-07-15 16:10:20.696689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.922 [2024-07-15 16:10:20.696699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.922 qpair failed and we were unable to recover it. 00:28:51.922 [2024-07-15 16:10:20.696794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.922 [2024-07-15 16:10:20.696803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.922 qpair failed and we were unable to recover it. 00:28:51.922 [2024-07-15 16:10:20.696914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.922 [2024-07-15 16:10:20.696924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.922 qpair failed and we were unable to recover it. 00:28:51.922 [2024-07-15 16:10:20.697096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.922 [2024-07-15 16:10:20.697106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.922 qpair failed and we were unable to recover it. 00:28:51.922 [2024-07-15 16:10:20.697263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.922 [2024-07-15 16:10:20.697273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.922 qpair failed and we were unable to recover it. 00:28:51.922 [2024-07-15 16:10:20.697417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.922 [2024-07-15 16:10:20.697427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.922 qpair failed and we were unable to recover it. 00:28:51.922 [2024-07-15 16:10:20.697529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.922 [2024-07-15 16:10:20.697539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.922 qpair failed and we were unable to recover it. 00:28:51.922 [2024-07-15 16:10:20.697704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.922 [2024-07-15 16:10:20.697714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.922 qpair failed and we were unable to recover it. 00:28:51.922 [2024-07-15 16:10:20.697819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.922 [2024-07-15 16:10:20.697829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.922 qpair failed and we were unable to recover it. 00:28:51.922 [2024-07-15 16:10:20.697922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.922 [2024-07-15 16:10:20.697932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.922 qpair failed and we were unable to recover it. 
00:28:51.922 [2024-07-15 16:10:20.698021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.922 [2024-07-15 16:10:20.698031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.922 qpair failed and we were unable to recover it. 00:28:51.922 [2024-07-15 16:10:20.698128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.923 [2024-07-15 16:10:20.698138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.923 qpair failed and we were unable to recover it. 00:28:51.923 [2024-07-15 16:10:20.698235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.923 [2024-07-15 16:10:20.698245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.923 qpair failed and we were unable to recover it. 00:28:51.923 [2024-07-15 16:10:20.698409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.923 [2024-07-15 16:10:20.698419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.923 qpair failed and we were unable to recover it. 00:28:51.923 [2024-07-15 16:10:20.698510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.923 [2024-07-15 16:10:20.698520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.923 qpair failed and we were unable to recover it. 00:28:51.923 [2024-07-15 16:10:20.698623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.923 [2024-07-15 16:10:20.698632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.923 qpair failed and we were unable to recover it. 00:28:51.923 [2024-07-15 16:10:20.698703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.923 [2024-07-15 16:10:20.698713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.923 qpair failed and we were unable to recover it. 00:28:51.923 [2024-07-15 16:10:20.698823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.923 [2024-07-15 16:10:20.698834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.923 qpair failed and we were unable to recover it. 00:28:51.923 [2024-07-15 16:10:20.698932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.923 [2024-07-15 16:10:20.698941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.923 qpair failed and we were unable to recover it. 00:28:51.923 [2024-07-15 16:10:20.699043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.923 [2024-07-15 16:10:20.699053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.923 qpair failed and we were unable to recover it. 
00:28:51.923 [2024-07-15 16:10:20.699161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.923 [2024-07-15 16:10:20.699171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.923 qpair failed and we were unable to recover it. 00:28:51.923 [2024-07-15 16:10:20.699327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.923 [2024-07-15 16:10:20.699337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.923 qpair failed and we were unable to recover it. 00:28:51.923 [2024-07-15 16:10:20.699466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.923 [2024-07-15 16:10:20.699477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.923 qpair failed and we were unable to recover it. 00:28:51.923 [2024-07-15 16:10:20.699579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.923 [2024-07-15 16:10:20.699591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.923 qpair failed and we were unable to recover it. 00:28:51.923 [2024-07-15 16:10:20.699693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.923 [2024-07-15 16:10:20.699703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.923 qpair failed and we were unable to recover it. 00:28:51.923 [2024-07-15 16:10:20.699794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.923 [2024-07-15 16:10:20.699804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.923 qpair failed and we were unable to recover it. 00:28:51.923 [2024-07-15 16:10:20.699873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.923 [2024-07-15 16:10:20.699885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.923 qpair failed and we were unable to recover it. 00:28:51.923 [2024-07-15 16:10:20.700036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.923 [2024-07-15 16:10:20.700047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.923 qpair failed and we were unable to recover it. 00:28:51.923 [2024-07-15 16:10:20.700142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.923 [2024-07-15 16:10:20.700153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.923 qpair failed and we were unable to recover it. 00:28:51.923 [2024-07-15 16:10:20.700259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.923 [2024-07-15 16:10:20.700270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.923 qpair failed and we were unable to recover it. 
00:28:51.923 [2024-07-15 16:10:20.700481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.923 [2024-07-15 16:10:20.700491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.923 qpair failed and we were unable to recover it. 00:28:51.923 [2024-07-15 16:10:20.700658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.923 [2024-07-15 16:10:20.700668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.923 qpair failed and we were unable to recover it. 00:28:51.923 [2024-07-15 16:10:20.700756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.923 [2024-07-15 16:10:20.700766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.923 qpair failed and we were unable to recover it. 00:28:51.923 [2024-07-15 16:10:20.700887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.923 [2024-07-15 16:10:20.700897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.923 qpair failed and we were unable to recover it. 00:28:51.923 [2024-07-15 16:10:20.701057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.923 [2024-07-15 16:10:20.701066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.923 qpair failed and we were unable to recover it. 00:28:51.923 [2024-07-15 16:10:20.701168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.923 [2024-07-15 16:10:20.701179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.923 qpair failed and we were unable to recover it. 00:28:51.923 [2024-07-15 16:10:20.701273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.923 [2024-07-15 16:10:20.701285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.923 qpair failed and we were unable to recover it. 00:28:51.923 [2024-07-15 16:10:20.701459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.923 [2024-07-15 16:10:20.701469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.923 qpair failed and we were unable to recover it. 00:28:51.923 [2024-07-15 16:10:20.701580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.923 [2024-07-15 16:10:20.701589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.923 qpair failed and we were unable to recover it. 00:28:51.923 [2024-07-15 16:10:20.701703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.924 [2024-07-15 16:10:20.701713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.924 qpair failed and we were unable to recover it. 
00:28:51.924 [2024-07-15 16:10:20.701824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.924 [2024-07-15 16:10:20.701834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.924 qpair failed and we were unable to recover it. 00:28:51.924 [2024-07-15 16:10:20.701927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.924 [2024-07-15 16:10:20.701937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.924 qpair failed and we were unable to recover it. 00:28:51.924 [2024-07-15 16:10:20.702028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.924 [2024-07-15 16:10:20.702038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.924 qpair failed and we were unable to recover it. 00:28:51.924 [2024-07-15 16:10:20.702202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.924 [2024-07-15 16:10:20.702213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.924 qpair failed and we were unable to recover it. 00:28:51.924 [2024-07-15 16:10:20.702324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.924 [2024-07-15 16:10:20.702335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.924 qpair failed and we were unable to recover it. 00:28:51.924 [2024-07-15 16:10:20.702423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.924 [2024-07-15 16:10:20.702433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.924 qpair failed and we were unable to recover it. 00:28:51.924 [2024-07-15 16:10:20.702535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.924 [2024-07-15 16:10:20.702544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.924 qpair failed and we were unable to recover it. 00:28:51.924 [2024-07-15 16:10:20.702643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.924 [2024-07-15 16:10:20.702653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.924 qpair failed and we were unable to recover it. 00:28:51.924 [2024-07-15 16:10:20.702810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.924 [2024-07-15 16:10:20.702820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.924 qpair failed and we were unable to recover it. 00:28:51.924 [2024-07-15 16:10:20.702998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.924 [2024-07-15 16:10:20.703008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.924 qpair failed and we were unable to recover it. 
00:28:51.924 [2024-07-15 16:10:20.703118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.924 [2024-07-15 16:10:20.703128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.924 qpair failed and we were unable to recover it. 00:28:51.924 [2024-07-15 16:10:20.703233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.924 [2024-07-15 16:10:20.703244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.924 qpair failed and we were unable to recover it. 00:28:51.924 [2024-07-15 16:10:20.703353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.924 [2024-07-15 16:10:20.703363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.924 qpair failed and we were unable to recover it. 00:28:51.924 [2024-07-15 16:10:20.703475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.924 [2024-07-15 16:10:20.703484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.924 qpair failed and we were unable to recover it. 00:28:51.924 [2024-07-15 16:10:20.703709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.924 [2024-07-15 16:10:20.703719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.924 qpair failed and we were unable to recover it. 00:28:51.924 [2024-07-15 16:10:20.703881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.924 [2024-07-15 16:10:20.703892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.924 qpair failed and we were unable to recover it. 00:28:51.924 [2024-07-15 16:10:20.704004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.924 [2024-07-15 16:10:20.704014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.924 qpair failed and we were unable to recover it. 00:28:51.924 [2024-07-15 16:10:20.704102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.924 [2024-07-15 16:10:20.704114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.924 qpair failed and we were unable to recover it. 00:28:51.924 [2024-07-15 16:10:20.704222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.924 [2024-07-15 16:10:20.704237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.924 qpair failed and we were unable to recover it. 00:28:51.924 [2024-07-15 16:10:20.704338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.924 [2024-07-15 16:10:20.704348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.924 qpair failed and we were unable to recover it. 
00:28:51.924 [2024-07-15 16:10:20.704442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.924 [2024-07-15 16:10:20.704452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.924 qpair failed and we were unable to recover it. 00:28:51.924 [2024-07-15 16:10:20.704617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.924 [2024-07-15 16:10:20.704627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.924 qpair failed and we were unable to recover it. 00:28:51.924 [2024-07-15 16:10:20.704785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.924 [2024-07-15 16:10:20.704795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.924 qpair failed and we were unable to recover it. 00:28:51.924 [2024-07-15 16:10:20.704895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.924 [2024-07-15 16:10:20.704905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.924 qpair failed and we were unable to recover it. 00:28:51.924 [2024-07-15 16:10:20.705008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.924 [2024-07-15 16:10:20.705019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.924 qpair failed and we were unable to recover it. 00:28:51.924 [2024-07-15 16:10:20.705121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.924 [2024-07-15 16:10:20.705131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.924 qpair failed and we were unable to recover it. 00:28:51.924 [2024-07-15 16:10:20.705265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.924 [2024-07-15 16:10:20.705276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.924 qpair failed and we were unable to recover it. 00:28:51.924 [2024-07-15 16:10:20.705372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.924 [2024-07-15 16:10:20.705382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.924 qpair failed and we were unable to recover it. 00:28:51.925 [2024-07-15 16:10:20.705487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.925 [2024-07-15 16:10:20.705497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.925 qpair failed and we were unable to recover it. 00:28:51.925 [2024-07-15 16:10:20.705675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.925 [2024-07-15 16:10:20.705685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.925 qpair failed and we were unable to recover it. 
00:28:51.925 [2024-07-15 16:10:20.705785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.925 [2024-07-15 16:10:20.705795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.925 qpair failed and we were unable to recover it. 00:28:51.925 [2024-07-15 16:10:20.705978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.925 [2024-07-15 16:10:20.705988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.925 qpair failed and we were unable to recover it. 00:28:51.925 [2024-07-15 16:10:20.706097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.925 [2024-07-15 16:10:20.706109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.925 qpair failed and we were unable to recover it. 00:28:51.925 [2024-07-15 16:10:20.706437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.925 [2024-07-15 16:10:20.706447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.925 qpair failed and we were unable to recover it. 00:28:51.925 [2024-07-15 16:10:20.706555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.925 [2024-07-15 16:10:20.706564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.925 qpair failed and we were unable to recover it. 00:28:51.925 [2024-07-15 16:10:20.706677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.925 [2024-07-15 16:10:20.706688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.925 qpair failed and we were unable to recover it. 00:28:51.925 [2024-07-15 16:10:20.706784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.946 [2024-07-15 16:10:20.706794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.946 qpair failed and we were unable to recover it. 00:28:51.947 [2024-07-15 16:10:20.706955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.947 [2024-07-15 16:10:20.706965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.947 qpair failed and we were unable to recover it. 00:28:51.947 [2024-07-15 16:10:20.707064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.947 [2024-07-15 16:10:20.707074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.947 qpair failed and we were unable to recover it. 00:28:51.947 [2024-07-15 16:10:20.707178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.947 [2024-07-15 16:10:20.707187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.947 qpair failed and we were unable to recover it. 
00:28:51.947 [2024-07-15 16:10:20.707352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.947 [2024-07-15 16:10:20.707363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.947 qpair failed and we were unable to recover it. 00:28:51.947 [2024-07-15 16:10:20.707476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.947 [2024-07-15 16:10:20.707486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.947 qpair failed and we were unable to recover it. 00:28:51.947 [2024-07-15 16:10:20.707646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.947 [2024-07-15 16:10:20.707656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.947 qpair failed and we were unable to recover it. 00:28:51.947 [2024-07-15 16:10:20.707757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.947 [2024-07-15 16:10:20.707767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.947 qpair failed and we were unable to recover it. 00:28:51.947 [2024-07-15 16:10:20.707923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.947 [2024-07-15 16:10:20.707933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.947 qpair failed and we were unable to recover it. 00:28:51.947 [2024-07-15 16:10:20.708090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.947 [2024-07-15 16:10:20.708100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.947 qpair failed and we were unable to recover it. 00:28:51.947 [2024-07-15 16:10:20.708210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.947 [2024-07-15 16:10:20.708220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.947 qpair failed and we were unable to recover it. 00:28:51.947 [2024-07-15 16:10:20.708377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.947 [2024-07-15 16:10:20.708387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.947 qpair failed and we were unable to recover it. 00:28:51.947 [2024-07-15 16:10:20.708575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.947 [2024-07-15 16:10:20.708585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.947 qpair failed and we were unable to recover it. 00:28:51.947 [2024-07-15 16:10:20.708756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.947 [2024-07-15 16:10:20.708767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.947 qpair failed and we were unable to recover it. 
00:28:51.947 [2024-07-15 16:10:20.708870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.947 [2024-07-15 16:10:20.708880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.947 qpair failed and we were unable to recover it. 00:28:51.947 [2024-07-15 16:10:20.708996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.947 [2024-07-15 16:10:20.709006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.947 qpair failed and we were unable to recover it. 00:28:51.947 [2024-07-15 16:10:20.709135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.947 [2024-07-15 16:10:20.709146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.947 qpair failed and we were unable to recover it. 00:28:51.947 [2024-07-15 16:10:20.709239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.947 [2024-07-15 16:10:20.709249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.947 qpair failed and we were unable to recover it. 00:28:51.947 [2024-07-15 16:10:20.709340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.947 [2024-07-15 16:10:20.709350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.947 qpair failed and we were unable to recover it. 00:28:51.947 [2024-07-15 16:10:20.709450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.947 [2024-07-15 16:10:20.709459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.947 qpair failed and we were unable to recover it. 00:28:51.947 [2024-07-15 16:10:20.709562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.947 [2024-07-15 16:10:20.709572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.947 qpair failed and we were unable to recover it. 00:28:51.947 [2024-07-15 16:10:20.709683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.947 [2024-07-15 16:10:20.709696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.947 qpair failed and we were unable to recover it. 00:28:51.947 [2024-07-15 16:10:20.709786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.947 [2024-07-15 16:10:20.709796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.947 qpair failed and we were unable to recover it. 00:28:51.948 [2024-07-15 16:10:20.709892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.948 [2024-07-15 16:10:20.709902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.948 qpair failed and we were unable to recover it. 
00:28:51.948 [2024-07-15 16:10:20.710154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.948 [2024-07-15 16:10:20.710164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.948 qpair failed and we were unable to recover it. 00:28:51.948 [2024-07-15 16:10:20.710252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.948 [2024-07-15 16:10:20.710262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.948 qpair failed and we were unable to recover it. 00:28:51.948 [2024-07-15 16:10:20.710517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.948 [2024-07-15 16:10:20.710527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.948 qpair failed and we were unable to recover it. 00:28:51.948 [2024-07-15 16:10:20.710657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.948 [2024-07-15 16:10:20.710667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.948 qpair failed and we were unable to recover it. 00:28:51.948 [2024-07-15 16:10:20.710763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.948 [2024-07-15 16:10:20.710773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.948 qpair failed and we were unable to recover it. 00:28:51.948 [2024-07-15 16:10:20.710949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.948 [2024-07-15 16:10:20.710959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.948 qpair failed and we were unable to recover it. 00:28:51.948 [2024-07-15 16:10:20.711118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.948 [2024-07-15 16:10:20.711127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.948 qpair failed and we were unable to recover it. 00:28:51.948 [2024-07-15 16:10:20.711222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.948 [2024-07-15 16:10:20.711236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.948 qpair failed and we were unable to recover it. 00:28:51.948 [2024-07-15 16:10:20.711382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.948 [2024-07-15 16:10:20.711393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.948 qpair failed and we were unable to recover it. 00:28:51.948 [2024-07-15 16:10:20.711622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.948 [2024-07-15 16:10:20.711632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.948 qpair failed and we were unable to recover it. 
00:28:51.948 [2024-07-15 16:10:20.711736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.948 [2024-07-15 16:10:20.711746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.948 qpair failed and we were unable to recover it. 00:28:51.948 [2024-07-15 16:10:20.711856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.948 [2024-07-15 16:10:20.711866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.948 qpair failed and we were unable to recover it. 00:28:51.948 [2024-07-15 16:10:20.711999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.948 [2024-07-15 16:10:20.712009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.948 qpair failed and we were unable to recover it. 00:28:51.948 [2024-07-15 16:10:20.712171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.948 [2024-07-15 16:10:20.712180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.948 qpair failed and we were unable to recover it. 00:28:51.948 [2024-07-15 16:10:20.712344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.948 [2024-07-15 16:10:20.712354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.948 qpair failed and we were unable to recover it. 00:28:51.948 [2024-07-15 16:10:20.712454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.948 [2024-07-15 16:10:20.712463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.948 qpair failed and we were unable to recover it. 00:28:51.948 [2024-07-15 16:10:20.712570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.948 [2024-07-15 16:10:20.712579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.948 qpair failed and we were unable to recover it. 00:28:51.948 [2024-07-15 16:10:20.712808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.948 [2024-07-15 16:10:20.712819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.948 qpair failed and we were unable to recover it. 00:28:51.948 [2024-07-15 16:10:20.712982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.948 [2024-07-15 16:10:20.712992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.948 qpair failed and we were unable to recover it. 00:28:51.948 [2024-07-15 16:10:20.713151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.948 [2024-07-15 16:10:20.713161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.948 qpair failed and we were unable to recover it. 
00:28:51.948 [2024-07-15 16:10:20.713405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.948 [2024-07-15 16:10:20.713415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.948 qpair failed and we were unable to recover it. 00:28:51.948 [2024-07-15 16:10:20.713578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.948 [2024-07-15 16:10:20.713587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.948 qpair failed and we were unable to recover it. 00:28:51.948 [2024-07-15 16:10:20.713666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.948 [2024-07-15 16:10:20.713676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.948 qpair failed and we were unable to recover it. 00:28:51.948 [2024-07-15 16:10:20.713947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.948 [2024-07-15 16:10:20.713957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.948 qpair failed and we were unable to recover it. 00:28:51.948 [2024-07-15 16:10:20.714078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.948 [2024-07-15 16:10:20.714088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.948 qpair failed and we were unable to recover it. 00:28:51.948 [2024-07-15 16:10:20.714190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.948 [2024-07-15 16:10:20.714199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.948 qpair failed and we were unable to recover it. 00:28:51.948 [2024-07-15 16:10:20.714325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.948 [2024-07-15 16:10:20.714334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.948 qpair failed and we were unable to recover it. 00:28:51.948 [2024-07-15 16:10:20.714539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.948 [2024-07-15 16:10:20.714549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.948 qpair failed and we were unable to recover it. 00:28:51.948 [2024-07-15 16:10:20.714723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.948 [2024-07-15 16:10:20.714733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.948 qpair failed and we were unable to recover it. 00:28:51.948 [2024-07-15 16:10:20.714840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.948 [2024-07-15 16:10:20.714849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.948 qpair failed and we were unable to recover it. 
00:28:51.948 [2024-07-15 16:10:20.715007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.948 [2024-07-15 16:10:20.715017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.948 qpair failed and we were unable to recover it. 00:28:51.949 [2024-07-15 16:10:20.715122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.949 [2024-07-15 16:10:20.715132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.949 qpair failed and we were unable to recover it. 00:28:51.949 [2024-07-15 16:10:20.715257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.949 [2024-07-15 16:10:20.715269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.949 qpair failed and we were unable to recover it. 00:28:51.949 [2024-07-15 16:10:20.715360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.949 [2024-07-15 16:10:20.715370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.949 qpair failed and we were unable to recover it. 00:28:51.949 [2024-07-15 16:10:20.715460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.949 [2024-07-15 16:10:20.715469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.949 qpair failed and we were unable to recover it. 00:28:51.949 [2024-07-15 16:10:20.715626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.949 [2024-07-15 16:10:20.715635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.949 qpair failed and we were unable to recover it. 00:28:51.949 [2024-07-15 16:10:20.715744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.949 [2024-07-15 16:10:20.715754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.949 qpair failed and we were unable to recover it. 00:28:51.949 [2024-07-15 16:10:20.715911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.949 [2024-07-15 16:10:20.715922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.949 qpair failed and we were unable to recover it. 00:28:51.949 [2024-07-15 16:10:20.716049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.949 [2024-07-15 16:10:20.716058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.949 qpair failed and we were unable to recover it. 00:28:51.949 [2024-07-15 16:10:20.716245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.949 [2024-07-15 16:10:20.716255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.949 qpair failed and we were unable to recover it. 
00:28:51.955 [2024-07-15 16:10:20.744990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.955 [2024-07-15 16:10:20.744999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.955 qpair failed and we were unable to recover it. 00:28:51.955 [2024-07-15 16:10:20.745234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.955 [2024-07-15 16:10:20.745244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.955 qpair failed and we were unable to recover it. 00:28:51.955 [2024-07-15 16:10:20.745389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.955 [2024-07-15 16:10:20.745399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.955 qpair failed and we were unable to recover it. 00:28:51.955 [2024-07-15 16:10:20.745542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.955 [2024-07-15 16:10:20.745551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.955 qpair failed and we were unable to recover it. 00:28:51.955 [2024-07-15 16:10:20.745722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.955 [2024-07-15 16:10:20.745731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.955 qpair failed and we were unable to recover it. 00:28:51.955 [2024-07-15 16:10:20.745856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.955 [2024-07-15 16:10:20.745866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.955 qpair failed and we were unable to recover it. 00:28:51.955 [2024-07-15 16:10:20.746051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.955 [2024-07-15 16:10:20.746062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.955 qpair failed and we were unable to recover it. 00:28:51.955 [2024-07-15 16:10:20.746214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.955 [2024-07-15 16:10:20.746235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.955 qpair failed and we were unable to recover it. 00:28:51.955 [2024-07-15 16:10:20.746316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.955 [2024-07-15 16:10:20.746326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.955 qpair failed and we were unable to recover it. 00:28:51.955 [2024-07-15 16:10:20.746572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.955 [2024-07-15 16:10:20.746583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.955 qpair failed and we were unable to recover it. 
00:28:51.955 [2024-07-15 16:10:20.746710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.955 [2024-07-15 16:10:20.746720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.955 qpair failed and we were unable to recover it. 00:28:51.955 [2024-07-15 16:10:20.746926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.955 [2024-07-15 16:10:20.746936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.955 qpair failed and we were unable to recover it. 00:28:51.955 [2024-07-15 16:10:20.747169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.955 [2024-07-15 16:10:20.747179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.955 qpair failed and we were unable to recover it. 00:28:51.955 [2024-07-15 16:10:20.747294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.955 [2024-07-15 16:10:20.747304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.955 qpair failed and we were unable to recover it. 00:28:51.955 [2024-07-15 16:10:20.747496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.955 [2024-07-15 16:10:20.747506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.956 qpair failed and we were unable to recover it. 00:28:51.956 [2024-07-15 16:10:20.747726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.956 [2024-07-15 16:10:20.747736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.956 qpair failed and we were unable to recover it. 00:28:51.956 [2024-07-15 16:10:20.747900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.956 [2024-07-15 16:10:20.747909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.956 qpair failed and we were unable to recover it. 00:28:51.956 [2024-07-15 16:10:20.748116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.956 [2024-07-15 16:10:20.748125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.956 qpair failed and we were unable to recover it. 00:28:51.956 [2024-07-15 16:10:20.748304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.956 [2024-07-15 16:10:20.748314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.956 qpair failed and we were unable to recover it. 00:28:51.956 [2024-07-15 16:10:20.748434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.956 [2024-07-15 16:10:20.748444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.956 qpair failed and we were unable to recover it. 
00:28:51.956 [2024-07-15 16:10:20.748597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.956 [2024-07-15 16:10:20.748608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.956 qpair failed and we were unable to recover it. 00:28:51.956 [2024-07-15 16:10:20.748734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.956 [2024-07-15 16:10:20.748744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.956 qpair failed and we were unable to recover it. 00:28:51.956 [2024-07-15 16:10:20.748978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.956 [2024-07-15 16:10:20.748988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.956 qpair failed and we were unable to recover it. 00:28:51.956 [2024-07-15 16:10:20.749157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.956 [2024-07-15 16:10:20.749167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.956 qpair failed and we were unable to recover it. 00:28:51.956 [2024-07-15 16:10:20.749304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.956 [2024-07-15 16:10:20.749315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.956 qpair failed and we were unable to recover it. 00:28:51.956 [2024-07-15 16:10:20.749441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.956 [2024-07-15 16:10:20.749452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.956 qpair failed and we were unable to recover it. 00:28:51.956 [2024-07-15 16:10:20.749624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.956 [2024-07-15 16:10:20.749634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.956 qpair failed and we were unable to recover it. 00:28:51.956 [2024-07-15 16:10:20.749742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.956 [2024-07-15 16:10:20.749752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.956 qpair failed and we were unable to recover it. 00:28:51.956 [2024-07-15 16:10:20.749876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.956 [2024-07-15 16:10:20.749886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.956 qpair failed and we were unable to recover it. 00:28:51.956 [2024-07-15 16:10:20.749980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.956 [2024-07-15 16:10:20.749990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.956 qpair failed and we were unable to recover it. 
00:28:51.956 [2024-07-15 16:10:20.750158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.956 [2024-07-15 16:10:20.750167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.956 qpair failed and we were unable to recover it. 00:28:51.956 [2024-07-15 16:10:20.750398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.956 [2024-07-15 16:10:20.750408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.956 qpair failed and we were unable to recover it. 00:28:51.956 [2024-07-15 16:10:20.750583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.956 [2024-07-15 16:10:20.750593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.956 qpair failed and we were unable to recover it. 00:28:51.956 [2024-07-15 16:10:20.750791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.956 [2024-07-15 16:10:20.750801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.956 qpair failed and we were unable to recover it. 00:28:51.956 [2024-07-15 16:10:20.750898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.956 [2024-07-15 16:10:20.750910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.956 qpair failed and we were unable to recover it. 00:28:51.956 [2024-07-15 16:10:20.751086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.956 [2024-07-15 16:10:20.751096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.956 qpair failed and we were unable to recover it. 00:28:51.956 [2024-07-15 16:10:20.751264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.956 [2024-07-15 16:10:20.751273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.956 qpair failed and we were unable to recover it. 00:28:51.956 [2024-07-15 16:10:20.751377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.956 [2024-07-15 16:10:20.751386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.956 qpair failed and we were unable to recover it. 00:28:51.956 [2024-07-15 16:10:20.751583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.956 [2024-07-15 16:10:20.751593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.956 qpair failed and we were unable to recover it. 00:28:51.956 [2024-07-15 16:10:20.751830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.956 [2024-07-15 16:10:20.751839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.956 qpair failed and we were unable to recover it. 
00:28:51.956 [2024-07-15 16:10:20.752042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.956 [2024-07-15 16:10:20.752052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.956 qpair failed and we were unable to recover it. 00:28:51.956 [2024-07-15 16:10:20.752293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.956 [2024-07-15 16:10:20.752303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.956 qpair failed and we were unable to recover it. 00:28:51.956 [2024-07-15 16:10:20.752501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.956 [2024-07-15 16:10:20.752510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.956 qpair failed and we were unable to recover it. 00:28:51.956 [2024-07-15 16:10:20.752614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.956 [2024-07-15 16:10:20.752623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.956 qpair failed and we were unable to recover it. 00:28:51.956 [2024-07-15 16:10:20.752792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.956 [2024-07-15 16:10:20.752801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.956 qpair failed and we were unable to recover it. 00:28:51.956 [2024-07-15 16:10:20.752979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.956 [2024-07-15 16:10:20.752988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.956 qpair failed and we were unable to recover it. 00:28:51.956 [2024-07-15 16:10:20.753196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.956 [2024-07-15 16:10:20.753206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.956 qpair failed and we were unable to recover it. 00:28:51.957 [2024-07-15 16:10:20.753380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.957 [2024-07-15 16:10:20.753391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.957 qpair failed and we were unable to recover it. 00:28:51.957 [2024-07-15 16:10:20.753501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.957 [2024-07-15 16:10:20.753511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.957 qpair failed and we were unable to recover it. 00:28:51.957 [2024-07-15 16:10:20.753669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.957 [2024-07-15 16:10:20.753679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.957 qpair failed and we were unable to recover it. 
00:28:51.957 [2024-07-15 16:10:20.753870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.957 [2024-07-15 16:10:20.753880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.957 qpair failed and we were unable to recover it. 00:28:51.957 [2024-07-15 16:10:20.754048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.957 [2024-07-15 16:10:20.754058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.957 qpair failed and we were unable to recover it. 00:28:51.957 [2024-07-15 16:10:20.754163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.957 [2024-07-15 16:10:20.754172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.957 qpair failed and we were unable to recover it. 00:28:51.957 [2024-07-15 16:10:20.754395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.957 [2024-07-15 16:10:20.754405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.957 qpair failed and we were unable to recover it. 00:28:51.957 [2024-07-15 16:10:20.754573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.957 [2024-07-15 16:10:20.754582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.957 qpair failed and we were unable to recover it. 00:28:51.957 [2024-07-15 16:10:20.754693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.957 [2024-07-15 16:10:20.754702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.957 qpair failed and we were unable to recover it. 00:28:51.957 [2024-07-15 16:10:20.754974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.957 [2024-07-15 16:10:20.754985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.957 qpair failed and we were unable to recover it. 00:28:51.957 [2024-07-15 16:10:20.755076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.957 [2024-07-15 16:10:20.755086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.957 qpair failed and we were unable to recover it. 00:28:51.957 [2024-07-15 16:10:20.755278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.957 [2024-07-15 16:10:20.755289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.957 qpair failed and we were unable to recover it. 00:28:51.957 [2024-07-15 16:10:20.755510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.957 [2024-07-15 16:10:20.755519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.957 qpair failed and we were unable to recover it. 
00:28:51.957 [2024-07-15 16:10:20.755626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.957 [2024-07-15 16:10:20.755636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.957 qpair failed and we were unable to recover it. 00:28:51.957 [2024-07-15 16:10:20.755789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.957 [2024-07-15 16:10:20.755799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.957 qpair failed and we were unable to recover it. 00:28:51.957 [2024-07-15 16:10:20.756012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.957 [2024-07-15 16:10:20.756022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.957 qpair failed and we were unable to recover it. 00:28:51.957 [2024-07-15 16:10:20.756192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.957 [2024-07-15 16:10:20.756201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.957 qpair failed and we were unable to recover it. 00:28:51.957 [2024-07-15 16:10:20.756373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.957 [2024-07-15 16:10:20.756383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.957 qpair failed and we were unable to recover it. 00:28:51.957 [2024-07-15 16:10:20.756548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.957 [2024-07-15 16:10:20.756558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.957 qpair failed and we were unable to recover it. 00:28:51.957 [2024-07-15 16:10:20.756681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.957 [2024-07-15 16:10:20.756690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.957 qpair failed and we were unable to recover it. 00:28:51.957 [2024-07-15 16:10:20.756912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.957 [2024-07-15 16:10:20.756921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.957 qpair failed and we were unable to recover it. 00:28:51.957 [2024-07-15 16:10:20.757193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.957 [2024-07-15 16:10:20.757204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.957 qpair failed and we were unable to recover it. 00:28:51.957 [2024-07-15 16:10:20.757375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.957 [2024-07-15 16:10:20.757385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.957 qpair failed and we were unable to recover it. 
00:28:51.957 [2024-07-15 16:10:20.757555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.957 [2024-07-15 16:10:20.757566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.957 qpair failed and we were unable to recover it. 00:28:51.957 [2024-07-15 16:10:20.757743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.957 [2024-07-15 16:10:20.757753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.957 qpair failed and we were unable to recover it. 00:28:51.957 [2024-07-15 16:10:20.757938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.957 [2024-07-15 16:10:20.757947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.957 qpair failed and we were unable to recover it. 00:28:51.957 [2024-07-15 16:10:20.758102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.957 [2024-07-15 16:10:20.758112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.957 qpair failed and we were unable to recover it. 00:28:51.957 [2024-07-15 16:10:20.758298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.957 [2024-07-15 16:10:20.758311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.957 qpair failed and we were unable to recover it. 00:28:51.957 [2024-07-15 16:10:20.758533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.957 [2024-07-15 16:10:20.758543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.957 qpair failed and we were unable to recover it. 00:28:51.957 [2024-07-15 16:10:20.758674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.957 [2024-07-15 16:10:20.758683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.957 qpair failed and we were unable to recover it. 00:28:51.958 [2024-07-15 16:10:20.758840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.958 [2024-07-15 16:10:20.758849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.958 qpair failed and we were unable to recover it. 00:28:51.958 [2024-07-15 16:10:20.759097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.958 [2024-07-15 16:10:20.759107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.958 qpair failed and we were unable to recover it. 00:28:51.958 [2024-07-15 16:10:20.759299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.958 [2024-07-15 16:10:20.759309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.958 qpair failed and we were unable to recover it. 
00:28:51.958 [2024-07-15 16:10:20.759462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.958 [2024-07-15 16:10:20.759472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.958 qpair failed and we were unable to recover it. 00:28:51.958 [2024-07-15 16:10:20.759651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.958 [2024-07-15 16:10:20.759660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.958 qpair failed and we were unable to recover it. 00:28:51.958 [2024-07-15 16:10:20.759786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.958 [2024-07-15 16:10:20.759796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.958 qpair failed and we were unable to recover it. 00:28:51.958 [2024-07-15 16:10:20.760006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.958 [2024-07-15 16:10:20.760016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.958 qpair failed and we were unable to recover it. 00:28:51.958 [2024-07-15 16:10:20.760238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.958 [2024-07-15 16:10:20.760247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.958 qpair failed and we were unable to recover it. 00:28:51.958 [2024-07-15 16:10:20.760417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.958 [2024-07-15 16:10:20.760427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.958 qpair failed and we were unable to recover it. 00:28:51.958 [2024-07-15 16:10:20.760585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.958 [2024-07-15 16:10:20.760595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.958 qpair failed and we were unable to recover it. 00:28:51.958 [2024-07-15 16:10:20.760708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.958 [2024-07-15 16:10:20.760718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.958 qpair failed and we were unable to recover it. 00:28:51.958 [2024-07-15 16:10:20.760916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.958 [2024-07-15 16:10:20.760926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.958 qpair failed and we were unable to recover it. 00:28:51.958 [2024-07-15 16:10:20.761083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.958 [2024-07-15 16:10:20.761092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.958 qpair failed and we were unable to recover it. 
00:28:51.958 [2024-07-15 16:10:20.761208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.958 [2024-07-15 16:10:20.761218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.958 qpair failed and we were unable to recover it. 00:28:51.958 [2024-07-15 16:10:20.761352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.958 [2024-07-15 16:10:20.761362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.958 qpair failed and we were unable to recover it. 00:28:51.958 [2024-07-15 16:10:20.761535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.958 [2024-07-15 16:10:20.761544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.958 qpair failed and we were unable to recover it. 00:28:51.958 [2024-07-15 16:10:20.761769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.958 [2024-07-15 16:10:20.761778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.958 qpair failed and we were unable to recover it. 00:28:51.958 [2024-07-15 16:10:20.762058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.958 [2024-07-15 16:10:20.762067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.958 qpair failed and we were unable to recover it. 00:28:51.958 [2024-07-15 16:10:20.762178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.958 [2024-07-15 16:10:20.762187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.958 qpair failed and we were unable to recover it. 00:28:51.958 [2024-07-15 16:10:20.762351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.958 [2024-07-15 16:10:20.762361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.958 qpair failed and we were unable to recover it. 00:28:51.958 [2024-07-15 16:10:20.762535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.958 [2024-07-15 16:10:20.762545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.958 qpair failed and we were unable to recover it. 00:28:51.958 [2024-07-15 16:10:20.762721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.958 [2024-07-15 16:10:20.762731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.958 qpair failed and we were unable to recover it. 00:28:51.958 [2024-07-15 16:10:20.762854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.958 [2024-07-15 16:10:20.762863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.958 qpair failed and we were unable to recover it. 
00:28:51.958 [2024-07-15 16:10:20.763147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.959 [2024-07-15 16:10:20.763157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.959 qpair failed and we were unable to recover it. 00:28:51.959 [2024-07-15 16:10:20.763276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.959 [2024-07-15 16:10:20.763287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.959 qpair failed and we were unable to recover it. 00:28:51.959 [2024-07-15 16:10:20.763421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.959 [2024-07-15 16:10:20.763431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.959 qpair failed and we were unable to recover it. 00:28:51.959 [2024-07-15 16:10:20.763653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.959 [2024-07-15 16:10:20.763662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.959 qpair failed and we were unable to recover it. 00:28:51.959 [2024-07-15 16:10:20.763954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.959 [2024-07-15 16:10:20.763964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.959 qpair failed and we were unable to recover it. 00:28:51.959 [2024-07-15 16:10:20.764190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.959 [2024-07-15 16:10:20.764199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.959 qpair failed and we were unable to recover it. 00:28:51.959 [2024-07-15 16:10:20.764347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.959 [2024-07-15 16:10:20.764357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.959 qpair failed and we were unable to recover it. 00:28:51.959 [2024-07-15 16:10:20.764590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.959 [2024-07-15 16:10:20.764600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.959 qpair failed and we were unable to recover it. 00:28:51.959 [2024-07-15 16:10:20.764809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.959 [2024-07-15 16:10:20.764818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.959 qpair failed and we were unable to recover it. 00:28:51.959 [2024-07-15 16:10:20.765041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.959 [2024-07-15 16:10:20.765051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.959 qpair failed and we were unable to recover it. 
00:28:51.959 [2024-07-15 16:10:20.765227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.959 [2024-07-15 16:10:20.765238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.959 qpair failed and we were unable to recover it. 00:28:51.959 [2024-07-15 16:10:20.765375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.959 [2024-07-15 16:10:20.765384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.959 qpair failed and we were unable to recover it. 00:28:51.959 [2024-07-15 16:10:20.765632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.959 [2024-07-15 16:10:20.765641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.959 qpair failed and we were unable to recover it. 00:28:51.959 [2024-07-15 16:10:20.765800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.959 [2024-07-15 16:10:20.765810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.959 qpair failed and we were unable to recover it. 00:28:51.959 [2024-07-15 16:10:20.766070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.959 [2024-07-15 16:10:20.766082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.959 qpair failed and we were unable to recover it. 00:28:51.959 [2024-07-15 16:10:20.766308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.959 [2024-07-15 16:10:20.766319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.959 qpair failed and we were unable to recover it. 00:28:51.959 [2024-07-15 16:10:20.766516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.959 [2024-07-15 16:10:20.766526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.959 qpair failed and we were unable to recover it. 00:28:51.959 [2024-07-15 16:10:20.766643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.959 [2024-07-15 16:10:20.766653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.959 qpair failed and we were unable to recover it. 00:28:51.959 [2024-07-15 16:10:20.766896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.959 [2024-07-15 16:10:20.766905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.959 qpair failed and we were unable to recover it. 00:28:51.959 [2024-07-15 16:10:20.767126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.959 [2024-07-15 16:10:20.767136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.959 qpair failed and we were unable to recover it. 
00:28:51.959 [2024-07-15 16:10:20.767422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.959 [2024-07-15 16:10:20.767432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.959 qpair failed and we were unable to recover it. 00:28:51.959 [2024-07-15 16:10:20.767552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.959 [2024-07-15 16:10:20.767562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.959 qpair failed and we were unable to recover it. 00:28:51.959 [2024-07-15 16:10:20.767696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.959 [2024-07-15 16:10:20.767706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.959 qpair failed and we were unable to recover it. 00:28:51.959 [2024-07-15 16:10:20.767962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.959 [2024-07-15 16:10:20.767972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.959 qpair failed and we were unable to recover it. 00:28:51.959 [2024-07-15 16:10:20.768074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.959 [2024-07-15 16:10:20.768083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.959 qpair failed and we were unable to recover it. 00:28:51.959 [2024-07-15 16:10:20.768248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.959 [2024-07-15 16:10:20.768258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.959 qpair failed and we were unable to recover it. 00:28:51.959 [2024-07-15 16:10:20.768497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.959 [2024-07-15 16:10:20.768507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.959 qpair failed and we were unable to recover it. 00:28:51.959 [2024-07-15 16:10:20.768664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.959 [2024-07-15 16:10:20.768674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.959 qpair failed and we were unable to recover it. 00:28:51.959 [2024-07-15 16:10:20.768779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.959 [2024-07-15 16:10:20.768789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.959 qpair failed and we were unable to recover it. 00:28:51.959 [2024-07-15 16:10:20.769054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.959 [2024-07-15 16:10:20.769063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.959 qpair failed and we were unable to recover it. 
00:28:51.959 [2024-07-15 16:10:20.769237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.959 [2024-07-15 16:10:20.769247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.959 qpair failed and we were unable to recover it. 00:28:51.959 [2024-07-15 16:10:20.769364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.960 [2024-07-15 16:10:20.769374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.960 qpair failed and we were unable to recover it. 00:28:51.960 [2024-07-15 16:10:20.769486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.960 [2024-07-15 16:10:20.769495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.960 qpair failed and we were unable to recover it. 00:28:51.960 [2024-07-15 16:10:20.769608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.960 [2024-07-15 16:10:20.769618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.960 qpair failed and we were unable to recover it. 00:28:51.960 [2024-07-15 16:10:20.769717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.960 [2024-07-15 16:10:20.769726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.960 qpair failed and we were unable to recover it. 00:28:51.960 [2024-07-15 16:10:20.769872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.960 [2024-07-15 16:10:20.769881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.960 qpair failed and we were unable to recover it. 00:28:51.960 [2024-07-15 16:10:20.770068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.960 [2024-07-15 16:10:20.770078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.960 qpair failed and we were unable to recover it. 00:28:51.960 [2024-07-15 16:10:20.770263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.960 [2024-07-15 16:10:20.770274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.960 qpair failed and we were unable to recover it. 00:28:51.960 [2024-07-15 16:10:20.770374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.960 [2024-07-15 16:10:20.770383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.960 qpair failed and we were unable to recover it. 00:28:51.960 [2024-07-15 16:10:20.770561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.960 [2024-07-15 16:10:20.770571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:51.960 qpair failed and we were unable to recover it. 
00:28:51.960 [2024-07-15 16:10:20.770674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.960 [2024-07-15 16:10:20.770684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420
00:28:51.960 qpair failed and we were unable to recover it.
[... the same three-line sequence (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it.") repeats continuously from 16:10:20.770674 through 16:10:20.809731 ...]
00:28:52.231 [2024-07-15 16:10:20.809720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:52.231 [2024-07-15 16:10:20.809731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420
00:28:52.231 qpair failed and we were unable to recover it.
00:28:52.231 [2024-07-15 16:10:20.809840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.231 [2024-07-15 16:10:20.809850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:52.231 qpair failed and we were unable to recover it. 00:28:52.231 [2024-07-15 16:10:20.810013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.231 [2024-07-15 16:10:20.810023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:52.231 qpair failed and we were unable to recover it. 00:28:52.231 [2024-07-15 16:10:20.810205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.231 [2024-07-15 16:10:20.810215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:52.231 qpair failed and we were unable to recover it. 00:28:52.231 [2024-07-15 16:10:20.810358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.231 [2024-07-15 16:10:20.810368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:52.231 qpair failed and we were unable to recover it. 00:28:52.231 [2024-07-15 16:10:20.810563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.231 [2024-07-15 16:10:20.810576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:52.231 qpair failed and we were unable to recover it. 00:28:52.231 [2024-07-15 16:10:20.810746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.231 [2024-07-15 16:10:20.810757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:52.231 qpair failed and we were unable to recover it. 00:28:52.231 [2024-07-15 16:10:20.810932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.231 [2024-07-15 16:10:20.810942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:52.231 qpair failed and we were unable to recover it. 00:28:52.231 [2024-07-15 16:10:20.811051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.231 [2024-07-15 16:10:20.811061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:52.231 qpair failed and we were unable to recover it. 00:28:52.231 [2024-07-15 16:10:20.811288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.231 [2024-07-15 16:10:20.811299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:52.231 qpair failed and we were unable to recover it. 00:28:52.231 [2024-07-15 16:10:20.811407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.231 [2024-07-15 16:10:20.811418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:52.231 qpair failed and we were unable to recover it. 
00:28:52.231 [2024-07-15 16:10:20.811589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.231 [2024-07-15 16:10:20.811599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:52.231 qpair failed and we were unable to recover it. 00:28:52.231 [2024-07-15 16:10:20.811706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.231 [2024-07-15 16:10:20.811716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:52.231 qpair failed and we were unable to recover it. 00:28:52.231 [2024-07-15 16:10:20.811811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.231 [2024-07-15 16:10:20.811821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:52.231 qpair failed and we were unable to recover it. 00:28:52.231 [2024-07-15 16:10:20.812005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.231 [2024-07-15 16:10:20.812015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:52.231 qpair failed and we were unable to recover it. 00:28:52.231 [2024-07-15 16:10:20.812275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.231 [2024-07-15 16:10:20.812286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:52.231 qpair failed and we were unable to recover it. 00:28:52.231 [2024-07-15 16:10:20.812386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.231 [2024-07-15 16:10:20.812397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:52.231 qpair failed and we were unable to recover it. 00:28:52.231 [2024-07-15 16:10:20.812515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.231 [2024-07-15 16:10:20.812524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:52.231 qpair failed and we were unable to recover it. 00:28:52.231 [2024-07-15 16:10:20.812631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.231 [2024-07-15 16:10:20.812641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:52.231 qpair failed and we were unable to recover it. 00:28:52.231 [2024-07-15 16:10:20.812815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.231 [2024-07-15 16:10:20.812825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:52.231 qpair failed and we were unable to recover it. 00:28:52.231 [2024-07-15 16:10:20.813018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.231 [2024-07-15 16:10:20.813029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:52.231 qpair failed and we were unable to recover it. 
00:28:52.231 [2024-07-15 16:10:20.813213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.231 [2024-07-15 16:10:20.813227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:52.231 qpair failed and we were unable to recover it. 00:28:52.231 [2024-07-15 16:10:20.813377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.231 [2024-07-15 16:10:20.813387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:52.231 qpair failed and we were unable to recover it. 00:28:52.231 [2024-07-15 16:10:20.813578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.231 [2024-07-15 16:10:20.813589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:52.231 qpair failed and we were unable to recover it. 00:28:52.231 [2024-07-15 16:10:20.813763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.231 [2024-07-15 16:10:20.813773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:52.231 qpair failed and we were unable to recover it. 00:28:52.231 [2024-07-15 16:10:20.814013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.231 [2024-07-15 16:10:20.814025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:52.231 qpair failed and we were unable to recover it. 00:28:52.231 [2024-07-15 16:10:20.814321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.231 [2024-07-15 16:10:20.814332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:52.231 qpair failed and we were unable to recover it. 00:28:52.231 [2024-07-15 16:10:20.814501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.231 [2024-07-15 16:10:20.814512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:52.231 qpair failed and we were unable to recover it. 00:28:52.231 [2024-07-15 16:10:20.814689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.231 [2024-07-15 16:10:20.814698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:52.231 qpair failed and we were unable to recover it. 00:28:52.231 [2024-07-15 16:10:20.814864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.231 [2024-07-15 16:10:20.814874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:52.231 qpair failed and we were unable to recover it. 00:28:52.231 [2024-07-15 16:10:20.814994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.231 [2024-07-15 16:10:20.815004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:52.231 qpair failed and we were unable to recover it. 
00:28:52.231 [2024-07-15 16:10:20.815166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.232 [2024-07-15 16:10:20.815177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:52.232 qpair failed and we were unable to recover it. 00:28:52.232 [2024-07-15 16:10:20.815470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.232 [2024-07-15 16:10:20.815481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:52.232 qpair failed and we were unable to recover it. 00:28:52.232 [2024-07-15 16:10:20.815756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.232 [2024-07-15 16:10:20.815766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:52.232 qpair failed and we were unable to recover it. 00:28:52.232 [2024-07-15 16:10:20.815886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.232 [2024-07-15 16:10:20.815896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:52.232 qpair failed and we were unable to recover it. 00:28:52.232 [2024-07-15 16:10:20.816067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.232 [2024-07-15 16:10:20.816077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:52.232 qpair failed and we were unable to recover it. 00:28:52.232 [2024-07-15 16:10:20.816276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.232 [2024-07-15 16:10:20.816287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:52.232 qpair failed and we were unable to recover it. 00:28:52.232 [2024-07-15 16:10:20.816391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.232 [2024-07-15 16:10:20.816402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:52.232 qpair failed and we were unable to recover it. 00:28:52.232 [2024-07-15 16:10:20.816496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.232 [2024-07-15 16:10:20.816506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:52.232 qpair failed and we were unable to recover it. 00:28:52.232 [2024-07-15 16:10:20.816665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.232 [2024-07-15 16:10:20.816675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:52.232 qpair failed and we were unable to recover it. 00:28:52.232 [2024-07-15 16:10:20.816792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.232 [2024-07-15 16:10:20.816802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:52.232 qpair failed and we were unable to recover it. 
00:28:52.232 [2024-07-15 16:10:20.817047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.232 [2024-07-15 16:10:20.817057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:52.232 qpair failed and we were unable to recover it. 00:28:52.232 [2024-07-15 16:10:20.817297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.232 [2024-07-15 16:10:20.817309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:52.232 qpair failed and we were unable to recover it. 00:28:52.232 [2024-07-15 16:10:20.817509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.232 [2024-07-15 16:10:20.817519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:52.232 qpair failed and we were unable to recover it. 00:28:52.232 [2024-07-15 16:10:20.817676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.232 [2024-07-15 16:10:20.817686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:52.232 qpair failed and we were unable to recover it. 00:28:52.232 [2024-07-15 16:10:20.817803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.232 [2024-07-15 16:10:20.817814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:52.232 qpair failed and we were unable to recover it. 00:28:52.232 [2024-07-15 16:10:20.818010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.232 [2024-07-15 16:10:20.818020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:52.232 qpair failed and we were unable to recover it. 00:28:52.232 [2024-07-15 16:10:20.818182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.232 [2024-07-15 16:10:20.818192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:52.232 qpair failed and we were unable to recover it. 00:28:52.232 [2024-07-15 16:10:20.818395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.232 [2024-07-15 16:10:20.818405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:52.232 qpair failed and we were unable to recover it. 00:28:52.232 [2024-07-15 16:10:20.818598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.232 [2024-07-15 16:10:20.818608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:52.232 qpair failed and we were unable to recover it. 00:28:52.232 [2024-07-15 16:10:20.818705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.232 [2024-07-15 16:10:20.818715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:52.232 qpair failed and we were unable to recover it. 
00:28:52.232 [2024-07-15 16:10:20.818874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.232 [2024-07-15 16:10:20.818884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:52.232 qpair failed and we were unable to recover it. 00:28:52.232 [2024-07-15 16:10:20.819061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.232 [2024-07-15 16:10:20.819071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:52.232 qpair failed and we were unable to recover it. 00:28:52.232 [2024-07-15 16:10:20.819323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.232 [2024-07-15 16:10:20.819334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:52.232 qpair failed and we were unable to recover it. 00:28:52.232 [2024-07-15 16:10:20.819452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.232 [2024-07-15 16:10:20.819462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:52.232 qpair failed and we were unable to recover it. 00:28:52.232 [2024-07-15 16:10:20.819576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.232 [2024-07-15 16:10:20.819586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:52.232 qpair failed and we were unable to recover it. 00:28:52.232 [2024-07-15 16:10:20.819771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.232 [2024-07-15 16:10:20.819781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:52.232 qpair failed and we were unable to recover it. 00:28:52.232 [2024-07-15 16:10:20.819884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.232 [2024-07-15 16:10:20.819894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:52.232 qpair failed and we were unable to recover it. 00:28:52.232 [2024-07-15 16:10:20.820059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.232 [2024-07-15 16:10:20.820069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:52.232 qpair failed and we were unable to recover it. 00:28:52.232 [2024-07-15 16:10:20.820188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.232 [2024-07-15 16:10:20.820198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:52.232 qpair failed and we were unable to recover it. 00:28:52.232 [2024-07-15 16:10:20.820381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.232 [2024-07-15 16:10:20.820393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:52.232 qpair failed and we were unable to recover it. 
00:28:52.232 [2024-07-15 16:10:20.820510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.232 [2024-07-15 16:10:20.820520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:52.232 qpair failed and we were unable to recover it. 00:28:52.232 [2024-07-15 16:10:20.820631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.232 [2024-07-15 16:10:20.820640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:52.232 qpair failed and we were unable to recover it. 00:28:52.232 [2024-07-15 16:10:20.820800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.232 [2024-07-15 16:10:20.820810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:52.232 qpair failed and we were unable to recover it. 00:28:52.232 [2024-07-15 16:10:20.821115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.232 [2024-07-15 16:10:20.821125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:52.232 qpair failed and we were unable to recover it. 00:28:52.232 [2024-07-15 16:10:20.821317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.233 [2024-07-15 16:10:20.821327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:52.233 qpair failed and we were unable to recover it. 00:28:52.233 [2024-07-15 16:10:20.821502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.233 [2024-07-15 16:10:20.821512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:52.233 qpair failed and we were unable to recover it. 00:28:52.233 [2024-07-15 16:10:20.821623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.233 [2024-07-15 16:10:20.821633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:52.233 qpair failed and we were unable to recover it. 00:28:52.233 [2024-07-15 16:10:20.821734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.233 [2024-07-15 16:10:20.821744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:52.233 qpair failed and we were unable to recover it. 00:28:52.233 [2024-07-15 16:10:20.821968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.233 [2024-07-15 16:10:20.821979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:52.233 qpair failed and we were unable to recover it. 00:28:52.233 [2024-07-15 16:10:20.822133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.233 [2024-07-15 16:10:20.822143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:52.233 qpair failed and we were unable to recover it. 
00:28:52.233 [2024-07-15 16:10:20.822306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.233 [2024-07-15 16:10:20.822317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:52.233 qpair failed and we were unable to recover it. 00:28:52.233 [2024-07-15 16:10:20.822487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.233 [2024-07-15 16:10:20.822497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:52.233 qpair failed and we were unable to recover it. 00:28:52.233 [2024-07-15 16:10:20.822623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.233 [2024-07-15 16:10:20.822633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:52.233 qpair failed and we were unable to recover it. 00:28:52.233 [2024-07-15 16:10:20.822793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.233 [2024-07-15 16:10:20.822803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:52.233 qpair failed and we were unable to recover it. 00:28:52.233 [2024-07-15 16:10:20.822963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.233 [2024-07-15 16:10:20.822973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:52.233 qpair failed and we were unable to recover it. 00:28:52.233 [2024-07-15 16:10:20.823168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.233 [2024-07-15 16:10:20.823177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:52.233 qpair failed and we were unable to recover it. 00:28:52.233 [2024-07-15 16:10:20.823287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.233 [2024-07-15 16:10:20.823297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:52.233 qpair failed and we were unable to recover it. 00:28:52.233 [2024-07-15 16:10:20.823408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.233 [2024-07-15 16:10:20.823418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:52.233 qpair failed and we were unable to recover it. 00:28:52.233 [2024-07-15 16:10:20.823639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.233 [2024-07-15 16:10:20.823648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:52.233 qpair failed and we were unable to recover it. 00:28:52.233 [2024-07-15 16:10:20.823821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.233 [2024-07-15 16:10:20.823831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:52.233 qpair failed and we were unable to recover it. 
00:28:52.233 [2024-07-15 16:10:20.824009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.233 [2024-07-15 16:10:20.824021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:52.233 qpair failed and we were unable to recover it. 00:28:52.233 [2024-07-15 16:10:20.824283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.233 [2024-07-15 16:10:20.824294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:52.233 qpair failed and we were unable to recover it. 00:28:52.233 [2024-07-15 16:10:20.824474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.233 [2024-07-15 16:10:20.824485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:52.233 qpair failed and we were unable to recover it. 00:28:52.233 [2024-07-15 16:10:20.824610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.233 [2024-07-15 16:10:20.824620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:52.233 qpair failed and we were unable to recover it. 00:28:52.233 [2024-07-15 16:10:20.824794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.233 [2024-07-15 16:10:20.824806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:52.233 qpair failed and we were unable to recover it. 00:28:52.233 [2024-07-15 16:10:20.824979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.233 [2024-07-15 16:10:20.824990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:52.233 qpair failed and we were unable to recover it. 00:28:52.233 [2024-07-15 16:10:20.825212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.233 [2024-07-15 16:10:20.825222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:52.233 qpair failed and we were unable to recover it. 00:28:52.233 [2024-07-15 16:10:20.825451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.233 [2024-07-15 16:10:20.825461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:52.233 qpair failed and we were unable to recover it. 00:28:52.233 [2024-07-15 16:10:20.825634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.233 [2024-07-15 16:10:20.825644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:52.233 qpair failed and we were unable to recover it. 00:28:52.233 [2024-07-15 16:10:20.825751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.233 [2024-07-15 16:10:20.825761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:52.233 qpair failed and we were unable to recover it. 
00:28:52.233 [2024-07-15 16:10:20.826005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.233 [2024-07-15 16:10:20.826016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:52.233 qpair failed and we were unable to recover it. 00:28:52.233 [2024-07-15 16:10:20.826270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.233 [2024-07-15 16:10:20.826281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:52.233 qpair failed and we were unable to recover it. 00:28:52.233 [2024-07-15 16:10:20.826391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.233 [2024-07-15 16:10:20.826401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:52.233 qpair failed and we were unable to recover it. 00:28:52.233 [2024-07-15 16:10:20.826624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.233 [2024-07-15 16:10:20.826634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:52.233 qpair failed and we were unable to recover it. 00:28:52.233 [2024-07-15 16:10:20.826799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.233 [2024-07-15 16:10:20.826809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:52.233 qpair failed and we were unable to recover it. 00:28:52.233 [2024-07-15 16:10:20.826962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.233 [2024-07-15 16:10:20.826972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:52.233 qpair failed and we were unable to recover it. 00:28:52.233 [2024-07-15 16:10:20.827193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.233 [2024-07-15 16:10:20.827204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:52.233 qpair failed and we were unable to recover it. 00:28:52.233 [2024-07-15 16:10:20.827348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.233 [2024-07-15 16:10:20.827359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:52.233 qpair failed and we were unable to recover it. 00:28:52.233 [2024-07-15 16:10:20.827537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.233 [2024-07-15 16:10:20.827546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:52.233 qpair failed and we were unable to recover it. 00:28:52.233 [2024-07-15 16:10:20.827765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.233 [2024-07-15 16:10:20.827775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:52.233 qpair failed and we were unable to recover it. 
00:28:52.233 [2024-07-15 16:10:20.827955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.233 [2024-07-15 16:10:20.827965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:52.233 qpair failed and we were unable to recover it. 00:28:52.233 [2024-07-15 16:10:20.828136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.233 [2024-07-15 16:10:20.828146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:52.233 qpair failed and we were unable to recover it. 00:28:52.233 [2024-07-15 16:10:20.828379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.233 [2024-07-15 16:10:20.828389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:52.233 qpair failed and we were unable to recover it. 00:28:52.233 [2024-07-15 16:10:20.828578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.233 [2024-07-15 16:10:20.828589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:52.233 qpair failed and we were unable to recover it. 00:28:52.233 [2024-07-15 16:10:20.828683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.233 [2024-07-15 16:10:20.828693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:52.233 qpair failed and we were unable to recover it. 00:28:52.234 [2024-07-15 16:10:20.828878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.234 [2024-07-15 16:10:20.828888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:52.234 qpair failed and we were unable to recover it. 00:28:52.234 [2024-07-15 16:10:20.829057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.234 [2024-07-15 16:10:20.829067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:52.234 qpair failed and we were unable to recover it. 00:28:52.234 [2024-07-15 16:10:20.829297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.234 [2024-07-15 16:10:20.829307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:52.234 qpair failed and we were unable to recover it. 00:28:52.234 [2024-07-15 16:10:20.829500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.234 [2024-07-15 16:10:20.829511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:52.234 qpair failed and we were unable to recover it. 00:28:52.234 [2024-07-15 16:10:20.829706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.234 [2024-07-15 16:10:20.829716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420 00:28:52.234 qpair failed and we were unable to recover it. 
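errno 111 on Linux is ECONNREFUSED: the host at 10.0.0.2 answers, but nothing is accepting TCP connections on port 4420, so every reconnect attempt above is refused immediately. A minimal standalone C sketch that reproduces the same errno, assuming a reachable peer with no listener on the port (the address and port here simply mirror the log):

#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <arpa/inet.h>
#include <sys/socket.h>

/* Connect to a reachable host/port with no listener; the kernel answers
 * with a TCP RST and connect() fails with errno 111 (ECONNREFUSED),
 * matching the posix_sock_create errors in the log above. */
int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }

    struct sockaddr_in addr = {0};
    addr.sin_family = AF_INET;
    addr.sin_port = htons(4420);            /* NVMe/TCP port from the log */
    inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
    }

    close(fd);
    return 0;
}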
00:28:52.234 [2024-07-15 16:10:20.829806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:52.234 [2024-07-15 16:10:20.829816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420
00:28:52.234 qpair failed and we were unable to recover it.
00:28:52.234 A controller has encountered a failure and is being reset.
00:28:52.234 [2024-07-15 16:10:20.829967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:52.234 [2024-07-15 16:10:20.829999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420
00:28:52.234 qpair failed and we were unable to recover it.
[... after the reset the same failure triplet repeats on the new tqpair=0x7ffa4c000b90 for every attempt from 16:10:20.830130 through 16:10:20.838477, each connect() still failing with errno = 111 ...]
00:28:52.235 [2024-07-15 16:10:20.838653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.235 [2024-07-15 16:10:20.838667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:52.235 qpair failed and we were unable to recover it. 00:28:52.235 [2024-07-15 16:10:20.838770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.235 [2024-07-15 16:10:20.838784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:52.235 qpair failed and we were unable to recover it. 00:28:52.235 [2024-07-15 16:10:20.838901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.235 [2024-07-15 16:10:20.838914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:52.235 qpair failed and we were unable to recover it. 00:28:52.235 [2024-07-15 16:10:20.839006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.235 [2024-07-15 16:10:20.839020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:52.235 qpair failed and we were unable to recover it. 00:28:52.235 [2024-07-15 16:10:20.839255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.235 [2024-07-15 16:10:20.839270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:52.235 qpair failed and we were unable to recover it. 00:28:52.235 [2024-07-15 16:10:20.839434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.235 [2024-07-15 16:10:20.839449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:52.235 qpair failed and we were unable to recover it. 00:28:52.235 [2024-07-15 16:10:20.839608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.235 [2024-07-15 16:10:20.839622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:52.235 qpair failed and we were unable to recover it. 00:28:52.235 [2024-07-15 16:10:20.839743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.235 [2024-07-15 16:10:20.839757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:52.235 qpair failed and we were unable to recover it. 00:28:52.235 [2024-07-15 16:10:20.839868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.235 [2024-07-15 16:10:20.839883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:52.235 qpair failed and we were unable to recover it. 00:28:52.235 [2024-07-15 16:10:20.839993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.235 [2024-07-15 16:10:20.840006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:52.235 qpair failed and we were unable to recover it. 
00:28:52.235 [2024-07-15 16:10:20.840105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.235 [2024-07-15 16:10:20.840119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:52.235 qpair failed and we were unable to recover it. 00:28:52.235 [2024-07-15 16:10:20.840207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.235 [2024-07-15 16:10:20.840220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:52.235 qpair failed and we were unable to recover it. 00:28:52.235 [2024-07-15 16:10:20.840345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.235 [2024-07-15 16:10:20.840359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:52.235 qpair failed and we were unable to recover it. 00:28:52.235 [2024-07-15 16:10:20.840467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.235 [2024-07-15 16:10:20.840481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:52.235 qpair failed and we were unable to recover it. 00:28:52.235 [2024-07-15 16:10:20.840584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.235 [2024-07-15 16:10:20.840598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:52.235 qpair failed and we were unable to recover it. 00:28:52.235 [2024-07-15 16:10:20.840698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.235 [2024-07-15 16:10:20.840711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:52.235 qpair failed and we were unable to recover it. 00:28:52.235 [2024-07-15 16:10:20.840888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.236 [2024-07-15 16:10:20.840902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:52.236 qpair failed and we were unable to recover it. 00:28:52.236 [2024-07-15 16:10:20.841005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.236 [2024-07-15 16:10:20.841019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:52.236 qpair failed and we were unable to recover it. 00:28:52.236 [2024-07-15 16:10:20.841126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.236 [2024-07-15 16:10:20.841139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:52.236 qpair failed and we were unable to recover it. 00:28:52.236 [2024-07-15 16:10:20.841235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.236 [2024-07-15 16:10:20.841249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:52.236 qpair failed and we were unable to recover it. 
00:28:52.236 [2024-07-15 16:10:20.841414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.236 [2024-07-15 16:10:20.841428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:52.236 qpair failed and we were unable to recover it. 00:28:52.236 [2024-07-15 16:10:20.841606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.236 [2024-07-15 16:10:20.841620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:52.236 qpair failed and we were unable to recover it. 00:28:52.236 [2024-07-15 16:10:20.841731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.236 [2024-07-15 16:10:20.841745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:52.236 qpair failed and we were unable to recover it. 00:28:52.236 [2024-07-15 16:10:20.841923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.236 [2024-07-15 16:10:20.841937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:52.236 qpair failed and we were unable to recover it. 00:28:52.236 [2024-07-15 16:10:20.842079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.236 [2024-07-15 16:10:20.842092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:52.236 qpair failed and we were unable to recover it. 00:28:52.236 [2024-07-15 16:10:20.842190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.236 [2024-07-15 16:10:20.842203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:52.236 qpair failed and we were unable to recover it. 00:28:52.236 [2024-07-15 16:10:20.842318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.236 [2024-07-15 16:10:20.842332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:52.236 qpair failed and we were unable to recover it. 00:28:52.236 [2024-07-15 16:10:20.842443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.236 [2024-07-15 16:10:20.842457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:52.236 qpair failed and we were unable to recover it. 00:28:52.236 [2024-07-15 16:10:20.842567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.236 [2024-07-15 16:10:20.842582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:52.236 qpair failed and we were unable to recover it. 00:28:52.236 [2024-07-15 16:10:20.842771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.236 [2024-07-15 16:10:20.842786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:52.236 qpair failed and we were unable to recover it. 
00:28:52.236 [2024-07-15 16:10:20.842905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.236 [2024-07-15 16:10:20.842918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:52.236 qpair failed and we were unable to recover it. 00:28:52.236 [2024-07-15 16:10:20.843017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.236 [2024-07-15 16:10:20.843030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:52.236 qpair failed and we were unable to recover it. 00:28:52.236 [2024-07-15 16:10:20.843145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.236 [2024-07-15 16:10:20.843160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:52.236 qpair failed and we were unable to recover it. 00:28:52.236 [2024-07-15 16:10:20.843266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.236 [2024-07-15 16:10:20.843281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:52.236 qpair failed and we were unable to recover it. 00:28:52.236 [2024-07-15 16:10:20.843444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.236 [2024-07-15 16:10:20.843460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:52.236 qpair failed and we were unable to recover it. 00:28:52.236 [2024-07-15 16:10:20.843567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.236 [2024-07-15 16:10:20.843581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:52.236 qpair failed and we were unable to recover it. 00:28:52.236 [2024-07-15 16:10:20.843695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.236 [2024-07-15 16:10:20.843709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:52.236 qpair failed and we were unable to recover it. 00:28:52.236 [2024-07-15 16:10:20.843812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.236 [2024-07-15 16:10:20.843826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:52.236 qpair failed and we were unable to recover it. 00:28:52.236 [2024-07-15 16:10:20.843926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.236 [2024-07-15 16:10:20.843940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:52.236 qpair failed and we were unable to recover it. 00:28:52.236 [2024-07-15 16:10:20.844107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.236 [2024-07-15 16:10:20.844121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:52.236 qpair failed and we were unable to recover it. 
00:28:52.236 [2024-07-15 16:10:20.844214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.236 [2024-07-15 16:10:20.844232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:52.236 qpair failed and we were unable to recover it. 00:28:52.236 [2024-07-15 16:10:20.844419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.236 [2024-07-15 16:10:20.844433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:52.236 qpair failed and we were unable to recover it. 00:28:52.236 [2024-07-15 16:10:20.844534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.236 [2024-07-15 16:10:20.844547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:52.236 qpair failed and we were unable to recover it. 00:28:52.236 [2024-07-15 16:10:20.844660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.236 [2024-07-15 16:10:20.844673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:52.236 qpair failed and we were unable to recover it. 00:28:52.236 [2024-07-15 16:10:20.844840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.236 [2024-07-15 16:10:20.844853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:52.236 qpair failed and we were unable to recover it. 00:28:52.236 [2024-07-15 16:10:20.844950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.236 [2024-07-15 16:10:20.844964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:52.236 qpair failed and we were unable to recover it. 00:28:52.236 [2024-07-15 16:10:20.845067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.236 [2024-07-15 16:10:20.845081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:52.236 qpair failed and we were unable to recover it. 00:28:52.236 [2024-07-15 16:10:20.845179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.236 [2024-07-15 16:10:20.845192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:52.236 qpair failed and we were unable to recover it. 00:28:52.236 [2024-07-15 16:10:20.845311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.236 [2024-07-15 16:10:20.845325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:52.236 qpair failed and we were unable to recover it. 00:28:52.236 [2024-07-15 16:10:20.845431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.236 [2024-07-15 16:10:20.845445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:52.236 qpair failed and we were unable to recover it. 
00:28:52.236 [2024-07-15 16:10:20.845556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.236 [2024-07-15 16:10:20.845570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:52.236 qpair failed and we were unable to recover it. 00:28:52.236 [2024-07-15 16:10:20.845685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.236 [2024-07-15 16:10:20.845699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:52.236 qpair failed and we were unable to recover it. 00:28:52.236 [2024-07-15 16:10:20.845795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.236 [2024-07-15 16:10:20.845808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:52.236 qpair failed and we were unable to recover it. 00:28:52.236 [2024-07-15 16:10:20.845971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.236 [2024-07-15 16:10:20.845984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:52.236 qpair failed and we were unable to recover it. 00:28:52.236 [2024-07-15 16:10:20.846154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.236 [2024-07-15 16:10:20.846167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:52.236 qpair failed and we were unable to recover it. 00:28:52.236 [2024-07-15 16:10:20.846329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.236 [2024-07-15 16:10:20.846343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:52.236 qpair failed and we were unable to recover it. 00:28:52.236 [2024-07-15 16:10:20.846451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.236 [2024-07-15 16:10:20.846464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:52.237 qpair failed and we were unable to recover it. 00:28:52.237 [2024-07-15 16:10:20.846585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.237 [2024-07-15 16:10:20.846599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:52.237 qpair failed and we were unable to recover it. 00:28:52.237 [2024-07-15 16:10:20.846698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.237 [2024-07-15 16:10:20.846712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:52.237 qpair failed and we were unable to recover it. 00:28:52.237 [2024-07-15 16:10:20.846808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.237 [2024-07-15 16:10:20.846822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa4c000b90 with addr=10.0.0.2, port=4420 00:28:52.237 qpair failed and we were unable to recover it. 
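For context, errno = 111 on Linux is ECONNREFUSED: the subsystem's listener at 10.0.0.2:4420 is down while this disconnect test runs, so every reconnect attempt from the host-side initiator is refused until it gives up and the controller is left in a failed state (as the next entries show). The mapping can be confirmed with a shell one-liner (a sketch for reference, not output from this run):

  # Decode errno 111 using Python's errno table (Linux):
  python3 -c 'import errno, os; print(errno.errorcode[111], "-", os.strerror(111))'
  # prints: ECONNREFUSED - Connection refused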
00:28:52.237 [2024-07-15 16:10:20.847048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:52.237 [2024-07-15 16:10:20.847061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ffa54000b90 with addr=10.0.0.2, port=4420
00:28:52.237 qpair failed and we were unable to recover it.
00:28:52.237 [2024-07-15 16:10:20.847258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:52.237 [2024-07-15 16:10:20.847291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13a7000 with addr=10.0.0.2, port=4420
[2024-07-15 16:10:20.847304] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a7000 is same with the state(5) to be set
[2024-07-15 16:10:20.847321] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13a7000 (9): Bad file descriptor
[2024-07-15 16:10:20.847334] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
[2024-07-15 16:10:20.847343] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
[2024-07-15 16:10:20.847354] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:52.237 Unable to reset the controller.
00:28:52.237 16:10:21 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:28:52.237 16:10:21 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@862 -- # return 0
00:28:52.237 16:10:21 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt
00:28:52.237 16:10:21 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@728 -- # xtrace_disable
00:28:52.237 16:10:21 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:28:52.237 16:10:21 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:28:52.237 16:10:21 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:28:52.237 16:10:21 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable
00:28:52.237 16:10:21 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:28:52.237 Malloc0
00:28:52.237 16:10:21 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:28:52.237 16:10:21 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o
00:28:52.237 16:10:21 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable
00:28:52.237 16:10:21 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:28:52.237 [2024-07-15 16:10:21.065447] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:28:52.237 16:10:21 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:28:52.237 16:10:21 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:28:52.237 16:10:21 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable
00:28:52.237 16:10:21 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:28:52.237 16:10:21 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:28:52.237 16:10:21 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:28:52.237 16:10:21 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable
00:28:52.237 16:10:21 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:28:52.237 16:10:21 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:28:52.237 16:10:21 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:28:52.237 16:10:21 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable
00:28:52.237 16:10:21 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:28:52.237 [2024-07-15 16:10:21.097693] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:28:52.237 16:10:21 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:28:52.237 16:10:21 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:28:52.237 16:10:21 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable
00:28:52.237 16:10:21 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:28:52.237 16:10:21 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:28:52.237 16:10:21 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 3925037
00:28:53.170 Controller properly reset.
00:28:58.433 Initializing NVMe Controllers
00:28:58.433 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:28:58.433 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:28:58.433 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0
00:28:58.433 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1
00:28:58.433 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2
00:28:58.433 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3
00:28:58.433 Initialization complete. Launching workers.
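The rpc_cmd calls traced above are thin wrappers around SPDK's scripts/rpc.py. As a minimal standalone sketch of the same target bring-up -- assuming a running nvmf_tgt on the default /var/tmp/spdk.sock socket, with the rpc variable below purely illustrative and all flags copied from the trace:

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py   # path assumed from this workspace
  $rpc bdev_malloc_create 64 512 -b Malloc0          # 64 MB RAM-backed bdev, 512-byte blocks
  $rpc nvmf_create_transport -t tcp -o               # TCP transport; flags as used by the harness
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001   # -a: allow any host
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0                    # expose the bdev as a namespace
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

After the listener is up, the initiator workers reconnect, which is what the "Controller properly reset." and attach messages above record.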
00:28:58.433 Starting thread on core 1
00:28:58.433 Starting thread on core 2
00:28:58.433 Starting thread on core 3
00:28:58.433 Starting thread on core 0
00:28:58.433 16:10:26 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync
00:28:58.433
00:28:58.433 real 0m11.299s
00:28:58.433 user 0m36.018s
00:28:58.433 sys 0m5.606s
00:28:58.433 16:10:26 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1124 -- # xtrace_disable
00:28:58.433 16:10:26 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:28:58.433 ************************************
00:28:58.433 END TEST nvmf_target_disconnect_tc2
00:28:58.433 ************************************
00:28:58.433 16:10:26 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1142 -- # return 0
00:28:58.433 16:10:26 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n '' ']'
00:28:58.433 16:10:26 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT
00:28:58.433 16:10:26 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini
00:28:58.433 16:10:26 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup
00:28:58.433 16:10:26 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@117 -- # sync
00:28:58.433 16:10:26 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:28:58.433 16:10:26 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@120 -- # set +e
00:28:58.433 16:10:26 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@121 -- # for i in {1..20}
00:28:58.433 16:10:26 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:28:58.433 rmmod nvme_tcp
00:28:58.433 rmmod nvme_fabrics
00:28:58.433 rmmod nvme_keyring
00:28:58.433 16:10:26 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:28:58.433 16:10:26 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set -e
00:28:58.433 16:10:26 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@125 -- # return 0
00:28:58.433 16:10:26 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@489 -- # '[' -n 3925724 ']'
00:28:58.433 16:10:26 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@490 -- # killprocess 3925724
00:28:58.433 16:10:26 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@948 -- # '[' -z 3925724 ']'
00:28:58.433 16:10:26 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@952 -- # kill -0 3925724
00:28:58.433 16:10:26 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@953 -- # uname
00:28:58.433 16:10:26 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:28:58.433 16:10:26 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3925724
00:28:58.433 16:10:26 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # process_name=reactor_4
00:28:58.433 16:10:26 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@958 -- # '[' reactor_4 = sudo ']'
00:28:58.433 16:10:26 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3925724'
00:28:58.433 killing process with pid 3925724
00:28:58.433 16:10:26 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@967 -- # kill 3925724
00:28:58.433 16:10:26 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@972 -- # wait 3925724
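The nvmftestfini teardown traced above, condensed into a standalone sketch (run as root, and only when no other NVMe-oF sessions depend on the kernel modules; tgt_pid below is a stand-in for the target app's pid, 3925724 in this run):

  modprobe -v -r nvme-tcp                            # emits the rmmod nvme_tcp/nvme_fabrics/nvme_keyring lines seen above
  kill "$tgt_pid" && wait "$tgt_pid" 2>/dev/null     # stop the SPDK target app launched by this shell
  ip -4 addr flush cvl_0_1                           # drop the test address from the test interface, as the fini path does next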
00:28:58.433 16:10:26 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:28:58.433 16:10:26 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:28:58.433 16:10:26 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:28:58.433 16:10:26 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:28:58.433 16:10:26 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns
00:28:58.433 16:10:26 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:28:58.433 16:10:26 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:28:58.433 16:10:26 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:29:00.339 16:10:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:29:00.339
00:29:00.339 real 0m19.466s
00:29:00.339 user 1m2.948s
00:29:00.339 sys 0m10.300s
00:29:00.339 16:10:28 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1124 -- # xtrace_disable
00:29:00.339 16:10:28 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x
00:29:00.339 ************************************
00:29:00.339 END TEST nvmf_target_disconnect
00:29:00.339 ************************************
00:29:00.339 16:10:28 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0
00:29:00.339 16:10:28 nvmf_tcp -- nvmf/nvmf.sh@126 -- # timing_exit host
00:29:00.339 16:10:28 nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable
00:29:00.339 16:10:28 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:29:00.339 16:10:28 nvmf_tcp -- nvmf/nvmf.sh@128 -- # trap - SIGINT SIGTERM EXIT
00:29:00.339
00:29:00.339 real 20m59.435s
00:29:00.339 user 45m46.871s
00:29:00.339 sys 6m22.249s
00:29:00.339 16:10:28 nvmf_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable
00:29:00.339 16:10:28 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:29:00.339 ************************************
00:29:00.339 END TEST nvmf_tcp
00:29:00.339 ************************************
00:29:00.339 16:10:28 -- common/autotest_common.sh@1142 -- # return 0
00:29:00.339 16:10:28 -- spdk/autotest.sh@288 -- # [[ 0 -eq 0 ]]
00:29:00.339 16:10:28 -- spdk/autotest.sh@289 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp
00:29:00.339 16:10:28 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']'
00:29:00.339 16:10:28 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:29:00.339 16:10:28 -- common/autotest_common.sh@10 -- # set +x
00:29:00.339 ************************************
00:29:00.339 START TEST spdkcli_nvmf_tcp
00:29:00.339 ************************************
00:29:00.339 16:10:28 spdkcli_nvmf_tcp -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp
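The spdkcli_nvmf_tcp test starting here drives SPDK's interactive CLI through spdkcli_job.py. The same configuration steps can also be issued one at a time with scripts/spdkcli.py, which accepts a command as arguments (as the "ll /nvmf" call later in this log does); a condensed sketch, assuming a running nvmf_tgt and with the cli variable illustrative:

  cli=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py   # path assumed from this workspace
  $cli /bdevs/malloc create 32 512 Malloc3                                   # 32 MB bdev, 512-byte blocks
  $cli nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192
  $cli /nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True
  $cli /nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1
  $cli /nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4
  $cli ll /nvmf                                                              # inspect the tree, as check_match does below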
00:29:00.339 * Looking for test storage...
00:29:00.339 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli
00:29:00.339 16:10:29 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh
00:29:00.339 16:10:29 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py
00:29:00.339 16:10:29 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py
00:29:00.339 16:10:29 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:29:00.339 16:10:29 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s
00:29:00.339 16:10:29 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:29:00.339 16:10:29 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:29:00.339 16:10:29 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:29:00.339 16:10:29 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:29:00.339 16:10:29 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:29:00.339 16:10:29 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:29:00.339 16:10:29 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:29:00.339 16:10:29 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:29:00.339 16:10:29 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:29:00.339 16:10:29 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:29:00.339 16:10:29 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562
00:29:00.339 16:10:29 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562
00:29:00.339 16:10:29 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:29:00.339 16:10:29 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:29:00.339 16:10:29 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:29:00.339 16:10:29 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:29:00.339 16:10:29 spdkcli_nvmf_tcp -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:29:00.339 16:10:29 spdkcli_nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]]
00:29:00.339 16:10:29 spdkcli_nvmf_tcp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:29:00.339 16:10:29 spdkcli_nvmf_tcp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:29:00.339 16:10:29 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:29:00.339 16:10:29 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:29:00.339 16:10:29 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:29:00.339 16:10:29 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH
00:29:00.339 16:10:29 spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:29:00.339 16:10:29 spdkcli_nvmf_tcp -- nvmf/common.sh@47 -- # : 0
00:29:00.339 16:10:29 spdkcli_nvmf_tcp -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID
00:29:00.339 16:10:29 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args
00:29:00.339 16:10:29 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:29:00.339 16:10:29 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:29:00.339 16:10:29 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:29:00.339 16:10:29 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' -n '' ']'
00:29:00.339 16:10:29 spdkcli_nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']'
00:29:00.339 16:10:29 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0
00:29:00.339 16:10:29 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test
00:29:00.339 16:10:29 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf
00:29:00.339 16:10:29 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT
00:29:00.339 16:10:29 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt
00:29:00.339 16:10:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable
00:29:00.339 16:10:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:29:00.339 16:10:29 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt
00:29:00.339 16:10:29 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=3927264
00:29:00.339 16:10:29 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 3927264
00:29:00.339 16:10:29 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0
00:29:00.340 16:10:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@829 -- # '[' -z 3927264 ']'
00:29:00.340 16:10:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:29:00.340 16:10:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@834 -- # local max_retries=100
00:29:00.340 16:10:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:29:00.340 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:29:00.340 16:10:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@838 -- # xtrace_disable
00:29:00.340 16:10:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:29:00.340 [2024-07-15 16:10:29.160923] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization...
00:29:00.340 [2024-07-15 16:10:29.160972] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3927264 ]
00:29:00.340 EAL: No free 2048 kB hugepages reported on node 1
00:29:00.340 [2024-07-15 16:10:29.215590] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2
00:29:00.340 [2024-07-15 16:10:29.291116] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:29:00.340 [2024-07-15 16:10:29.291119] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:29:01.165 16:10:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:29:01.165 16:10:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@862 -- # return 0
00:29:01.165 16:10:29 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt
00:29:01.165 16:10:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable
00:29:01.165 16:10:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:29:01.165 16:10:29 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1
00:29:01.165 16:10:29 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]]
00:29:01.165 16:10:29 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config
00:29:01.165 16:10:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable
00:29:01.165 16:10:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:29:01.165 16:10:29 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True
00:29:01.165 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True
00:29:01.165 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True
00:29:01.165 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True
00:29:01.165 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True
00:29:01.165 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True
00:29:01.165 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True
00:29:01.165 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True
00:29:01.165 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True
00:29:01.165 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True
00:29:01.165 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True
00:29:01.165 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True
00:29:01.165 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True
00:29:01.165 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True
00:29:01.165 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True
00:29:01.165 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True
00:29:01.165 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True
00:29:01.165 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True
00:29:01.165 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True
00:29:01.165 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True
00:29:01.165 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\''
00:29:01.165 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True
00:29:01.165 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True
00:29:01.165 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True
00:29:01.165 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True
00:29:01.165 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True
00:29:01.165 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True
00:29:01.165 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\''
00:29:01.165 '
00:29:03.728 [2024-07-15 16:10:32.379061] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:29:04.663 [2024-07-15 16:10:33.555111] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 ***
00:29:07.196 [2024-07-15 16:10:35.717746] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 ***
00:29:09.101 [2024-07-15 16:10:37.575565] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 ***
00:29:10.478 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True]
00:29:10.478 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True]
00:29:10.478 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True]
00:29:10.478 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True]
00:29:10.478 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True]
00:29:10.478 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True]
00:29:10.478 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True]
00:29:10.478 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True]
00:29:10.478 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True]
00:29:10.478 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True]
00:29:10.478 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True]
00:29:10.478 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True]
00:29:10.478 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True]
00:29:10.478 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True]
00:29:10.478 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True]
00:29:10.478 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True]
00:29:10.478 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True]
00:29:10.478 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True]
00:29:10.478 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True]
00:29:10.478 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True]
00:29:10.478 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False]
00:29:10.478 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True]
00:29:10.479 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True]
00:29:10.479 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True]
00:29:10.479 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True]
00:29:10.479 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True]
00:29:10.479 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True]
00:29:10.479 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False]
00:29:10.479 16:10:39 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config
00:29:10.479 16:10:39 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable
00:29:10.479 16:10:39 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:29:10.479 16:10:39 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match
00:29:10.479 16:10:39 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable
00:29:10.479 16:10:39 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:29:10.479 16:10:39 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match
00:29:10.479 16:10:39 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:29:10.738 16:10:39 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:29:10.738 16:10:39 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:29:10.738 16:10:39 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:29:10.738 16:10:39 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:10.738 16:10:39 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:29:10.738 16:10:39 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:29:10.738 16:10:39 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:10.738 16:10:39 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:29:10.738 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:29:10.738 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:29:10.738 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:29:10.738 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:29:10.738 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:29:10.738 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:29:10.738 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:29:10.738 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:29:10.738 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:29:10.738 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:29:10.738 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:29:10.738 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:29:10.738 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:29:10.738 ' 00:29:16.007 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:29:16.007 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:29:16.007 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:29:16.007 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:29:16.007 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:29:16.007 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:29:16.007 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:29:16.007 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:29:16.007 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:29:16.007 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:29:16.007 Executing command: ['/bdevs/malloc delete Malloc4', 
'Malloc4', False] 00:29:16.007 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:29:16.007 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:29:16.007 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:29:16.007 16:10:44 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:29:16.007 16:10:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:29:16.007 16:10:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:16.007 16:10:44 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 3927264 00:29:16.007 16:10:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@948 -- # '[' -z 3927264 ']' 00:29:16.007 16:10:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@952 -- # kill -0 3927264 00:29:16.007 16:10:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@953 -- # uname 00:29:16.007 16:10:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:29:16.007 16:10:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3927264 00:29:16.007 16:10:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:29:16.007 16:10:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:29:16.007 16:10:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3927264' 00:29:16.007 killing process with pid 3927264 00:29:16.007 16:10:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@967 -- # kill 3927264 00:29:16.007 16:10:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@972 -- # wait 3927264 00:29:16.007 16:10:44 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:29:16.007 16:10:44 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:29:16.007 16:10:44 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 3927264 ']' 00:29:16.007 16:10:44 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 3927264 00:29:16.007 16:10:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@948 -- # '[' -z 3927264 ']' 00:29:16.007 16:10:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@952 -- # kill -0 3927264 00:29:16.007 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (3927264) - No such process 00:29:16.007 16:10:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@975 -- # echo 'Process with pid 3927264 is not found' 00:29:16.007 Process with pid 3927264 is not found 00:29:16.007 16:10:44 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:29:16.007 16:10:44 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:29:16.007 16:10:44 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:29:16.007 00:29:16.007 real 0m15.806s 00:29:16.007 user 0m32.780s 00:29:16.007 sys 0m0.671s 00:29:16.007 16:10:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:29:16.007 16:10:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:16.007 ************************************ 00:29:16.007 END TEST spdkcli_nvmf_tcp 00:29:16.007 ************************************ 00:29:16.007 16:10:44 -- common/autotest_common.sh@1142 -- # return 0 00:29:16.007 16:10:44 -- spdk/autotest.sh@290 -- # run_test nvmf_identify_passthru 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:29:16.007 16:10:44 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:29:16.007 16:10:44 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:16.007 16:10:44 -- common/autotest_common.sh@10 -- # set +x 00:29:16.007 ************************************ 00:29:16.007 START TEST nvmf_identify_passthru 00:29:16.007 ************************************ 00:29:16.007 16:10:44 nvmf_identify_passthru -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:29:16.007 * Looking for test storage... 00:29:16.266 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:29:16.266 16:10:44 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:16.266 16:10:44 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:29:16.266 16:10:44 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:16.266 16:10:44 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:16.266 16:10:44 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:16.266 16:10:44 nvmf_identify_passthru -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:16.266 16:10:44 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:16.266 16:10:44 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:16.266 16:10:44 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:16.266 16:10:44 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:16.266 16:10:44 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:16.266 16:10:44 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:16.266 16:10:44 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:29:16.266 16:10:44 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:29:16.266 16:10:44 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:16.266 16:10:44 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:16.266 16:10:44 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:16.266 16:10:44 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:16.266 16:10:44 nvmf_identify_passthru -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:16.266 16:10:44 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:16.266 16:10:44 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:16.266 16:10:44 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:16.266 16:10:44 nvmf_identify_passthru -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:16.266 16:10:44 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:16.266 16:10:44 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:16.266 16:10:44 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:29:16.266 16:10:44 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:16.266 16:10:44 nvmf_identify_passthru -- nvmf/common.sh@47 -- # : 0 00:29:16.266 16:10:44 nvmf_identify_passthru -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:29:16.266 16:10:44 nvmf_identify_passthru -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:29:16.266 16:10:44 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:16.267 16:10:44 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:16.267 16:10:44 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:16.267 16:10:44 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:29:16.267 16:10:44 nvmf_identify_passthru -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:29:16.267 16:10:44 nvmf_identify_passthru -- nvmf/common.sh@51 -- # have_pci_nics=0 00:29:16.267 16:10:44 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:16.267 16:10:44 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:16.267 16:10:44 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:16.267 16:10:44 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:16.267 16:10:44 nvmf_identify_passthru -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:16.267 16:10:44 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:16.267 16:10:44 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:16.267 16:10:44 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:29:16.267 16:10:44 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:16.267 16:10:44 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:29:16.267 16:10:44 nvmf_identify_passthru -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:29:16.267 16:10:44 nvmf_identify_passthru -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:16.267 16:10:44 nvmf_identify_passthru -- nvmf/common.sh@448 -- # prepare_net_devs 00:29:16.267 16:10:44 nvmf_identify_passthru -- nvmf/common.sh@410 -- # local -g is_hw=no 00:29:16.267 16:10:44 nvmf_identify_passthru -- nvmf/common.sh@412 -- # remove_spdk_ns 00:29:16.267 16:10:44 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:16.267 16:10:44 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:29:16.267 16:10:44 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:16.267 16:10:44 nvmf_identify_passthru -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:29:16.267 16:10:44 nvmf_identify_passthru -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:29:16.267 16:10:44 nvmf_identify_passthru -- nvmf/common.sh@285 -- # xtrace_disable 00:29:16.267 16:10:44 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:29:21.569 16:10:49 nvmf_identify_passthru -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:21.569 16:10:49 
nvmf_identify_passthru -- nvmf/common.sh@291 -- # pci_devs=() 00:29:21.569 16:10:49 nvmf_identify_passthru -- nvmf/common.sh@291 -- # local -a pci_devs 00:29:21.569 16:10:49 nvmf_identify_passthru -- nvmf/common.sh@292 -- # pci_net_devs=() 00:29:21.569 16:10:49 nvmf_identify_passthru -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:29:21.569 16:10:49 nvmf_identify_passthru -- nvmf/common.sh@293 -- # pci_drivers=() 00:29:21.569 16:10:49 nvmf_identify_passthru -- nvmf/common.sh@293 -- # local -A pci_drivers 00:29:21.569 16:10:49 nvmf_identify_passthru -- nvmf/common.sh@295 -- # net_devs=() 00:29:21.569 16:10:49 nvmf_identify_passthru -- nvmf/common.sh@295 -- # local -ga net_devs 00:29:21.569 16:10:49 nvmf_identify_passthru -- nvmf/common.sh@296 -- # e810=() 00:29:21.569 16:10:49 nvmf_identify_passthru -- nvmf/common.sh@296 -- # local -ga e810 00:29:21.569 16:10:49 nvmf_identify_passthru -- nvmf/common.sh@297 -- # x722=() 00:29:21.569 16:10:49 nvmf_identify_passthru -- nvmf/common.sh@297 -- # local -ga x722 00:29:21.569 16:10:49 nvmf_identify_passthru -- nvmf/common.sh@298 -- # mlx=() 00:29:21.569 16:10:49 nvmf_identify_passthru -- nvmf/common.sh@298 -- # local -ga mlx 00:29:21.569 16:10:49 nvmf_identify_passthru -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:21.569 16:10:49 nvmf_identify_passthru -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:21.569 16:10:49 nvmf_identify_passthru -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:21.569 16:10:49 nvmf_identify_passthru -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:21.569 16:10:49 nvmf_identify_passthru -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:21.569 16:10:49 nvmf_identify_passthru -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:21.569 16:10:49 nvmf_identify_passthru -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:21.569 16:10:49 nvmf_identify_passthru -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:21.569 16:10:49 nvmf_identify_passthru -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:21.569 16:10:49 nvmf_identify_passthru -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:21.569 16:10:49 nvmf_identify_passthru -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:21.569 16:10:49 nvmf_identify_passthru -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:29:21.569 16:10:49 nvmf_identify_passthru -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:29:21.569 16:10:49 nvmf_identify_passthru -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:29:21.569 16:10:49 nvmf_identify_passthru -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:29:21.569 16:10:49 nvmf_identify_passthru -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:29:21.569 16:10:49 nvmf_identify_passthru -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:29:21.569 16:10:49 nvmf_identify_passthru -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:21.569 16:10:49 nvmf_identify_passthru -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:29:21.569 Found 0000:86:00.0 (0x8086 - 0x159b) 00:29:21.569 16:10:49 nvmf_identify_passthru -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:21.569 16:10:49 nvmf_identify_passthru -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:21.569 16:10:49 nvmf_identify_passthru -- nvmf/common.sh@350 -- # [[ 0x159b == 
\0\x\1\0\1\7 ]] 00:29:21.569 16:10:49 nvmf_identify_passthru -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:21.569 16:10:49 nvmf_identify_passthru -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:21.569 16:10:49 nvmf_identify_passthru -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:21.569 16:10:49 nvmf_identify_passthru -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:29:21.569 Found 0000:86:00.1 (0x8086 - 0x159b) 00:29:21.569 16:10:49 nvmf_identify_passthru -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:21.569 16:10:49 nvmf_identify_passthru -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:21.569 16:10:49 nvmf_identify_passthru -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:21.569 16:10:49 nvmf_identify_passthru -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:21.569 16:10:49 nvmf_identify_passthru -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:21.569 16:10:49 nvmf_identify_passthru -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:29:21.569 16:10:49 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:29:21.569 16:10:49 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:29:21.569 16:10:49 nvmf_identify_passthru -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:21.569 16:10:49 nvmf_identify_passthru -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:21.569 16:10:49 nvmf_identify_passthru -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:21.569 16:10:49 nvmf_identify_passthru -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:21.569 16:10:49 nvmf_identify_passthru -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:21.569 16:10:49 nvmf_identify_passthru -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:21.569 16:10:49 nvmf_identify_passthru -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:21.569 16:10:49 nvmf_identify_passthru -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:29:21.569 Found net devices under 0000:86:00.0: cvl_0_0 00:29:21.569 16:10:49 nvmf_identify_passthru -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:21.569 16:10:49 nvmf_identify_passthru -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:21.569 16:10:49 nvmf_identify_passthru -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:21.569 16:10:49 nvmf_identify_passthru -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:21.570 16:10:49 nvmf_identify_passthru -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:21.570 16:10:49 nvmf_identify_passthru -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:21.570 16:10:49 nvmf_identify_passthru -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:21.570 16:10:49 nvmf_identify_passthru -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:21.570 16:10:49 nvmf_identify_passthru -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:29:21.570 Found net devices under 0000:86:00.1: cvl_0_1 00:29:21.570 16:10:49 nvmf_identify_passthru -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:21.570 16:10:49 nvmf_identify_passthru -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:29:21.570 16:10:49 nvmf_identify_passthru -- nvmf/common.sh@414 -- # is_hw=yes 00:29:21.570 16:10:49 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:29:21.570 16:10:49 nvmf_identify_passthru -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 
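The nvmf_tcp_init trace that follows builds a loopback test topology out of the two E810 ports discovered above: cvl_0_0 is moved into a private network namespace to host the target, while cvl_0_1 stays in the default namespace as the initiator. A condensed sketch of the equivalent setup, using the interface names and addresses from this run (a rough outline of the steps traced below, not the verbatim nvmf/common.sh logic):

ip netns add cvl_0_0_ns_spdk                                        # private namespace for the target side
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                           # move the target port out of the default namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator address (default namespace)
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address (inside the namespace)
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # admit NVMe/TCP traffic on the default port
ping -c 1 10.0.0.2                                                  # reachability check, repeated in reverse from inside the namespace

With this in place the target application is launched under "ip netns exec cvl_0_0_ns_spdk", which is why the nvmf_tgt command line later in the log carries that prefix.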
00:29:21.570 16:10:49 nvmf_identify_passthru -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:29:21.570 16:10:49 nvmf_identify_passthru -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:21.570 16:10:49 nvmf_identify_passthru -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:21.570 16:10:49 nvmf_identify_passthru -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:21.570 16:10:49 nvmf_identify_passthru -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:29:21.570 16:10:49 nvmf_identify_passthru -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:21.570 16:10:49 nvmf_identify_passthru -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:21.570 16:10:49 nvmf_identify_passthru -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:29:21.570 16:10:49 nvmf_identify_passthru -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:21.570 16:10:49 nvmf_identify_passthru -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:21.570 16:10:49 nvmf_identify_passthru -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:29:21.570 16:10:49 nvmf_identify_passthru -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:29:21.570 16:10:49 nvmf_identify_passthru -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:29:21.570 16:10:49 nvmf_identify_passthru -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:21.570 16:10:49 nvmf_identify_passthru -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:21.570 16:10:49 nvmf_identify_passthru -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:21.570 16:10:49 nvmf_identify_passthru -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:29:21.570 16:10:49 nvmf_identify_passthru -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:21.570 16:10:49 nvmf_identify_passthru -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:21.570 16:10:49 nvmf_identify_passthru -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:21.570 16:10:49 nvmf_identify_passthru -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:29:21.570 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:21.570 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.278 ms 00:29:21.570 00:29:21.570 --- 10.0.0.2 ping statistics --- 00:29:21.570 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:21.570 rtt min/avg/max/mdev = 0.278/0.278/0.278/0.000 ms 00:29:21.570 16:10:49 nvmf_identify_passthru -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:21.570 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:21.570 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.086 ms 00:29:21.570 00:29:21.570 --- 10.0.0.1 ping statistics --- 00:29:21.570 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:21.570 rtt min/avg/max/mdev = 0.086/0.086/0.086/0.000 ms 00:29:21.570 16:10:49 nvmf_identify_passthru -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:21.570 16:10:49 nvmf_identify_passthru -- nvmf/common.sh@422 -- # return 0 00:29:21.570 16:10:49 nvmf_identify_passthru -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:29:21.570 16:10:49 nvmf_identify_passthru -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:21.570 16:10:49 nvmf_identify_passthru -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:29:21.570 16:10:49 nvmf_identify_passthru -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:29:21.570 16:10:49 nvmf_identify_passthru -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:21.570 16:10:49 nvmf_identify_passthru -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:29:21.570 16:10:49 nvmf_identify_passthru -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:29:21.570 16:10:49 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:29:21.570 16:10:49 nvmf_identify_passthru -- common/autotest_common.sh@722 -- # xtrace_disable 00:29:21.570 16:10:49 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:29:21.570 16:10:49 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:29:21.570 16:10:49 nvmf_identify_passthru -- common/autotest_common.sh@1524 -- # bdfs=() 00:29:21.570 16:10:49 nvmf_identify_passthru -- common/autotest_common.sh@1524 -- # local bdfs 00:29:21.570 16:10:49 nvmf_identify_passthru -- common/autotest_common.sh@1525 -- # bdfs=($(get_nvme_bdfs)) 00:29:21.570 16:10:49 nvmf_identify_passthru -- common/autotest_common.sh@1525 -- # get_nvme_bdfs 00:29:21.570 16:10:49 nvmf_identify_passthru -- common/autotest_common.sh@1513 -- # bdfs=() 00:29:21.570 16:10:49 nvmf_identify_passthru -- common/autotest_common.sh@1513 -- # local bdfs 00:29:21.570 16:10:49 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:29:21.570 16:10:49 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:29:21.570 16:10:49 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:29:21.570 16:10:49 nvmf_identify_passthru -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:29:21.570 16:10:49 nvmf_identify_passthru -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:5e:00.0 00:29:21.570 16:10:49 nvmf_identify_passthru -- common/autotest_common.sh@1527 -- # echo 0000:5e:00.0 00:29:21.570 16:10:49 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:5e:00.0 00:29:21.570 16:10:49 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:5e:00.0 ']' 00:29:21.570 16:10:49 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:29:21.570 16:10:49 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:5e:00.0' -i 0 00:29:21.570 16:10:49 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:29:21.570 EAL: No free 2048 kB hugepages reported on node 1 00:29:25.755 
16:10:54 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # nvme_serial_number=BTLJ72430F0E1P0FGN 00:29:25.755 16:10:54 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:5e:00.0' -i 0 00:29:25.755 16:10:54 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:29:25.755 16:10:54 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:29:25.755 EAL: No free 2048 kB hugepages reported on node 1 00:29:29.941 16:10:58 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=INTEL 00:29:29.941 16:10:58 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:29:29.941 16:10:58 nvmf_identify_passthru -- common/autotest_common.sh@728 -- # xtrace_disable 00:29:29.941 16:10:58 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:29:29.941 16:10:58 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:29:29.941 16:10:58 nvmf_identify_passthru -- common/autotest_common.sh@722 -- # xtrace_disable 00:29:29.941 16:10:58 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:29:29.941 16:10:58 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=3934280 00:29:29.941 16:10:58 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:29:29.941 16:10:58 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 3934280 00:29:29.941 16:10:58 nvmf_identify_passthru -- common/autotest_common.sh@829 -- # '[' -z 3934280 ']' 00:29:29.941 16:10:58 nvmf_identify_passthru -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:29.941 16:10:58 nvmf_identify_passthru -- common/autotest_common.sh@834 -- # local max_retries=100 00:29:29.941 16:10:58 nvmf_identify_passthru -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:29.941 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:29.941 16:10:58 nvmf_identify_passthru -- common/autotest_common.sh@838 -- # xtrace_disable 00:29:29.941 16:10:58 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:29:29.941 16:10:58 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:29:29.941 [2024-07-15 16:10:58.316123] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 00:29:29.941 [2024-07-15 16:10:58.316171] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:29.941 EAL: No free 2048 kB hugepages reported on node 1 00:29:29.941 [2024-07-15 16:10:58.373677] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:29.941 [2024-07-15 16:10:58.453586] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:29.941 [2024-07-15 16:10:58.453623] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:29:29.941 [2024-07-15 16:10:58.453633] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:29.941 [2024-07-15 16:10:58.453639] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:29.941 [2024-07-15 16:10:58.453644] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:29.941 [2024-07-15 16:10:58.453693] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:29:29.941 [2024-07-15 16:10:58.453710] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:29:29.941 [2024-07-15 16:10:58.454008] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:29:29.941 [2024-07-15 16:10:58.454010] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:29:30.200 16:10:59 nvmf_identify_passthru -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:29:30.200 16:10:59 nvmf_identify_passthru -- common/autotest_common.sh@862 -- # return 0 00:29:30.200 16:10:59 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:29:30.200 16:10:59 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:30.200 16:10:59 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:29:30.200 INFO: Log level set to 20 00:29:30.200 INFO: Requests: 00:29:30.200 { 00:29:30.200 "jsonrpc": "2.0", 00:29:30.200 "method": "nvmf_set_config", 00:29:30.200 "id": 1, 00:29:30.200 "params": { 00:29:30.200 "admin_cmd_passthru": { 00:29:30.200 "identify_ctrlr": true 00:29:30.200 } 00:29:30.200 } 00:29:30.200 } 00:29:30.200 00:29:30.459 INFO: response: 00:29:30.459 { 00:29:30.459 "jsonrpc": "2.0", 00:29:30.459 "id": 1, 00:29:30.459 "result": true 00:29:30.459 } 00:29:30.459 00:29:30.459 16:10:59 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:30.459 16:10:59 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:29:30.459 16:10:59 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:30.459 16:10:59 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:29:30.459 INFO: Setting log level to 20 00:29:30.459 INFO: Setting log level to 20 00:29:30.459 INFO: Log level set to 20 00:29:30.459 INFO: Log level set to 20 00:29:30.459 INFO: Requests: 00:29:30.459 { 00:29:30.459 "jsonrpc": "2.0", 00:29:30.459 "method": "framework_start_init", 00:29:30.459 "id": 1 00:29:30.459 } 00:29:30.459 00:29:30.459 INFO: Requests: 00:29:30.459 { 00:29:30.459 "jsonrpc": "2.0", 00:29:30.459 "method": "framework_start_init", 00:29:30.459 "id": 1 00:29:30.459 } 00:29:30.459 00:29:30.459 [2024-07-15 16:10:59.228160] nvmf_tgt.c: 451:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:29:30.459 INFO: response: 00:29:30.459 { 00:29:30.459 "jsonrpc": "2.0", 00:29:30.459 "id": 1, 00:29:30.459 "result": true 00:29:30.459 } 00:29:30.459 00:29:30.459 INFO: response: 00:29:30.459 { 00:29:30.459 "jsonrpc": "2.0", 00:29:30.459 "id": 1, 00:29:30.459 "result": true 00:29:30.459 } 00:29:30.459 00:29:30.459 16:10:59 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:30.459 16:10:59 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:30.459 16:10:59 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:30.459 16:10:59 nvmf_identify_passthru -- 
common/autotest_common.sh@10 -- # set +x 00:29:30.459 INFO: Setting log level to 40 00:29:30.459 INFO: Setting log level to 40 00:29:30.459 INFO: Setting log level to 40 00:29:30.459 [2024-07-15 16:10:59.241719] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:30.459 16:10:59 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:30.459 16:10:59 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:29:30.459 16:10:59 nvmf_identify_passthru -- common/autotest_common.sh@728 -- # xtrace_disable 00:29:30.459 16:10:59 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:29:30.459 16:10:59 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:5e:00.0 00:29:30.459 16:10:59 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:30.459 16:10:59 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:29:33.745 Nvme0n1 00:29:33.745 16:11:02 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:33.745 16:11:02 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:29:33.745 16:11:02 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:33.745 16:11:02 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:29:33.745 16:11:02 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:33.745 16:11:02 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:29:33.745 16:11:02 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:33.745 16:11:02 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:29:33.745 16:11:02 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:33.745 16:11:02 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:33.745 16:11:02 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:33.745 16:11:02 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:29:33.745 [2024-07-15 16:11:02.136273] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:33.745 16:11:02 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:33.745 16:11:02 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:29:33.745 16:11:02 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:33.745 16:11:02 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:29:33.745 [ 00:29:33.745 { 00:29:33.745 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:29:33.745 "subtype": "Discovery", 00:29:33.745 "listen_addresses": [], 00:29:33.745 "allow_any_host": true, 00:29:33.745 "hosts": [] 00:29:33.745 }, 00:29:33.745 { 00:29:33.745 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:29:33.745 "subtype": "NVMe", 00:29:33.745 "listen_addresses": [ 00:29:33.745 { 00:29:33.745 "trtype": "TCP", 00:29:33.745 "adrfam": "IPv4", 00:29:33.745 "traddr": "10.0.0.2", 00:29:33.745 "trsvcid": "4420" 00:29:33.745 } 00:29:33.745 ], 00:29:33.745 "allow_any_host": true, 00:29:33.745 "hosts": [], 00:29:33.745 "serial_number": 
"SPDK00000000000001", 00:29:33.745 "model_number": "SPDK bdev Controller", 00:29:33.745 "max_namespaces": 1, 00:29:33.745 "min_cntlid": 1, 00:29:33.745 "max_cntlid": 65519, 00:29:33.745 "namespaces": [ 00:29:33.745 { 00:29:33.745 "nsid": 1, 00:29:33.745 "bdev_name": "Nvme0n1", 00:29:33.746 "name": "Nvme0n1", 00:29:33.746 "nguid": "ED58E36C621048EF9F0C0CC6766E0DAA", 00:29:33.746 "uuid": "ed58e36c-6210-48ef-9f0c-0cc6766e0daa" 00:29:33.746 } 00:29:33.746 ] 00:29:33.746 } 00:29:33.746 ] 00:29:33.746 16:11:02 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:33.746 16:11:02 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:29:33.746 16:11:02 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:29:33.746 16:11:02 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:29:33.746 EAL: No free 2048 kB hugepages reported on node 1 00:29:33.746 16:11:02 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=BTLJ72430F0E1P0FGN 00:29:33.746 16:11:02 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:29:33.746 16:11:02 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:29:33.746 16:11:02 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:29:33.746 EAL: No free 2048 kB hugepages reported on node 1 00:29:33.746 16:11:02 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=INTEL 00:29:33.746 16:11:02 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' BTLJ72430F0E1P0FGN '!=' BTLJ72430F0E1P0FGN ']' 00:29:33.746 16:11:02 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' INTEL '!=' INTEL ']' 00:29:33.746 16:11:02 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:33.746 16:11:02 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:33.746 16:11:02 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:29:33.746 16:11:02 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:33.746 16:11:02 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:29:33.746 16:11:02 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:29:33.746 16:11:02 nvmf_identify_passthru -- nvmf/common.sh@488 -- # nvmfcleanup 00:29:33.746 16:11:02 nvmf_identify_passthru -- nvmf/common.sh@117 -- # sync 00:29:33.746 16:11:02 nvmf_identify_passthru -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:29:33.746 16:11:02 nvmf_identify_passthru -- nvmf/common.sh@120 -- # set +e 00:29:33.746 16:11:02 nvmf_identify_passthru -- nvmf/common.sh@121 -- # for i in {1..20} 00:29:33.746 16:11:02 nvmf_identify_passthru -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:29:33.746 rmmod nvme_tcp 00:29:33.746 rmmod nvme_fabrics 00:29:33.746 rmmod nvme_keyring 00:29:33.746 16:11:02 nvmf_identify_passthru -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:29:33.746 16:11:02 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set -e 00:29:33.746 16:11:02 
nvmf_identify_passthru -- nvmf/common.sh@125 -- # return 0 00:29:33.746 16:11:02 nvmf_identify_passthru -- nvmf/common.sh@489 -- # '[' -n 3934280 ']' 00:29:33.746 16:11:02 nvmf_identify_passthru -- nvmf/common.sh@490 -- # killprocess 3934280 00:29:33.746 16:11:02 nvmf_identify_passthru -- common/autotest_common.sh@948 -- # '[' -z 3934280 ']' 00:29:33.746 16:11:02 nvmf_identify_passthru -- common/autotest_common.sh@952 -- # kill -0 3934280 00:29:33.746 16:11:02 nvmf_identify_passthru -- common/autotest_common.sh@953 -- # uname 00:29:33.746 16:11:02 nvmf_identify_passthru -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:29:33.746 16:11:02 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3934280 00:29:33.746 16:11:02 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:29:33.746 16:11:02 nvmf_identify_passthru -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:29:33.746 16:11:02 nvmf_identify_passthru -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3934280' 00:29:33.746 killing process with pid 3934280 00:29:33.746 16:11:02 nvmf_identify_passthru -- common/autotest_common.sh@967 -- # kill 3934280 00:29:33.746 16:11:02 nvmf_identify_passthru -- common/autotest_common.sh@972 -- # wait 3934280 00:29:35.649 16:11:04 nvmf_identify_passthru -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:29:35.649 16:11:04 nvmf_identify_passthru -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:29:35.649 16:11:04 nvmf_identify_passthru -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:29:35.649 16:11:04 nvmf_identify_passthru -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:29:35.649 16:11:04 nvmf_identify_passthru -- nvmf/common.sh@278 -- # remove_spdk_ns 00:29:35.649 16:11:04 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:35.649 16:11:04 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:29:35.649 16:11:04 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:37.555 16:11:06 nvmf_identify_passthru -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:29:37.555 00:29:37.555 real 0m21.264s 00:29:37.555 user 0m29.643s 00:29:37.555 sys 0m4.415s 00:29:37.555 16:11:06 nvmf_identify_passthru -- common/autotest_common.sh@1124 -- # xtrace_disable 00:29:37.555 16:11:06 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:29:37.555 ************************************ 00:29:37.555 END TEST nvmf_identify_passthru 00:29:37.555 ************************************ 00:29:37.555 16:11:06 -- common/autotest_common.sh@1142 -- # return 0 00:29:37.555 16:11:06 -- spdk/autotest.sh@292 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:29:37.555 16:11:06 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:29:37.555 16:11:06 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:37.555 16:11:06 -- common/autotest_common.sh@10 -- # set +x 00:29:37.555 ************************************ 00:29:37.555 START TEST nvmf_dif 00:29:37.555 ************************************ 00:29:37.555 16:11:06 nvmf_dif -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:29:37.555 * Looking for test storage... 
00:29:37.555 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:29:37.555 16:11:06 nvmf_dif -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:37.555 16:11:06 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:29:37.555 16:11:06 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:37.555 16:11:06 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:37.555 16:11:06 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:37.555 16:11:06 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:37.555 16:11:06 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:37.555 16:11:06 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:37.555 16:11:06 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:37.555 16:11:06 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:37.555 16:11:06 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:37.555 16:11:06 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:37.555 16:11:06 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:29:37.555 16:11:06 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:29:37.555 16:11:06 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:37.555 16:11:06 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:37.555 16:11:06 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:37.555 16:11:06 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:37.555 16:11:06 nvmf_dif -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:37.555 16:11:06 nvmf_dif -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:37.555 16:11:06 nvmf_dif -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:37.555 16:11:06 nvmf_dif -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:37.555 16:11:06 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:37.555 16:11:06 nvmf_dif -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:37.555 16:11:06 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:37.555 16:11:06 nvmf_dif -- paths/export.sh@5 -- # 
export PATH 00:29:37.555 16:11:06 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:37.555 16:11:06 nvmf_dif -- nvmf/common.sh@47 -- # : 0 00:29:37.555 16:11:06 nvmf_dif -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:29:37.555 16:11:06 nvmf_dif -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:29:37.555 16:11:06 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:37.556 16:11:06 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:37.556 16:11:06 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:37.556 16:11:06 nvmf_dif -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:29:37.556 16:11:06 nvmf_dif -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:29:37.556 16:11:06 nvmf_dif -- nvmf/common.sh@51 -- # have_pci_nics=0 00:29:37.556 16:11:06 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:29:37.556 16:11:06 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:29:37.556 16:11:06 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:29:37.556 16:11:06 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:29:37.556 16:11:06 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:29:37.556 16:11:06 nvmf_dif -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:29:37.556 16:11:06 nvmf_dif -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:37.556 16:11:06 nvmf_dif -- nvmf/common.sh@448 -- # prepare_net_devs 00:29:37.556 16:11:06 nvmf_dif -- nvmf/common.sh@410 -- # local -g is_hw=no 00:29:37.556 16:11:06 nvmf_dif -- nvmf/common.sh@412 -- # remove_spdk_ns 00:29:37.556 16:11:06 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:37.556 16:11:06 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:29:37.556 16:11:06 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:37.556 16:11:06 nvmf_dif -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:29:37.556 16:11:06 nvmf_dif -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:29:37.556 16:11:06 nvmf_dif -- nvmf/common.sh@285 -- # xtrace_disable 00:29:37.556 16:11:06 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:29:42.823 16:11:11 nvmf_dif -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:42.823 16:11:11 nvmf_dif -- nvmf/common.sh@291 -- # pci_devs=() 00:29:42.823 16:11:11 nvmf_dif -- nvmf/common.sh@291 -- # local -a pci_devs 00:29:42.823 16:11:11 nvmf_dif -- nvmf/common.sh@292 -- # pci_net_devs=() 00:29:42.823 16:11:11 nvmf_dif -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:29:42.823 16:11:11 nvmf_dif -- nvmf/common.sh@293 -- # pci_drivers=() 00:29:42.823 16:11:11 nvmf_dif -- nvmf/common.sh@293 -- # local -A pci_drivers 00:29:42.823 16:11:11 nvmf_dif -- nvmf/common.sh@295 -- # net_devs=() 00:29:42.823 16:11:11 nvmf_dif -- nvmf/common.sh@295 -- # local -ga net_devs 00:29:42.823 16:11:11 nvmf_dif -- nvmf/common.sh@296 -- # e810=() 00:29:42.823 16:11:11 nvmf_dif -- nvmf/common.sh@296 -- # local -ga e810 00:29:42.823 16:11:11 nvmf_dif -- nvmf/common.sh@297 -- # x722=() 00:29:42.823 16:11:11 nvmf_dif -- nvmf/common.sh@297 -- # local -ga x722 00:29:42.823 16:11:11 nvmf_dif -- nvmf/common.sh@298 
-- # mlx=() 00:29:42.823 16:11:11 nvmf_dif -- nvmf/common.sh@298 -- # local -ga mlx 00:29:42.823 16:11:11 nvmf_dif -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:42.823 16:11:11 nvmf_dif -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:42.823 16:11:11 nvmf_dif -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:42.823 16:11:11 nvmf_dif -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:42.823 16:11:11 nvmf_dif -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:42.823 16:11:11 nvmf_dif -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:42.823 16:11:11 nvmf_dif -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:42.823 16:11:11 nvmf_dif -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:42.823 16:11:11 nvmf_dif -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:42.823 16:11:11 nvmf_dif -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:42.823 16:11:11 nvmf_dif -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:42.823 16:11:11 nvmf_dif -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:29:42.823 16:11:11 nvmf_dif -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:29:42.823 16:11:11 nvmf_dif -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:29:42.823 16:11:11 nvmf_dif -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:29:42.823 16:11:11 nvmf_dif -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:29:42.823 16:11:11 nvmf_dif -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:29:42.823 16:11:11 nvmf_dif -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:42.823 16:11:11 nvmf_dif -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:29:42.823 Found 0000:86:00.0 (0x8086 - 0x159b) 00:29:42.823 16:11:11 nvmf_dif -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:42.823 16:11:11 nvmf_dif -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:42.823 16:11:11 nvmf_dif -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:42.823 16:11:11 nvmf_dif -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:42.823 16:11:11 nvmf_dif -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:42.823 16:11:11 nvmf_dif -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:42.823 16:11:11 nvmf_dif -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:29:42.823 Found 0000:86:00.1 (0x8086 - 0x159b) 00:29:42.823 16:11:11 nvmf_dif -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:42.823 16:11:11 nvmf_dif -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:42.823 16:11:11 nvmf_dif -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:42.823 16:11:11 nvmf_dif -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:42.823 16:11:11 nvmf_dif -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:42.823 16:11:11 nvmf_dif -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:29:42.823 16:11:11 nvmf_dif -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:29:42.823 16:11:11 nvmf_dif -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:29:42.823 16:11:11 nvmf_dif -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:42.823 16:11:11 nvmf_dif -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:42.823 16:11:11 nvmf_dif -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:42.823 16:11:11 nvmf_dif -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 
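The device loop being traced here resolves each allow-listed PCI function to its kernel net device by globbing sysfs, then strips the directory prefix with a ${var##*/} expansion so only the interface name remains. A minimal standalone illustration of the same lookup, using device 0000:86:00.0 from this run (an illustrative fragment, not part of the test suite):

pci=0000:86:00.0
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # expands to e.g. /sys/bus/pci/devices/0000:86:00.0/net/cvl_0_0
pci_net_devs=("${pci_net_devs[@]##*/}")            # keep only the basename: cvl_0_0
echo "Found net devices under $pci: ${pci_net_devs[*]}"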
00:29:42.823 16:11:11 nvmf_dif -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:42.823 16:11:11 nvmf_dif -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:42.823 16:11:11 nvmf_dif -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:42.823 16:11:11 nvmf_dif -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:29:42.823 Found net devices under 0000:86:00.0: cvl_0_0 00:29:42.823 16:11:11 nvmf_dif -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:42.823 16:11:11 nvmf_dif -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:42.823 16:11:11 nvmf_dif -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:42.823 16:11:11 nvmf_dif -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:42.823 16:11:11 nvmf_dif -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:42.823 16:11:11 nvmf_dif -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:42.823 16:11:11 nvmf_dif -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:42.823 16:11:11 nvmf_dif -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:42.823 16:11:11 nvmf_dif -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:29:42.823 Found net devices under 0000:86:00.1: cvl_0_1 00:29:42.823 16:11:11 nvmf_dif -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:42.823 16:11:11 nvmf_dif -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:29:42.823 16:11:11 nvmf_dif -- nvmf/common.sh@414 -- # is_hw=yes 00:29:42.823 16:11:11 nvmf_dif -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:29:42.823 16:11:11 nvmf_dif -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:29:42.823 16:11:11 nvmf_dif -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:29:42.823 16:11:11 nvmf_dif -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:42.823 16:11:11 nvmf_dif -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:42.823 16:11:11 nvmf_dif -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:42.823 16:11:11 nvmf_dif -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:29:42.823 16:11:11 nvmf_dif -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:42.823 16:11:11 nvmf_dif -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:42.823 16:11:11 nvmf_dif -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:29:42.823 16:11:11 nvmf_dif -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:42.823 16:11:11 nvmf_dif -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:42.823 16:11:11 nvmf_dif -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:29:42.823 16:11:11 nvmf_dif -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:29:42.823 16:11:11 nvmf_dif -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:29:42.823 16:11:11 nvmf_dif -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:42.823 16:11:11 nvmf_dif -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:42.823 16:11:11 nvmf_dif -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:42.823 16:11:11 nvmf_dif -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:29:42.823 16:11:11 nvmf_dif -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:42.823 16:11:11 nvmf_dif -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:42.824 16:11:11 nvmf_dif -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:42.824 16:11:11 
nvmf_dif -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:29:42.824 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:42.824 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.170 ms 00:29:42.824 00:29:42.824 --- 10.0.0.2 ping statistics --- 00:29:42.824 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:42.824 rtt min/avg/max/mdev = 0.170/0.170/0.170/0.000 ms 00:29:42.824 16:11:11 nvmf_dif -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:42.824 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:42.824 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.146 ms 00:29:42.824 00:29:42.824 --- 10.0.0.1 ping statistics --- 00:29:42.824 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:42.824 rtt min/avg/max/mdev = 0.146/0.146/0.146/0.000 ms 00:29:42.824 16:11:11 nvmf_dif -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:42.824 16:11:11 nvmf_dif -- nvmf/common.sh@422 -- # return 0 00:29:42.824 16:11:11 nvmf_dif -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:29:42.824 16:11:11 nvmf_dif -- nvmf/common.sh@451 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:29:44.732 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:29:44.732 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:29:44.732 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:29:44.732 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:29:44.991 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:29:44.991 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:29:44.991 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:29:44.991 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:29:44.991 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:29:44.991 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:29:44.991 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:29:44.991 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:29:44.991 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:29:44.991 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:29:44.991 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:29:44.991 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:29:44.991 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:29:44.991 16:11:13 nvmf_dif -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:44.991 16:11:13 nvmf_dif -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:29:44.991 16:11:13 nvmf_dif -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:29:44.991 16:11:13 nvmf_dif -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:44.991 16:11:13 nvmf_dif -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:29:44.991 16:11:13 nvmf_dif -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:29:44.991 16:11:13 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:29:44.991 16:11:13 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:29:44.991 16:11:13 nvmf_dif -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:29:44.991 16:11:13 nvmf_dif -- common/autotest_common.sh@722 -- # xtrace_disable 00:29:44.991 16:11:13 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:29:44.991 16:11:13 nvmf_dif -- nvmf/common.sh@481 -- # nvmfpid=3939740 00:29:44.991 16:11:13 nvmf_dif -- nvmf/common.sh@482 -- # waitforlisten 3939740 00:29:44.991 16:11:13 nvmf_dif -- nvmf/common.sh@480 -- # ip 
netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:29:44.991 16:11:13 nvmf_dif -- common/autotest_common.sh@829 -- # '[' -z 3939740 ']' 00:29:44.991 16:11:13 nvmf_dif -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:44.991 16:11:13 nvmf_dif -- common/autotest_common.sh@834 -- # local max_retries=100 00:29:44.991 16:11:13 nvmf_dif -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:44.991 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:44.991 16:11:13 nvmf_dif -- common/autotest_common.sh@838 -- # xtrace_disable 00:29:44.991 16:11:13 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:29:44.991 [2024-07-15 16:11:13.893498] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 00:29:44.991 [2024-07-15 16:11:13.893539] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:44.991 EAL: No free 2048 kB hugepages reported on node 1 00:29:45.250 [2024-07-15 16:11:13.950472] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:45.250 [2024-07-15 16:11:14.029080] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:45.250 [2024-07-15 16:11:14.029112] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:45.250 [2024-07-15 16:11:14.029119] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:45.250 [2024-07-15 16:11:14.029128] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:45.250 [2024-07-15 16:11:14.029133] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
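The notices above come from nvmf_tgt itself, launched inside the cvl_0_0_ns_spdk namespace with shared-memory ID 0 (-i 0) and all tracepoint groups enabled (-e 0xFFFF). The topology it listens in was assembled by nvmf_tcp_init earlier in the trace; reconstructed from the logged commands, the essential steps are (a sketch of the sequence, not verbatim script code):

    # Move the target port into its own namespace; the second E810 port stays
    # in the root namespace and acts as the initiator side.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator address
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target address
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT       # open the NVMe/TCP port
    ping -c 1 10.0.0.2                                                 # initiator -> target check

Both pings in the trace complete in well under a millisecond (0.170 ms and 0.146 ms), so the two NIC ports form a working TCP path between initiator and target on this single host before the target application starts.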
00:29:45.250 [2024-07-15 16:11:14.029148] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:29:45.817 16:11:14 nvmf_dif -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:29:45.817 16:11:14 nvmf_dif -- common/autotest_common.sh@862 -- # return 0 00:29:45.817 16:11:14 nvmf_dif -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:29:45.817 16:11:14 nvmf_dif -- common/autotest_common.sh@728 -- # xtrace_disable 00:29:45.817 16:11:14 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:29:45.817 16:11:14 nvmf_dif -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:45.817 16:11:14 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:29:45.817 16:11:14 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:29:45.817 16:11:14 nvmf_dif -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:45.817 16:11:14 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:29:45.817 [2024-07-15 16:11:14.735424] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:45.817 16:11:14 nvmf_dif -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:45.817 16:11:14 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:29:45.817 16:11:14 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:29:45.817 16:11:14 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:45.817 16:11:14 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:29:46.076 ************************************ 00:29:46.076 START TEST fio_dif_1_default 00:29:46.076 ************************************ 00:29:46.076 16:11:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1123 -- # fio_dif_1 00:29:46.076 16:11:14 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:29:46.076 16:11:14 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:29:46.076 16:11:14 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:29:46.076 16:11:14 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:29:46.076 16:11:14 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:29:46.076 16:11:14 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:29:46.076 16:11:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:46.076 16:11:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:29:46.076 bdev_null0 00:29:46.076 16:11:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:46.076 16:11:14 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:29:46.076 16:11:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:46.076 16:11:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:29:46.076 16:11:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:46.076 16:11:14 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:29:46.076 16:11:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:46.076 16:11:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:29:46.076 16:11:14 nvmf_dif.fio_dif_1_default -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:46.076 16:11:14 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:29:46.076 16:11:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:46.076 16:11:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:29:46.076 [2024-07-15 16:11:14.807706] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:46.076 16:11:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:46.076 16:11:14 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:29:46.076 16:11:14 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:29:46.076 16:11:14 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:29:46.076 16:11:14 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:29:46.076 16:11:14 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # config=() 00:29:46.076 16:11:14 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:29:46.076 16:11:14 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:29:46.076 16:11:14 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # local subsystem config 00:29:46.076 16:11:14 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:29:46.076 16:11:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:29:46.076 16:11:14 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:46.076 16:11:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:29:46.076 16:11:14 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:46.076 { 00:29:46.076 "params": { 00:29:46.076 "name": "Nvme$subsystem", 00:29:46.076 "trtype": "$TEST_TRANSPORT", 00:29:46.076 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:46.076 "adrfam": "ipv4", 00:29:46.076 "trsvcid": "$NVMF_PORT", 00:29:46.076 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:46.076 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:46.076 "hdgst": ${hdgst:-false}, 00:29:46.076 "ddgst": ${ddgst:-false} 00:29:46.076 }, 00:29:46.076 "method": "bdev_nvme_attach_controller" 00:29:46.076 } 00:29:46.076 EOF 00:29:46.076 )") 00:29:46.076 16:11:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:29:46.076 16:11:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # local sanitizers 00:29:46.076 16:11:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:29:46.076 16:11:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # shift 00:29:46.076 16:11:14 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # cat 00:29:46.076 16:11:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local asan_lib= 00:29:46.076 16:11:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:29:46.076 16:11:14 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:29:46.076 16:11:14 nvmf_dif.fio_dif_1_default 
-- target/dif.sh@72 -- # (( file <= files )) 00:29:46.076 16:11:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:29:46.076 16:11:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libasan 00:29:46.076 16:11:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:29:46.076 16:11:14 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@556 -- # jq . 00:29:46.076 16:11:14 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@557 -- # IFS=, 00:29:46.076 16:11:14 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:29:46.076 "params": { 00:29:46.076 "name": "Nvme0", 00:29:46.076 "trtype": "tcp", 00:29:46.076 "traddr": "10.0.0.2", 00:29:46.076 "adrfam": "ipv4", 00:29:46.076 "trsvcid": "4420", 00:29:46.076 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:46.076 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:29:46.076 "hdgst": false, 00:29:46.076 "ddgst": false 00:29:46.076 }, 00:29:46.076 "method": "bdev_nvme_attach_controller" 00:29:46.076 }' 00:29:46.076 16:11:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib= 00:29:46.076 16:11:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:29:46.076 16:11:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:29:46.076 16:11:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:29:46.076 16:11:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:29:46.076 16:11:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:29:46.077 16:11:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib= 00:29:46.077 16:11:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:29:46.077 16:11:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:29:46.077 16:11:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:29:46.335 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:29:46.335 fio-3.35 00:29:46.335 Starting 1 thread 00:29:46.335 EAL: No free 2048 kB hugepages reported on node 1 00:29:58.567 00:29:58.567 filename0: (groupid=0, jobs=1): err= 0: pid=3940115: Mon Jul 15 16:11:25 2024 00:29:58.567 read: IOPS=95, BW=382KiB/s (391kB/s)(3824KiB/10020msec) 00:29:58.567 slat (nsec): min=6087, max=62056, avg=6713.87, stdev=2379.97 00:29:58.567 clat (usec): min=40915, max=43469, avg=41903.37, stdev=297.17 00:29:58.567 lat (usec): min=40921, max=43499, avg=41910.08, stdev=297.18 00:29:58.567 clat percentiles (usec): 00:29:58.567 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41681], 20.00th=[42206], 00:29:58.567 | 30.00th=[42206], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:29:58.567 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:29:58.567 | 99.00th=[42206], 99.50th=[42206], 99.90th=[43254], 99.95th=[43254], 00:29:58.567 | 99.99th=[43254] 00:29:58.567 bw ( KiB/s): min= 352, max= 384, per=99.57%, avg=380.80, stdev= 9.85, samples=20 00:29:58.567 iops : min= 88, max= 96, avg=95.20, stdev= 2.46, samples=20 00:29:58.567 
lat (msec) : 50=100.00% 00:29:58.567 cpu : usr=95.03%, sys=4.69%, ctx=18, majf=0, minf=215 00:29:58.567 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:29:58.567 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:58.567 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:58.567 issued rwts: total=956,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:58.567 latency : target=0, window=0, percentile=100.00%, depth=4 00:29:58.567 00:29:58.567 Run status group 0 (all jobs): 00:29:58.567 READ: bw=382KiB/s (391kB/s), 382KiB/s-382KiB/s (391kB/s-391kB/s), io=3824KiB (3916kB), run=10020-10020msec 00:29:58.567 16:11:25 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:29:58.567 16:11:25 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:29:58.567 16:11:25 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:29:58.567 16:11:25 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:29:58.567 16:11:25 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:29:58.567 16:11:25 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:29:58.568 16:11:25 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:58.568 16:11:25 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:29:58.568 16:11:25 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:58.568 16:11:25 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:29:58.568 16:11:25 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:58.568 16:11:25 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:29:58.568 16:11:25 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:58.568 00:29:58.568 real 0m11.200s 00:29:58.568 user 0m16.500s 00:29:58.568 sys 0m0.745s 00:29:58.568 16:11:25 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1124 -- # xtrace_disable 00:29:58.568 16:11:25 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:29:58.568 ************************************ 00:29:58.568 END TEST fio_dif_1_default 00:29:58.568 ************************************ 00:29:58.568 16:11:26 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:29:58.568 16:11:26 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:29:58.568 16:11:26 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:29:58.568 16:11:26 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:58.568 16:11:26 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:29:58.568 ************************************ 00:29:58.568 START TEST fio_dif_1_multi_subsystems 00:29:58.568 ************************************ 00:29:58.568 16:11:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1123 -- # fio_dif_1_multi_subsystems 00:29:58.568 16:11:26 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:29:58.568 16:11:26 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:29:58.568 16:11:26 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:29:58.568 16:11:26 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:29:58.568 16:11:26 
nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:29:58.568 16:11:26 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:29:58.568 16:11:26 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:29:58.568 16:11:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:58.568 16:11:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:29:58.568 bdev_null0 00:29:58.568 16:11:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:58.568 16:11:26 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:29:58.568 16:11:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:58.568 16:11:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:29:58.568 16:11:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:58.568 16:11:26 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:29:58.568 16:11:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:58.568 16:11:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:29:58.568 16:11:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:58.568 16:11:26 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:29:58.568 16:11:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:58.568 16:11:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:29:58.568 [2024-07-15 16:11:26.081508] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:58.568 16:11:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:58.568 16:11:26 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:29:58.568 16:11:26 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:29:58.568 16:11:26 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:29:58.568 16:11:26 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:29:58.568 16:11:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:58.568 16:11:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:29:58.568 bdev_null1 00:29:58.568 16:11:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:58.568 16:11:26 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:29:58.568 16:11:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:58.568 16:11:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:29:58.568 16:11:26 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:58.568 16:11:26 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:29:58.568 16:11:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:58.568 16:11:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:29:58.568 16:11:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:58.568 16:11:26 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:58.568 16:11:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:58.568 16:11:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:29:58.568 16:11:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:58.568 16:11:26 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:29:58.568 16:11:26 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:29:58.568 16:11:26 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:29:58.568 16:11:26 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # config=() 00:29:58.568 16:11:26 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:29:58.568 16:11:26 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # local subsystem config 00:29:58.568 16:11:26 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:58.568 16:11:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:29:58.568 16:11:26 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:29:58.568 16:11:26 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:58.568 { 00:29:58.568 "params": { 00:29:58.568 "name": "Nvme$subsystem", 00:29:58.568 "trtype": "$TEST_TRANSPORT", 00:29:58.568 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:58.568 "adrfam": "ipv4", 00:29:58.568 "trsvcid": "$NVMF_PORT", 00:29:58.568 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:58.568 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:58.568 "hdgst": ${hdgst:-false}, 00:29:58.568 "ddgst": ${ddgst:-false} 00:29:58.568 }, 00:29:58.568 "method": "bdev_nvme_attach_controller" 00:29:58.568 } 00:29:58.568 EOF 00:29:58.568 )") 00:29:58.568 16:11:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:29:58.568 16:11:26 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:29:58.568 16:11:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:29:58.568 16:11:26 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:29:58.568 16:11:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # local sanitizers 00:29:58.568 16:11:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:29:58.568 
16:11:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # shift 00:29:58.568 16:11:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local asan_lib= 00:29:58.568 16:11:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:29:58.568 16:11:26 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:29:58.568 16:11:26 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:29:58.568 16:11:26 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:29:58.568 16:11:26 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:29:58.568 16:11:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:29:58.568 16:11:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libasan 00:29:58.568 16:11:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:29:58.568 16:11:26 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:58.568 16:11:26 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:29:58.568 16:11:26 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:29:58.568 16:11:26 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:58.568 { 00:29:58.568 "params": { 00:29:58.568 "name": "Nvme$subsystem", 00:29:58.568 "trtype": "$TEST_TRANSPORT", 00:29:58.568 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:58.568 "adrfam": "ipv4", 00:29:58.568 "trsvcid": "$NVMF_PORT", 00:29:58.568 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:58.568 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:58.568 "hdgst": ${hdgst:-false}, 00:29:58.568 "ddgst": ${ddgst:-false} 00:29:58.568 }, 00:29:58.568 "method": "bdev_nvme_attach_controller" 00:29:58.568 } 00:29:58.568 EOF 00:29:58.568 )") 00:29:58.568 16:11:26 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:29:58.568 16:11:26 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@556 -- # jq . 
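Everything fio needs is now staged in the shell: gen_fio_conf produces the job file and gen_nvmf_target_json emits one bdev_nvme_attach_controller block per subsystem, which the printf | jq pipeline traced here assembles into a single JSON document. Both are handed to fio through process substitution, so nothing is written to disk. A hedged sketch of the equivalent invocation (paths are this host's; bash expands each <(...) to a /dev/fd/N path, which is why /dev/fd/62 and /dev/fd/61 appear in the trace):

    # Preload the SPDK bdev engine into stock fio, then pass the JSON attach
    # config and the generated job file as process substitutions.
    LD_PRELOAD=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev \
        /usr/src/fio/fio --ioengine=spdk_bdev \
        --spdk_json_conf <(gen_nvmf_target_json 0 1) <(gen_fio_conf)

The JSON printed just below is that attach config: two bdev_nvme_attach_controller calls, Nvme0 and Nvme1, each pointing at 10.0.0.2:4420 with header and data digests disabled, matching the target side.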
00:29:58.568 16:11:26 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@557 -- # IFS=, 00:29:58.568 16:11:26 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:29:58.568 "params": { 00:29:58.568 "name": "Nvme0", 00:29:58.568 "trtype": "tcp", 00:29:58.568 "traddr": "10.0.0.2", 00:29:58.568 "adrfam": "ipv4", 00:29:58.568 "trsvcid": "4420", 00:29:58.568 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:58.568 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:29:58.568 "hdgst": false, 00:29:58.568 "ddgst": false 00:29:58.568 }, 00:29:58.568 "method": "bdev_nvme_attach_controller" 00:29:58.568 },{ 00:29:58.568 "params": { 00:29:58.568 "name": "Nvme1", 00:29:58.568 "trtype": "tcp", 00:29:58.568 "traddr": "10.0.0.2", 00:29:58.568 "adrfam": "ipv4", 00:29:58.568 "trsvcid": "4420", 00:29:58.568 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:58.569 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:58.569 "hdgst": false, 00:29:58.569 "ddgst": false 00:29:58.569 }, 00:29:58.569 "method": "bdev_nvme_attach_controller" 00:29:58.569 }' 00:29:58.569 16:11:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib= 00:29:58.569 16:11:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:29:58.569 16:11:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:29:58.569 16:11:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:29:58.569 16:11:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:29:58.569 16:11:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:29:58.569 16:11:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib= 00:29:58.569 16:11:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:29:58.569 16:11:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:29:58.569 16:11:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:29:58.569 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:29:58.569 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:29:58.569 fio-3.35 00:29:58.569 Starting 2 threads 00:29:58.569 EAL: No free 2048 kB hugepages reported on node 1 00:30:08.561 00:30:08.561 filename0: (groupid=0, jobs=1): err= 0: pid=3942087: Mon Jul 15 16:11:37 2024 00:30:08.561 read: IOPS=97, BW=390KiB/s (399kB/s)(3904KiB/10016msec) 00:30:08.562 slat (nsec): min=4211, max=22767, avg=7782.53, stdev=2527.02 00:30:08.562 clat (usec): min=40823, max=45287, avg=41023.24, stdev=325.48 00:30:08.562 lat (usec): min=40829, max=45301, avg=41031.02, stdev=325.45 00:30:08.562 clat percentiles (usec): 00:30:08.562 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:30:08.562 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:30:08.562 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:30:08.562 | 99.00th=[42206], 99.50th=[42206], 99.90th=[45351], 99.95th=[45351], 00:30:08.562 | 99.99th=[45351] 
00:30:08.562 bw ( KiB/s): min= 384, max= 416, per=33.78%, avg=388.80, stdev=11.72, samples=20 00:30:08.562 iops : min= 96, max= 104, avg=97.20, stdev= 2.93, samples=20 00:30:08.562 lat (msec) : 50=100.00% 00:30:08.562 cpu : usr=97.75%, sys=2.00%, ctx=13, majf=0, minf=138 00:30:08.562 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:08.562 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:08.562 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:08.562 issued rwts: total=976,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:08.562 latency : target=0, window=0, percentile=100.00%, depth=4 00:30:08.562 filename1: (groupid=0, jobs=1): err= 0: pid=3942088: Mon Jul 15 16:11:37 2024 00:30:08.562 read: IOPS=189, BW=760KiB/s (778kB/s)(7600KiB/10004msec) 00:30:08.562 slat (nsec): min=6002, max=25832, avg=7161.80, stdev=2095.07 00:30:08.562 clat (usec): min=538, max=43717, avg=21040.17, stdev=20427.71 00:30:08.562 lat (usec): min=545, max=43743, avg=21047.33, stdev=20427.07 00:30:08.562 clat percentiles (usec): 00:30:08.562 | 1.00th=[ 545], 5.00th=[ 553], 10.00th=[ 553], 20.00th=[ 562], 00:30:08.562 | 30.00th=[ 570], 40.00th=[ 578], 50.00th=[41157], 60.00th=[41157], 00:30:08.562 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41681], 95.00th=[41681], 00:30:08.562 | 99.00th=[42206], 99.50th=[42206], 99.90th=[43779], 99.95th=[43779], 00:30:08.562 | 99.99th=[43779] 00:30:08.562 bw ( KiB/s): min= 704, max= 768, per=66.26%, avg=761.26, stdev=20.18, samples=19 00:30:08.562 iops : min= 176, max= 192, avg=190.32, stdev= 5.04, samples=19 00:30:08.562 lat (usec) : 750=49.16%, 1000=0.74% 00:30:08.562 lat (msec) : 50=50.11% 00:30:08.562 cpu : usr=97.88%, sys=1.87%, ctx=10, majf=0, minf=102 00:30:08.562 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:08.562 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:08.562 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:08.562 issued rwts: total=1900,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:08.562 latency : target=0, window=0, percentile=100.00%, depth=4 00:30:08.562 00:30:08.562 Run status group 0 (all jobs): 00:30:08.562 READ: bw=1149KiB/s (1176kB/s), 390KiB/s-760KiB/s (399kB/s-778kB/s), io=11.2MiB (11.8MB), run=10004-10016msec 00:30:08.562 16:11:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:30:08.562 16:11:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:30:08.562 16:11:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:30:08.562 16:11:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:30:08.562 16:11:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:30:08.562 16:11:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:30:08.562 16:11:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:08.562 16:11:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:30:08.562 16:11:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:08.562 16:11:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:30:08.562 16:11:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 
00:30:08.562 16:11:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:30:08.562 16:11:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:08.562 16:11:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:30:08.562 16:11:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:30:08.562 16:11:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:30:08.562 16:11:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:08.562 16:11:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:08.562 16:11:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:30:08.562 16:11:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:08.562 16:11:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:30:08.562 16:11:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:08.562 16:11:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:30:08.562 16:11:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:08.562 00:30:08.562 real 0m11.335s 00:30:08.562 user 0m26.304s 00:30:08.562 sys 0m0.676s 00:30:08.562 16:11:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1124 -- # xtrace_disable 00:30:08.562 16:11:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:30:08.562 ************************************ 00:30:08.562 END TEST fio_dif_1_multi_subsystems 00:30:08.562 ************************************ 00:30:08.562 16:11:37 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:30:08.562 16:11:37 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:30:08.562 16:11:37 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:30:08.562 16:11:37 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:08.562 16:11:37 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:30:08.562 ************************************ 00:30:08.562 START TEST fio_dif_rand_params 00:30:08.562 ************************************ 00:30:08.562 16:11:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1123 -- # fio_dif_rand_params 00:30:08.562 16:11:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:30:08.562 16:11:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:30:08.562 16:11:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:30:08.562 16:11:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:30:08.562 16:11:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:30:08.562 16:11:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:30:08.562 16:11:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:30:08.562 16:11:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:30:08.562 16:11:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:30:08.562 16:11:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:30:08.562 16:11:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # 
create_subsystem 0 00:30:08.562 16:11:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:30:08.562 16:11:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:30:08.562 16:11:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:08.562 16:11:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:08.562 bdev_null0 00:30:08.562 16:11:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:08.562 16:11:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:30:08.562 16:11:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:08.562 16:11:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:08.562 16:11:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:08.562 16:11:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:30:08.562 16:11:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:08.562 16:11:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:08.562 16:11:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:08.562 16:11:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:30:08.562 16:11:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:08.562 16:11:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:08.562 [2024-07-15 16:11:37.477680] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:08.562 16:11:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:08.562 16:11:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:30:08.562 16:11:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:30:08.562 16:11:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:30:08.562 16:11:37 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:30:08.562 16:11:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:08.562 16:11:37 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:30:08.562 16:11:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:08.562 16:11:37 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:08.562 16:11:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:30:08.562 16:11:37 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:08.562 { 00:30:08.562 "params": { 00:30:08.562 "name": "Nvme$subsystem", 00:30:08.562 "trtype": "$TEST_TRANSPORT", 00:30:08.562 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:08.562 "adrfam": "ipv4", 00:30:08.562 "trsvcid": "$NVMF_PORT", 00:30:08.562 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:30:08.562 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:08.562 "hdgst": ${hdgst:-false}, 00:30:08.562 "ddgst": ${ddgst:-false} 00:30:08.562 }, 00:30:08.562 "method": "bdev_nvme_attach_controller" 00:30:08.562 } 00:30:08.562 EOF 00:30:08.562 )") 00:30:08.562 16:11:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:30:08.562 16:11:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:30:08.562 16:11:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:30:08.562 16:11:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:30:08.562 16:11:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:30:08.562 16:11:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:08.562 16:11:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:30:08.562 16:11:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:30:08.562 16:11:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:30:08.562 16:11:37 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:30:08.563 16:11:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:08.563 16:11:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:30:08.563 16:11:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:30:08.563 16:11:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:30:08.563 16:11:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:30:08.563 16:11:37 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 
00:30:08.563 16:11:37 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:30:08.563 16:11:37 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:30:08.563 "params": { 00:30:08.563 "name": "Nvme0", 00:30:08.563 "trtype": "tcp", 00:30:08.563 "traddr": "10.0.0.2", 00:30:08.563 "adrfam": "ipv4", 00:30:08.563 "trsvcid": "4420", 00:30:08.563 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:08.563 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:30:08.563 "hdgst": false, 00:30:08.563 "ddgst": false 00:30:08.563 }, 00:30:08.563 "method": "bdev_nvme_attach_controller" 00:30:08.563 }' 00:30:08.834 16:11:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:30:08.834 16:11:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:30:08.834 16:11:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:30:08.834 16:11:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:08.834 16:11:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:30:08.834 16:11:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:30:08.834 16:11:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:30:08.835 16:11:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:30:08.835 16:11:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:30:08.835 16:11:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:09.097 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:30:09.097 ... 
00:30:09.097 fio-3.35 00:30:09.097 Starting 3 threads 00:30:09.097 EAL: No free 2048 kB hugepages reported on node 1 00:30:15.655 00:30:15.655 filename0: (groupid=0, jobs=1): err= 0: pid=3944050: Mon Jul 15 16:11:43 2024 00:30:15.655 read: IOPS=241, BW=30.2MiB/s (31.6MB/s)(152MiB/5051msec) 00:30:15.655 slat (nsec): min=3161, max=44835, avg=9608.95, stdev=3047.24 00:30:15.655 clat (usec): min=3893, max=53350, avg=12382.27, stdev=13843.49 00:30:15.655 lat (usec): min=3900, max=53357, avg=12391.87, stdev=13843.70 00:30:15.655 clat percentiles (usec): 00:30:15.655 | 1.00th=[ 4228], 5.00th=[ 4490], 10.00th=[ 4621], 20.00th=[ 5342], 00:30:15.655 | 30.00th=[ 6194], 40.00th=[ 6652], 50.00th=[ 7111], 60.00th=[ 7963], 00:30:15.655 | 70.00th=[ 8848], 80.00th=[10159], 90.00th=[46924], 95.00th=[47973], 00:30:15.655 | 99.00th=[50070], 99.50th=[51643], 99.90th=[52691], 99.95th=[53216], 00:30:15.655 | 99.99th=[53216] 00:30:15.655 bw ( KiB/s): min=23808, max=38912, per=31.50%, avg=31129.60, stdev=4927.51, samples=10 00:30:15.655 iops : min= 186, max= 304, avg=243.20, stdev=38.50, samples=10 00:30:15.655 lat (msec) : 4=0.25%, 10=79.33%, 20=7.47%, 50=11.81%, 100=1.15% 00:30:15.655 cpu : usr=94.99%, sys=4.71%, ctx=12, majf=0, minf=48 00:30:15.655 IO depths : 1=0.7%, 2=99.3%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:15.655 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:15.655 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:15.655 issued rwts: total=1219,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:15.655 latency : target=0, window=0, percentile=100.00%, depth=3 00:30:15.655 filename0: (groupid=0, jobs=1): err= 0: pid=3944051: Mon Jul 15 16:11:43 2024 00:30:15.655 read: IOPS=250, BW=31.2MiB/s (32.8MB/s)(156MiB/5004msec) 00:30:15.655 slat (nsec): min=6229, max=49422, avg=9443.14, stdev=2937.47 00:30:15.655 clat (usec): min=3816, max=90634, avg=11987.69, stdev=14172.78 00:30:15.655 lat (usec): min=3824, max=90640, avg=11997.13, stdev=14173.07 00:30:15.655 clat percentiles (usec): 00:30:15.655 | 1.00th=[ 4015], 5.00th=[ 4293], 10.00th=[ 4490], 20.00th=[ 4883], 00:30:15.655 | 30.00th=[ 5997], 40.00th=[ 6587], 50.00th=[ 6980], 60.00th=[ 7635], 00:30:15.655 | 70.00th=[ 8848], 80.00th=[ 9896], 90.00th=[46924], 95.00th=[48497], 00:30:15.655 | 99.00th=[51119], 99.50th=[51643], 99.90th=[90702], 99.95th=[90702], 00:30:15.655 | 99.99th=[90702] 00:30:15.655 bw ( KiB/s): min=11776, max=44800, per=32.33%, avg=31948.80, stdev=8955.37, samples=10 00:30:15.655 iops : min= 92, max= 350, avg=249.60, stdev=69.96, samples=10 00:30:15.655 lat (msec) : 4=0.88%, 10=79.54%, 20=7.75%, 50=9.75%, 100=2.08% 00:30:15.656 cpu : usr=94.72%, sys=5.00%, ctx=7, majf=0, minf=171 00:30:15.656 IO depths : 1=1.1%, 2=98.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:15.656 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:15.656 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:15.656 issued rwts: total=1251,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:15.656 latency : target=0, window=0, percentile=100.00%, depth=3 00:30:15.656 filename0: (groupid=0, jobs=1): err= 0: pid=3944052: Mon Jul 15 16:11:43 2024 00:30:15.656 read: IOPS=285, BW=35.7MiB/s (37.5MB/s)(179MiB/5004msec) 00:30:15.656 slat (usec): min=6, max=120, avg= 9.50, stdev= 3.97 00:30:15.656 clat (usec): min=3411, max=51560, avg=10484.85, stdev=12207.99 00:30:15.656 lat (usec): min=3417, max=51571, avg=10494.34, stdev=12208.13 00:30:15.656 clat percentiles (usec): 
00:30:15.656 | 1.00th=[ 3785], 5.00th=[ 4015], 10.00th=[ 4228], 20.00th=[ 4555], 00:30:15.656 | 30.00th=[ 5276], 40.00th=[ 6128], 50.00th=[ 6652], 60.00th=[ 7111], 00:30:15.656 | 70.00th=[ 8160], 80.00th=[ 9372], 90.00th=[12649], 95.00th=[47449], 00:30:15.656 | 99.00th=[50070], 99.50th=[50594], 99.90th=[51119], 99.95th=[51643], 00:30:15.656 | 99.99th=[51643] 00:30:15.656 bw ( KiB/s): min=28672, max=54016, per=37.53%, avg=37091.56, stdev=8392.00, samples=9 00:30:15.656 iops : min= 224, max= 422, avg=289.78, stdev=65.56, samples=9 00:30:15.656 lat (msec) : 4=4.41%, 10=79.58%, 20=6.57%, 50=8.53%, 100=0.91% 00:30:15.656 cpu : usr=95.38%, sys=4.30%, ctx=15, majf=0, minf=77 00:30:15.656 IO depths : 1=0.6%, 2=99.4%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:15.656 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:15.656 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:15.656 issued rwts: total=1430,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:15.656 latency : target=0, window=0, percentile=100.00%, depth=3 00:30:15.656 00:30:15.656 Run status group 0 (all jobs): 00:30:15.656 READ: bw=96.5MiB/s (101MB/s), 30.2MiB/s-35.7MiB/s (31.6MB/s-37.5MB/s), io=488MiB (511MB), run=5004-5051msec 00:30:15.656 16:11:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:30:15.656 16:11:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:30:15.656 16:11:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:30:15.656 16:11:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:30:15.656 16:11:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:30:15.656 16:11:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:30:15.656 16:11:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:15.656 16:11:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:15.656 16:11:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:15.656 16:11:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:30:15.656 16:11:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:15.656 16:11:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:15.656 16:11:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:15.656 16:11:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:30:15.656 16:11:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:30:15.656 16:11:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:30:15.656 16:11:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:30:15.656 16:11:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:30:15.656 16:11:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:30:15.656 16:11:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:30:15.656 16:11:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:30:15.656 16:11:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:30:15.656 16:11:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:30:15.656 16:11:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 
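The second rand_params pass reshapes the workload: NULL_DIF=2, 4k blocks, 8 jobs per file at queue depth 16, and two extra files, so three DIF type 2 null bdevs (bdev_null0 through bdev_null2) are created behind subsystems cnode0 through cnode2, as the following trace shows. The job file gen_fio_conf builds for it is roughly of the shape below; the option names are inferred from the fio banners earlier in the log and the Nvme0n1-style bdev names from the attach config, so treat this as a sketch rather than the generator's exact output:

    # Illustrative stand-in for gen_fio_conf output in this phase (the
    # function name and exact option set are assumptions, not script code).
    gen_fio_conf_sketch() {
        cat <<EOF
[global]
ioengine=spdk_bdev
thread=1
bs=4k
iodepth=16
numjobs=8
[filename0]
filename=Nvme0n1
[filename1]
filename=Nvme1n1
[filename2]
filename=Nvme2n1
EOF
    }

One fio process then drives all three DIF-protected namespaces in parallel, one [filenameN] section per subsystem.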
00:30:15.656 16:11:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:30:15.656 16:11:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:15.656 16:11:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:15.656 bdev_null0 00:30:15.656 16:11:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:15.656 16:11:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:30:15.656 16:11:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:15.656 16:11:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:15.656 16:11:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:15.656 16:11:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:30:15.656 16:11:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:15.656 16:11:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:15.656 16:11:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:15.656 16:11:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:30:15.656 16:11:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:15.656 16:11:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:15.656 [2024-07-15 16:11:43.658604] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:15.656 16:11:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:15.656 16:11:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:30:15.656 16:11:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:30:15.656 16:11:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:30:15.656 16:11:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:30:15.656 16:11:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:15.656 16:11:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:15.656 bdev_null1 00:30:15.656 16:11:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:15.656 16:11:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:30:15.656 16:11:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:15.656 16:11:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:15.656 16:11:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:15.656 16:11:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:30:15.656 16:11:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:15.656 16:11:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 
-- # set +x 00:30:15.656 16:11:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:15.656 16:11:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:15.656 16:11:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:15.656 16:11:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:15.656 16:11:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:15.656 16:11:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:30:15.656 16:11:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:30:15.656 16:11:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:30:15.656 16:11:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:30:15.656 16:11:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:15.656 16:11:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:15.656 bdev_null2 00:30:15.656 16:11:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:15.656 16:11:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:30:15.656 16:11:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:15.656 16:11:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:15.656 16:11:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:15.656 16:11:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:30:15.656 16:11:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:15.656 16:11:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:15.656 16:11:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:15.656 16:11:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:30:15.656 16:11:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:15.656 16:11:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:15.656 16:11:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:15.656 16:11:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:30:15.656 16:11:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:30:15.656 16:11:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:30:15.656 16:11:43 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:30:15.656 16:11:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:15.656 16:11:43 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:30:15.656 16:11:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev 
--spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:15.656 16:11:43 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:15.656 16:11:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:30:15.656 16:11:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:30:15.656 16:11:43 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:15.656 { 00:30:15.656 "params": { 00:30:15.656 "name": "Nvme$subsystem", 00:30:15.656 "trtype": "$TEST_TRANSPORT", 00:30:15.656 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:15.656 "adrfam": "ipv4", 00:30:15.656 "trsvcid": "$NVMF_PORT", 00:30:15.656 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:15.656 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:15.656 "hdgst": ${hdgst:-false}, 00:30:15.656 "ddgst": ${ddgst:-false} 00:30:15.656 }, 00:30:15.656 "method": "bdev_nvme_attach_controller" 00:30:15.656 } 00:30:15.656 EOF 00:30:15.656 )") 00:30:15.656 16:11:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:30:15.656 16:11:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:30:15.656 16:11:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:30:15.656 16:11:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:30:15.657 16:11:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:15.657 16:11:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:30:15.657 16:11:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:30:15.657 16:11:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:30:15.657 16:11:43 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:30:15.657 16:11:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:15.657 16:11:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:30:15.657 16:11:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:30:15.657 16:11:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:30:15.657 16:11:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:30:15.657 16:11:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:30:15.657 16:11:43 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:15.657 16:11:43 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:15.657 { 00:30:15.657 "params": { 00:30:15.657 "name": "Nvme$subsystem", 00:30:15.657 "trtype": "$TEST_TRANSPORT", 00:30:15.657 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:15.657 "adrfam": "ipv4", 00:30:15.657 "trsvcid": "$NVMF_PORT", 00:30:15.657 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:15.657 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:15.657 "hdgst": ${hdgst:-false}, 00:30:15.657 "ddgst": ${ddgst:-false} 00:30:15.657 }, 00:30:15.657 "method": "bdev_nvme_attach_controller" 00:30:15.657 } 00:30:15.657 EOF 00:30:15.657 )") 00:30:15.657 16:11:43 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:30:15.657 16:11:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 
-- # (( file++ )) 00:30:15.657 16:11:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:30:15.657 16:11:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:30:15.657 16:11:43 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:15.657 16:11:43 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:15.657 { 00:30:15.657 "params": { 00:30:15.657 "name": "Nvme$subsystem", 00:30:15.657 "trtype": "$TEST_TRANSPORT", 00:30:15.657 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:15.657 "adrfam": "ipv4", 00:30:15.657 "trsvcid": "$NVMF_PORT", 00:30:15.657 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:15.657 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:15.657 "hdgst": ${hdgst:-false}, 00:30:15.657 "ddgst": ${ddgst:-false} 00:30:15.657 }, 00:30:15.657 "method": "bdev_nvme_attach_controller" 00:30:15.657 } 00:30:15.657 EOF 00:30:15.657 )") 00:30:15.657 16:11:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:30:15.657 16:11:43 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:30:15.657 16:11:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:30:15.657 16:11:43 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 00:30:15.657 16:11:43 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:30:15.657 16:11:43 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:30:15.657 "params": { 00:30:15.657 "name": "Nvme0", 00:30:15.657 "trtype": "tcp", 00:30:15.657 "traddr": "10.0.0.2", 00:30:15.657 "adrfam": "ipv4", 00:30:15.657 "trsvcid": "4420", 00:30:15.657 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:15.657 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:30:15.657 "hdgst": false, 00:30:15.657 "ddgst": false 00:30:15.657 }, 00:30:15.657 "method": "bdev_nvme_attach_controller" 00:30:15.657 },{ 00:30:15.657 "params": { 00:30:15.657 "name": "Nvme1", 00:30:15.657 "trtype": "tcp", 00:30:15.657 "traddr": "10.0.0.2", 00:30:15.657 "adrfam": "ipv4", 00:30:15.657 "trsvcid": "4420", 00:30:15.657 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:15.657 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:30:15.657 "hdgst": false, 00:30:15.657 "ddgst": false 00:30:15.657 }, 00:30:15.657 "method": "bdev_nvme_attach_controller" 00:30:15.657 },{ 00:30:15.657 "params": { 00:30:15.657 "name": "Nvme2", 00:30:15.657 "trtype": "tcp", 00:30:15.657 "traddr": "10.0.0.2", 00:30:15.657 "adrfam": "ipv4", 00:30:15.657 "trsvcid": "4420", 00:30:15.657 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:30:15.657 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:30:15.657 "hdgst": false, 00:30:15.657 "ddgst": false 00:30:15.657 }, 00:30:15.657 "method": "bdev_nvme_attach_controller" 00:30:15.657 }' 00:30:15.657 16:11:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:30:15.657 16:11:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:30:15.657 16:11:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:30:15.657 16:11:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:30:15.657 16:11:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:15.657 16:11:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:30:15.657 16:11:43 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@1345 -- # asan_lib= 00:30:15.657 16:11:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:30:15.657 16:11:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:30:15.657 16:11:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:15.657 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:30:15.657 ... 00:30:15.657 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:30:15.657 ... 00:30:15.657 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:30:15.657 ... 00:30:15.657 fio-3.35 00:30:15.657 Starting 24 threads 00:30:15.657 EAL: No free 2048 kB hugepages reported on node 1 00:30:27.877 00:30:27.877 filename0: (groupid=0, jobs=1): err= 0: pid=3945193: Mon Jul 15 16:11:55 2024 00:30:27.877 read: IOPS=646, BW=2584KiB/s (2646kB/s)(25.3MiB/10019msec) 00:30:27.877 slat (usec): min=6, max=163, avg=25.11, stdev=23.38 00:30:27.877 clat (usec): min=319, max=32752, avg=24554.22, stdev=4194.73 00:30:27.877 lat (usec): min=338, max=32791, avg=24579.33, stdev=4197.34 00:30:27.877 clat percentiles (usec): 00:30:27.877 | 1.00th=[ 1893], 5.00th=[16581], 10.00th=[21627], 20.00th=[25035], 00:30:27.877 | 30.00th=[25297], 40.00th=[25297], 50.00th=[25297], 60.00th=[25560], 00:30:27.877 | 70.00th=[25822], 80.00th=[26084], 90.00th=[27132], 95.00th=[27657], 00:30:27.877 | 99.00th=[28443], 99.50th=[28967], 99.90th=[32637], 99.95th=[32637], 00:30:27.877 | 99.99th=[32637] 00:30:27.877 bw ( KiB/s): min= 2304, max= 4456, per=4.35%, avg=2581.20, stdev=457.28, samples=20 00:30:27.877 iops : min= 576, max= 1114, avg=645.10, stdev=114.36, samples=20 00:30:27.877 lat (usec) : 500=0.02%, 1000=0.05% 00:30:27.877 lat (msec) : 2=0.94%, 4=0.74%, 10=0.59%, 20=6.98%, 50=90.68% 00:30:27.877 cpu : usr=99.05%, sys=0.57%, ctx=16, majf=0, minf=59 00:30:27.877 IO depths : 1=5.5%, 2=11.1%, 4=22.9%, 8=53.4%, 16=7.0%, 32=0.0%, >=64=0.0% 00:30:27.877 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:27.877 complete : 0=0.0%, 4=93.5%, 8=0.7%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:27.877 issued rwts: total=6473,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:27.877 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:27.877 filename0: (groupid=0, jobs=1): err= 0: pid=3945194: Mon Jul 15 16:11:55 2024 00:30:27.877 read: IOPS=619, BW=2476KiB/s (2536kB/s)(24.2MiB/10002msec) 00:30:27.877 slat (nsec): min=6896, max=87837, avg=41804.16, stdev=15385.23 00:30:27.877 clat (usec): min=4290, max=33532, avg=25484.78, stdev=1687.04 00:30:27.877 lat (usec): min=4306, max=33585, avg=25526.58, stdev=1689.21 00:30:27.877 clat percentiles (usec): 00:30:27.877 | 1.00th=[22414], 5.00th=[24773], 10.00th=[24773], 20.00th=[25035], 00:30:27.877 | 30.00th=[25035], 40.00th=[25297], 50.00th=[25297], 60.00th=[25560], 00:30:27.877 | 70.00th=[25822], 80.00th=[26084], 90.00th=[26870], 95.00th=[27395], 00:30:27.877 | 99.00th=[27919], 99.50th=[28181], 99.90th=[33162], 99.95th=[33424], 00:30:27.877 | 99.99th=[33424] 00:30:27.877 bw ( KiB/s): min= 2299, max= 2693, per=4.18%, avg=2478.84, stdev=88.48, samples=19 00:30:27.877 iops : min= 574, max= 673, avg=619.63, stdev=22.15, 
samples=19 00:30:27.877 lat (msec) : 10=0.26%, 20=0.52%, 50=99.22% 00:30:27.877 cpu : usr=98.72%, sys=0.76%, ctx=42, majf=0, minf=77 00:30:27.877 IO depths : 1=6.2%, 2=12.5%, 4=24.9%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:30:27.877 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:27.877 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:27.877 issued rwts: total=6192,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:27.877 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:27.877 filename0: (groupid=0, jobs=1): err= 0: pid=3945195: Mon Jul 15 16:11:55 2024 00:30:27.877 read: IOPS=616, BW=2466KiB/s (2525kB/s)(24.1MiB/10019msec) 00:30:27.877 slat (usec): min=7, max=110, avg=48.10, stdev=18.14 00:30:27.877 clat (usec): min=13449, max=41132, avg=25560.02, stdev=1132.63 00:30:27.877 lat (usec): min=13466, max=41155, avg=25608.12, stdev=1131.72 00:30:27.877 clat percentiles (usec): 00:30:27.877 | 1.00th=[24249], 5.00th=[24511], 10.00th=[24773], 20.00th=[25035], 00:30:27.877 | 30.00th=[25035], 40.00th=[25297], 50.00th=[25297], 60.00th=[25560], 00:30:27.877 | 70.00th=[25822], 80.00th=[26084], 90.00th=[26870], 95.00th=[27395], 00:30:27.877 | 99.00th=[27919], 99.50th=[28181], 99.90th=[37487], 99.95th=[37487], 00:30:27.877 | 99.99th=[41157] 00:30:27.877 bw ( KiB/s): min= 2427, max= 2560, per=4.16%, avg=2463.45, stdev=56.50, samples=20 00:30:27.877 iops : min= 606, max= 640, avg=615.80, stdev=14.11, samples=20 00:30:27.877 lat (msec) : 20=0.26%, 50=99.74% 00:30:27.877 cpu : usr=98.80%, sys=0.68%, ctx=31, majf=0, minf=47 00:30:27.877 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:30:27.877 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:27.877 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:27.877 issued rwts: total=6176,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:27.877 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:27.877 filename0: (groupid=0, jobs=1): err= 0: pid=3945196: Mon Jul 15 16:11:55 2024 00:30:27.877 read: IOPS=614, BW=2460KiB/s (2519kB/s)(24.1MiB/10017msec) 00:30:27.877 slat (nsec): min=6319, max=73949, avg=24497.77, stdev=13586.60 00:30:27.877 clat (usec): min=14130, max=55277, avg=25785.60, stdev=1733.86 00:30:27.877 lat (usec): min=14138, max=55296, avg=25810.10, stdev=1733.55 00:30:27.877 clat percentiles (usec): 00:30:27.877 | 1.00th=[24773], 5.00th=[25035], 10.00th=[25035], 20.00th=[25297], 00:30:27.877 | 30.00th=[25297], 40.00th=[25297], 50.00th=[25297], 60.00th=[25560], 00:30:27.877 | 70.00th=[25822], 80.00th=[26346], 90.00th=[27132], 95.00th=[27657], 00:30:27.877 | 99.00th=[27919], 99.50th=[28443], 99.90th=[55313], 99.95th=[55313], 00:30:27.877 | 99.99th=[55313] 00:30:27.877 bw ( KiB/s): min= 2304, max= 2560, per=4.15%, avg=2458.63, stdev=68.04, samples=19 00:30:27.877 iops : min= 576, max= 640, avg=614.63, stdev=16.97, samples=19 00:30:27.877 lat (msec) : 20=0.03%, 50=99.71%, 100=0.26% 00:30:27.877 cpu : usr=98.97%, sys=0.66%, ctx=12, majf=0, minf=43 00:30:27.877 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:30:27.877 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:27.877 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:27.877 issued rwts: total=6160,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:27.877 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:27.877 filename0: (groupid=0, jobs=1): err= 0: 
pid=3945197: Mon Jul 15 16:11:55 2024 00:30:27.878 read: IOPS=615, BW=2460KiB/s (2519kB/s)(24.1MiB/10015msec) 00:30:27.878 slat (nsec): min=6601, max=91857, avg=41653.95, stdev=14925.83 00:30:27.878 clat (usec): min=14179, max=57431, avg=25650.35, stdev=1893.41 00:30:27.878 lat (usec): min=14231, max=57449, avg=25692.00, stdev=1892.26 00:30:27.878 clat percentiles (usec): 00:30:27.878 | 1.00th=[24511], 5.00th=[24773], 10.00th=[24773], 20.00th=[25035], 00:30:27.878 | 30.00th=[25035], 40.00th=[25297], 50.00th=[25297], 60.00th=[25560], 00:30:27.878 | 70.00th=[25822], 80.00th=[26084], 90.00th=[26870], 95.00th=[27395], 00:30:27.878 | 99.00th=[28181], 99.50th=[28443], 99.90th=[57410], 99.95th=[57410], 00:30:27.878 | 99.99th=[57410] 00:30:27.878 bw ( KiB/s): min= 2304, max= 2560, per=4.15%, avg=2457.05, stdev=66.80, samples=20 00:30:27.878 iops : min= 576, max= 640, avg=614.20, stdev=16.64, samples=20 00:30:27.878 lat (msec) : 20=0.26%, 50=99.48%, 100=0.26% 00:30:27.878 cpu : usr=97.66%, sys=1.21%, ctx=67, majf=0, minf=43 00:30:27.878 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:30:27.878 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:27.878 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:27.878 issued rwts: total=6160,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:27.878 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:27.878 filename0: (groupid=0, jobs=1): err= 0: pid=3945198: Mon Jul 15 16:11:55 2024 00:30:27.878 read: IOPS=615, BW=2460KiB/s (2519kB/s)(24.1MiB/10015msec) 00:30:27.878 slat (nsec): min=6412, max=93866, avg=50819.53, stdev=16752.72 00:30:27.878 clat (usec): min=18180, max=50443, avg=25605.08, stdev=1514.69 00:30:27.878 lat (usec): min=18189, max=50464, avg=25655.90, stdev=1512.27 00:30:27.878 clat percentiles (usec): 00:30:27.878 | 1.00th=[24511], 5.00th=[24511], 10.00th=[24773], 20.00th=[25035], 00:30:27.878 | 30.00th=[25035], 40.00th=[25297], 50.00th=[25297], 60.00th=[25560], 00:30:27.878 | 70.00th=[25560], 80.00th=[26084], 90.00th=[26870], 95.00th=[27395], 00:30:27.878 | 99.00th=[28181], 99.50th=[28443], 99.90th=[50594], 99.95th=[50594], 00:30:27.878 | 99.99th=[50594] 00:30:27.878 bw ( KiB/s): min= 2304, max= 2560, per=4.15%, avg=2457.30, stdev=77.18, samples=20 00:30:27.878 iops : min= 576, max= 640, avg=614.30, stdev=19.26, samples=20 00:30:27.878 lat (msec) : 20=0.03%, 50=99.71%, 100=0.26% 00:30:27.878 cpu : usr=97.22%, sys=1.57%, ctx=137, majf=0, minf=73 00:30:27.878 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:30:27.878 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:27.878 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:27.878 issued rwts: total=6160,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:27.878 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:27.878 filename0: (groupid=0, jobs=1): err= 0: pid=3945199: Mon Jul 15 16:11:55 2024 00:30:27.878 read: IOPS=614, BW=2458KiB/s (2517kB/s)(24.0MiB/10007msec) 00:30:27.878 slat (nsec): min=5246, max=75628, avg=21031.15, stdev=13537.40 00:30:27.878 clat (usec): min=8112, max=74365, avg=25898.17, stdev=2363.14 00:30:27.878 lat (usec): min=8122, max=74381, avg=25919.20, stdev=2362.35 00:30:27.878 clat percentiles (usec): 00:30:27.878 | 1.00th=[24773], 5.00th=[25035], 10.00th=[25035], 20.00th=[25297], 00:30:27.878 | 30.00th=[25297], 40.00th=[25560], 50.00th=[25560], 60.00th=[25560], 00:30:27.878 | 70.00th=[25822], 
80.00th=[26346], 90.00th=[27132], 95.00th=[27919], 00:30:27.878 | 99.00th=[28443], 99.50th=[28705], 99.90th=[63701], 99.95th=[63701], 00:30:27.878 | 99.99th=[73925] 00:30:27.878 bw ( KiB/s): min= 2176, max= 2560, per=4.14%, avg=2454.74, stdev=94.55, samples=19 00:30:27.878 iops : min= 544, max= 640, avg=613.68, stdev=23.64, samples=19 00:30:27.878 lat (msec) : 10=0.10%, 20=0.23%, 50=99.41%, 100=0.26% 00:30:27.878 cpu : usr=98.67%, sys=0.77%, ctx=100, majf=0, minf=45 00:30:27.878 IO depths : 1=2.0%, 2=5.3%, 4=13.5%, 8=65.8%, 16=13.4%, 32=0.0%, >=64=0.0% 00:30:27.878 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:27.878 complete : 0=0.0%, 4=91.9%, 8=5.1%, 16=3.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:27.878 issued rwts: total=6150,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:27.878 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:27.878 filename0: (groupid=0, jobs=1): err= 0: pid=3945200: Mon Jul 15 16:11:55 2024 00:30:27.878 read: IOPS=615, BW=2462KiB/s (2521kB/s)(24.1MiB/10009msec) 00:30:27.878 slat (nsec): min=5106, max=94981, avg=53554.22, stdev=13969.86 00:30:27.878 clat (usec): min=11169, max=58718, avg=25528.94, stdev=2023.81 00:30:27.878 lat (usec): min=11183, max=58732, avg=25582.49, stdev=2022.91 00:30:27.878 clat percentiles (usec): 00:30:27.878 | 1.00th=[24249], 5.00th=[24511], 10.00th=[24773], 20.00th=[24773], 00:30:27.878 | 30.00th=[25035], 40.00th=[25035], 50.00th=[25297], 60.00th=[25297], 00:30:27.878 | 70.00th=[25560], 80.00th=[26084], 90.00th=[26870], 95.00th=[27395], 00:30:27.878 | 99.00th=[27919], 99.50th=[28443], 99.90th=[58459], 99.95th=[58459], 00:30:27.878 | 99.99th=[58459] 00:30:27.878 bw ( KiB/s): min= 2304, max= 2560, per=4.15%, avg=2458.58, stdev=79.99, samples=19 00:30:27.878 iops : min= 576, max= 640, avg=614.58, stdev=19.98, samples=19 00:30:27.878 lat (msec) : 20=0.29%, 50=99.45%, 100=0.26% 00:30:27.878 cpu : usr=98.94%, sys=0.66%, ctx=42, majf=0, minf=44 00:30:27.878 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:30:27.878 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:27.878 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:27.878 issued rwts: total=6160,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:27.878 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:27.878 filename1: (groupid=0, jobs=1): err= 0: pid=3945201: Mon Jul 15 16:11:55 2024 00:30:27.878 read: IOPS=616, BW=2467KiB/s (2526kB/s)(24.1MiB/10014msec) 00:30:27.878 slat (nsec): min=6868, max=91668, avg=20394.07, stdev=15880.48 00:30:27.878 clat (usec): min=14706, max=40778, avg=25788.18, stdev=1158.76 00:30:27.878 lat (usec): min=14715, max=40798, avg=25808.58, stdev=1157.08 00:30:27.878 clat percentiles (usec): 00:30:27.878 | 1.00th=[24511], 5.00th=[25035], 10.00th=[25035], 20.00th=[25297], 00:30:27.878 | 30.00th=[25297], 40.00th=[25297], 50.00th=[25560], 60.00th=[25560], 00:30:27.878 | 70.00th=[25822], 80.00th=[26346], 90.00th=[27132], 95.00th=[27919], 00:30:27.878 | 99.00th=[28443], 99.50th=[28443], 99.90th=[35390], 99.95th=[35390], 00:30:27.878 | 99.99th=[40633] 00:30:27.878 bw ( KiB/s): min= 2304, max= 2560, per=4.16%, avg=2463.50, stdev=81.97, samples=20 00:30:27.878 iops : min= 576, max= 640, avg=615.80, stdev=20.53, samples=20 00:30:27.878 lat (msec) : 20=0.45%, 50=99.55% 00:30:27.878 cpu : usr=97.99%, sys=1.16%, ctx=112, majf=0, minf=69 00:30:27.878 IO depths : 1=6.2%, 2=12.4%, 4=24.9%, 8=50.2%, 16=6.4%, 32=0.0%, >=64=0.0% 00:30:27.878 submit : 0=0.0%, 
4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:27.878 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:27.878 issued rwts: total=6176,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:27.878 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:27.878 filename1: (groupid=0, jobs=1): err= 0: pid=3945202: Mon Jul 15 16:11:55 2024 00:30:27.878 read: IOPS=614, BW=2457KiB/s (2516kB/s)(24.0MiB/10004msec) 00:30:27.878 slat (nsec): min=4454, max=91953, avg=42349.60, stdev=14278.12 00:30:27.878 clat (usec): min=23808, max=62706, avg=25692.57, stdev=2053.57 00:30:27.878 lat (usec): min=23824, max=62722, avg=25734.92, stdev=2051.83 00:30:27.878 clat percentiles (usec): 00:30:27.878 | 1.00th=[24511], 5.00th=[24773], 10.00th=[24773], 20.00th=[25035], 00:30:27.878 | 30.00th=[25035], 40.00th=[25297], 50.00th=[25297], 60.00th=[25560], 00:30:27.878 | 70.00th=[25822], 80.00th=[26084], 90.00th=[26870], 95.00th=[27395], 00:30:27.878 | 99.00th=[28181], 99.50th=[28181], 99.90th=[62653], 99.95th=[62653], 00:30:27.878 | 99.99th=[62653] 00:30:27.878 bw ( KiB/s): min= 2304, max= 2560, per=4.15%, avg=2458.68, stdev=80.82, samples=19 00:30:27.878 iops : min= 576, max= 640, avg=614.63, stdev=20.22, samples=19 00:30:27.878 lat (msec) : 50=99.74%, 100=0.26% 00:30:27.878 cpu : usr=98.75%, sys=0.79%, ctx=74, majf=0, minf=60 00:30:27.878 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:30:27.878 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:27.878 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:27.878 issued rwts: total=6144,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:27.878 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:27.878 filename1: (groupid=0, jobs=1): err= 0: pid=3945204: Mon Jul 15 16:11:55 2024 00:30:27.878 read: IOPS=613, BW=2456KiB/s (2515kB/s)(24.0MiB/10008msec) 00:30:27.878 slat (usec): min=6, max=168, avg=44.05, stdev=19.58 00:30:27.878 clat (usec): min=19039, max=68193, avg=25658.62, stdev=2065.33 00:30:27.878 lat (usec): min=19057, max=68212, avg=25702.67, stdev=2064.08 00:30:27.878 clat percentiles (usec): 00:30:27.878 | 1.00th=[24511], 5.00th=[24773], 10.00th=[24773], 20.00th=[25035], 00:30:27.878 | 30.00th=[25035], 40.00th=[25297], 50.00th=[25297], 60.00th=[25560], 00:30:27.878 | 70.00th=[25822], 80.00th=[26084], 90.00th=[26870], 95.00th=[27395], 00:30:27.878 | 99.00th=[28181], 99.50th=[30540], 99.90th=[61604], 99.95th=[61604], 00:30:27.878 | 99.99th=[68682] 00:30:27.878 bw ( KiB/s): min= 2304, max= 2560, per=4.15%, avg=2458.89, stdev=80.40, samples=19 00:30:27.878 iops : min= 576, max= 640, avg=614.68, stdev=20.12, samples=19 00:30:27.878 lat (msec) : 20=0.03%, 50=99.71%, 100=0.26% 00:30:27.878 cpu : usr=98.95%, sys=0.68%, ctx=14, majf=0, minf=56 00:30:27.878 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:30:27.878 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:27.878 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:27.878 issued rwts: total=6144,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:27.878 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:27.878 filename1: (groupid=0, jobs=1): err= 0: pid=3945205: Mon Jul 15 16:11:55 2024 00:30:27.878 read: IOPS=624, BW=2499KiB/s (2559kB/s)(24.4MiB/10006msec) 00:30:27.878 slat (nsec): min=5320, max=93884, avg=35039.39, stdev=21152.90 00:30:27.878 clat (usec): min=9292, max=62841, avg=25300.51, stdev=3013.79 
00:30:27.878 lat (usec): min=9305, max=62856, avg=25335.55, stdev=3015.89 00:30:27.878 clat percentiles (usec): 00:30:27.878 | 1.00th=[15795], 5.00th=[21103], 10.00th=[24773], 20.00th=[25035], 00:30:27.878 | 30.00th=[25035], 40.00th=[25297], 50.00th=[25297], 60.00th=[25560], 00:30:27.878 | 70.00th=[25822], 80.00th=[26084], 90.00th=[26870], 95.00th=[27657], 00:30:27.878 | 99.00th=[28705], 99.50th=[35390], 99.90th=[62653], 99.95th=[62653], 00:30:27.878 | 99.99th=[62653] 00:30:27.878 bw ( KiB/s): min= 2176, max= 2816, per=4.21%, avg=2497.68, stdev=134.17, samples=19 00:30:27.878 iops : min= 544, max= 704, avg=624.42, stdev=33.54, samples=19 00:30:27.878 lat (msec) : 10=0.19%, 20=4.32%, 50=95.23%, 100=0.26% 00:30:27.878 cpu : usr=97.22%, sys=1.57%, ctx=151, majf=0, minf=54 00:30:27.878 IO depths : 1=5.4%, 2=11.0%, 4=22.7%, 8=53.5%, 16=7.4%, 32=0.0%, >=64=0.0% 00:30:27.878 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:27.878 complete : 0=0.0%, 4=93.5%, 8=0.9%, 16=5.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:27.878 issued rwts: total=6252,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:27.878 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:27.878 filename1: (groupid=0, jobs=1): err= 0: pid=3945206: Mon Jul 15 16:11:55 2024 00:30:27.878 read: IOPS=619, BW=2478KiB/s (2538kB/s)(24.2MiB/10019msec) 00:30:27.878 slat (usec): min=4, max=116, avg=50.96, stdev=16.17 00:30:27.879 clat (usec): min=4669, max=34097, avg=25385.92, stdev=1845.12 00:30:27.879 lat (usec): min=4702, max=34114, avg=25436.88, stdev=1847.89 00:30:27.879 clat percentiles (usec): 00:30:27.879 | 1.00th=[16909], 5.00th=[24511], 10.00th=[24773], 20.00th=[24773], 00:30:27.879 | 30.00th=[25035], 40.00th=[25035], 50.00th=[25297], 60.00th=[25560], 00:30:27.879 | 70.00th=[25560], 80.00th=[26084], 90.00th=[26870], 95.00th=[27395], 00:30:27.879 | 99.00th=[28181], 99.50th=[28443], 99.90th=[33817], 99.95th=[33817], 00:30:27.879 | 99.99th=[34341] 00:30:27.879 bw ( KiB/s): min= 2304, max= 2693, per=4.18%, avg=2475.45, stdev=86.50, samples=20 00:30:27.879 iops : min= 576, max= 673, avg=618.65, stdev=21.63, samples=20 00:30:27.879 lat (msec) : 10=0.52%, 20=0.55%, 50=98.94% 00:30:27.879 cpu : usr=99.08%, sys=0.55%, ctx=14, majf=0, minf=48 00:30:27.879 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:30:27.879 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:27.879 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:27.879 issued rwts: total=6208,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:27.879 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:27.879 filename1: (groupid=0, jobs=1): err= 0: pid=3945207: Mon Jul 15 16:11:55 2024 00:30:27.879 read: IOPS=616, BW=2466KiB/s (2525kB/s)(24.1MiB/10017msec) 00:30:27.879 slat (usec): min=4, max=108, avg=49.08, stdev=18.22 00:30:27.879 clat (usec): min=16421, max=36043, avg=25560.15, stdev=1089.52 00:30:27.879 lat (usec): min=16450, max=36058, avg=25609.22, stdev=1087.35 00:30:27.879 clat percentiles (usec): 00:30:27.879 | 1.00th=[24511], 5.00th=[24511], 10.00th=[24773], 20.00th=[25035], 00:30:27.879 | 30.00th=[25035], 40.00th=[25297], 50.00th=[25297], 60.00th=[25560], 00:30:27.879 | 70.00th=[25822], 80.00th=[26084], 90.00th=[26870], 95.00th=[27395], 00:30:27.879 | 99.00th=[28181], 99.50th=[28443], 99.90th=[35914], 99.95th=[35914], 00:30:27.879 | 99.99th=[35914] 00:30:27.879 bw ( KiB/s): min= 2427, max= 2560, per=4.16%, avg=2463.65, stdev=56.39, samples=20 00:30:27.879 iops : 
min= 606, max= 640, avg=615.85, stdev=14.08, samples=20 00:30:27.879 lat (msec) : 20=0.29%, 50=99.71% 00:30:27.879 cpu : usr=98.06%, sys=1.09%, ctx=114, majf=0, minf=54 00:30:27.879 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:30:27.879 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:27.879 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:27.879 issued rwts: total=6176,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:27.879 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:27.879 filename1: (groupid=0, jobs=1): err= 0: pid=3945208: Mon Jul 15 16:11:55 2024 00:30:27.879 read: IOPS=614, BW=2460KiB/s (2519kB/s)(24.1MiB/10017msec) 00:30:27.879 slat (nsec): min=6211, max=72148, avg=19469.09, stdev=10631.43 00:30:27.879 clat (usec): min=13690, max=66526, avg=25850.30, stdev=1852.30 00:30:27.879 lat (usec): min=13701, max=66543, avg=25869.77, stdev=1851.47 00:30:27.879 clat percentiles (usec): 00:30:27.879 | 1.00th=[24773], 5.00th=[25035], 10.00th=[25035], 20.00th=[25297], 00:30:27.879 | 30.00th=[25297], 40.00th=[25297], 50.00th=[25560], 60.00th=[25560], 00:30:27.879 | 70.00th=[25822], 80.00th=[26346], 90.00th=[27132], 95.00th=[27657], 00:30:27.879 | 99.00th=[28181], 99.50th=[28705], 99.90th=[55837], 99.95th=[55837], 00:30:27.879 | 99.99th=[66323] 00:30:27.879 bw ( KiB/s): min= 2304, max= 2560, per=4.15%, avg=2458.63, stdev=68.04, samples=19 00:30:27.879 iops : min= 576, max= 640, avg=614.63, stdev=16.97, samples=19 00:30:27.879 lat (msec) : 20=0.10%, 50=99.64%, 100=0.26% 00:30:27.879 cpu : usr=99.06%, sys=0.56%, ctx=16, majf=0, minf=83 00:30:27.879 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:30:27.879 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:27.879 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:27.879 issued rwts: total=6160,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:27.879 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:27.879 filename1: (groupid=0, jobs=1): err= 0: pid=3945209: Mon Jul 15 16:11:55 2024 00:30:27.879 read: IOPS=615, BW=2463KiB/s (2522kB/s)(24.1MiB/10006msec) 00:30:27.879 slat (usec): min=5, max=112, avg=52.17, stdev=16.23 00:30:27.879 clat (usec): min=11355, max=54851, avg=25511.78, stdev=1846.75 00:30:27.879 lat (usec): min=11371, max=54870, avg=25563.96, stdev=1846.66 00:30:27.879 clat percentiles (usec): 00:30:27.879 | 1.00th=[24511], 5.00th=[24511], 10.00th=[24773], 20.00th=[24773], 00:30:27.879 | 30.00th=[25035], 40.00th=[25035], 50.00th=[25297], 60.00th=[25297], 00:30:27.879 | 70.00th=[25560], 80.00th=[26084], 90.00th=[26870], 95.00th=[27395], 00:30:27.879 | 99.00th=[27919], 99.50th=[28181], 99.90th=[54789], 99.95th=[54789], 00:30:27.879 | 99.99th=[54789] 00:30:27.879 bw ( KiB/s): min= 2308, max= 2560, per=4.15%, avg=2459.16, stdev=68.02, samples=19 00:30:27.879 iops : min= 577, max= 640, avg=614.79, stdev=17.01, samples=19 00:30:27.879 lat (msec) : 20=0.26%, 50=99.48%, 100=0.26% 00:30:27.879 cpu : usr=99.17%, sys=0.44%, ctx=31, majf=0, minf=49 00:30:27.879 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:30:27.879 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:27.879 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:27.879 issued rwts: total=6160,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:27.879 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:27.879 
filename2: (groupid=0, jobs=1): err= 0: pid=3945210: Mon Jul 15 16:11:55 2024 00:30:27.879 read: IOPS=615, BW=2463KiB/s (2522kB/s)(24.1MiB/10005msec) 00:30:27.879 slat (nsec): min=5882, max=95385, avg=53347.15, stdev=14120.21 00:30:27.879 clat (usec): min=11269, max=54819, avg=25524.23, stdev=1845.63 00:30:27.879 lat (usec): min=11285, max=54840, avg=25577.58, stdev=1845.01 00:30:27.879 clat percentiles (usec): 00:30:27.879 | 1.00th=[24249], 5.00th=[24511], 10.00th=[24773], 20.00th=[24773], 00:30:27.879 | 30.00th=[25035], 40.00th=[25035], 50.00th=[25297], 60.00th=[25297], 00:30:27.879 | 70.00th=[25560], 80.00th=[26084], 90.00th=[26870], 95.00th=[27395], 00:30:27.879 | 99.00th=[27919], 99.50th=[28443], 99.90th=[54789], 99.95th=[54789], 00:30:27.879 | 99.99th=[54789] 00:30:27.879 bw ( KiB/s): min= 2308, max= 2560, per=4.15%, avg=2459.16, stdev=68.02, samples=19 00:30:27.879 iops : min= 577, max= 640, avg=614.79, stdev=17.01, samples=19 00:30:27.879 lat (msec) : 20=0.26%, 50=99.48%, 100=0.26% 00:30:27.879 cpu : usr=97.41%, sys=1.50%, ctx=97, majf=0, minf=43 00:30:27.879 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:30:27.879 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:27.879 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:27.879 issued rwts: total=6160,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:27.879 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:27.879 filename2: (groupid=0, jobs=1): err= 0: pid=3945211: Mon Jul 15 16:11:55 2024 00:30:27.879 read: IOPS=615, BW=2462KiB/s (2521kB/s)(24.1MiB/10009msec) 00:30:27.879 slat (nsec): min=7078, max=91923, avg=41327.44, stdev=14952.38 00:30:27.879 clat (usec): min=14127, max=52957, avg=25659.85, stdev=1711.58 00:30:27.879 lat (usec): min=14138, max=52976, avg=25701.18, stdev=1709.84 00:30:27.879 clat percentiles (usec): 00:30:27.879 | 1.00th=[24511], 5.00th=[24773], 10.00th=[24773], 20.00th=[25035], 00:30:27.879 | 30.00th=[25035], 40.00th=[25297], 50.00th=[25297], 60.00th=[25560], 00:30:27.879 | 70.00th=[25822], 80.00th=[26084], 90.00th=[26870], 95.00th=[27657], 00:30:27.879 | 99.00th=[28181], 99.50th=[28443], 99.90th=[52691], 99.95th=[52691], 00:30:27.879 | 99.99th=[53216] 00:30:27.879 bw ( KiB/s): min= 2304, max= 2560, per=4.15%, avg=2458.37, stdev=68.16, samples=19 00:30:27.879 iops : min= 576, max= 640, avg=614.53, stdev=17.02, samples=19 00:30:27.879 lat (msec) : 20=0.29%, 50=99.45%, 100=0.26% 00:30:27.879 cpu : usr=97.49%, sys=1.42%, ctx=67, majf=0, minf=76 00:30:27.879 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:30:27.879 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:27.879 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:27.879 issued rwts: total=6160,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:27.879 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:27.879 filename2: (groupid=0, jobs=1): err= 0: pid=3945212: Mon Jul 15 16:11:55 2024 00:30:27.879 read: IOPS=616, BW=2466KiB/s (2525kB/s)(24.1MiB/10019msec) 00:30:27.879 slat (nsec): min=7583, max=97410, avg=49668.77, stdev=16738.46 00:30:27.879 clat (usec): min=13466, max=41151, avg=25555.52, stdev=1155.11 00:30:27.879 lat (usec): min=13488, max=41180, avg=25605.19, stdev=1154.28 00:30:27.879 clat percentiles (usec): 00:30:27.879 | 1.00th=[24511], 5.00th=[24773], 10.00th=[24773], 20.00th=[25035], 00:30:27.879 | 30.00th=[25035], 40.00th=[25297], 50.00th=[25297], 60.00th=[25560], 
00:30:27.879 | 70.00th=[25560], 80.00th=[26084], 90.00th=[26870], 95.00th=[27395], 00:30:27.879 | 99.00th=[28181], 99.50th=[28443], 99.90th=[37487], 99.95th=[37487], 00:30:27.879 | 99.99th=[41157] 00:30:27.879 bw ( KiB/s): min= 2411, max= 2560, per=4.16%, avg=2463.45, stdev=55.13, samples=20 00:30:27.879 iops : min= 602, max= 640, avg=615.80, stdev=13.78, samples=20 00:30:27.879 lat (msec) : 20=0.32%, 50=99.68% 00:30:27.879 cpu : usr=96.90%, sys=1.74%, ctx=94, majf=0, minf=73 00:30:27.879 IO depths : 1=4.9%, 2=11.1%, 4=25.0%, 8=51.4%, 16=7.6%, 32=0.0%, >=64=0.0% 00:30:27.879 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:27.879 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:27.879 issued rwts: total=6176,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:27.879 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:27.879 filename2: (groupid=0, jobs=1): err= 0: pid=3945213: Mon Jul 15 16:11:55 2024 00:30:27.879 read: IOPS=614, BW=2460KiB/s (2519kB/s)(24.1MiB/10017msec) 00:30:27.879 slat (nsec): min=6432, max=73443, avg=23900.81, stdev=12924.73 00:30:27.879 clat (usec): min=13372, max=66056, avg=25795.71, stdev=1833.79 00:30:27.879 lat (usec): min=13384, max=66076, avg=25819.61, stdev=1833.89 00:30:27.879 clat percentiles (usec): 00:30:27.879 | 1.00th=[24773], 5.00th=[25035], 10.00th=[25035], 20.00th=[25297], 00:30:27.879 | 30.00th=[25297], 40.00th=[25297], 50.00th=[25560], 60.00th=[25560], 00:30:27.879 | 70.00th=[25822], 80.00th=[26346], 90.00th=[27132], 95.00th=[27395], 00:30:27.879 | 99.00th=[27919], 99.50th=[28443], 99.90th=[55313], 99.95th=[55313], 00:30:27.879 | 99.99th=[65799] 00:30:27.879 bw ( KiB/s): min= 2304, max= 2560, per=4.15%, avg=2458.63, stdev=68.04, samples=19 00:30:27.879 iops : min= 576, max= 640, avg=614.63, stdev=16.97, samples=19 00:30:27.879 lat (msec) : 20=0.10%, 50=99.64%, 100=0.26% 00:30:27.879 cpu : usr=98.40%, sys=0.91%, ctx=31, majf=0, minf=46 00:30:27.879 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:30:27.879 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:27.879 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:27.879 issued rwts: total=6160,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:27.879 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:27.879 filename2: (groupid=0, jobs=1): err= 0: pid=3945214: Mon Jul 15 16:11:55 2024 00:30:27.879 read: IOPS=620, BW=2481KiB/s (2541kB/s)(24.2MiB/10009msec) 00:30:27.879 slat (nsec): min=7129, max=93246, avg=44784.39, stdev=20785.18 00:30:27.879 clat (usec): min=4184, max=33673, avg=25449.12, stdev=2019.54 00:30:27.879 lat (usec): min=4199, max=33735, avg=25493.90, stdev=2020.82 00:30:27.880 clat percentiles (usec): 00:30:27.880 | 1.00th=[16909], 5.00th=[24773], 10.00th=[24773], 20.00th=[25035], 00:30:27.880 | 30.00th=[25035], 40.00th=[25297], 50.00th=[25297], 60.00th=[25560], 00:30:27.880 | 70.00th=[25822], 80.00th=[26084], 90.00th=[26870], 95.00th=[27395], 00:30:27.880 | 99.00th=[28181], 99.50th=[28443], 99.90th=[33424], 99.95th=[33817], 00:30:27.880 | 99.99th=[33817] 00:30:27.880 bw ( KiB/s): min= 2299, max= 2688, per=4.18%, avg=2476.25, stdev=112.20, samples=20 00:30:27.880 iops : min= 574, max= 672, avg=619.00, stdev=28.09, samples=20 00:30:27.880 lat (msec) : 10=0.77%, 20=0.26%, 50=98.97% 00:30:27.880 cpu : usr=98.52%, sys=0.92%, ctx=40, majf=0, minf=39 00:30:27.880 IO depths : 1=6.2%, 2=12.4%, 4=24.9%, 8=50.2%, 16=6.3%, 32=0.0%, >=64=0.0% 
00:30:27.880 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:27.880 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:27.880 issued rwts: total=6208,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:27.880 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:27.880 filename2: (groupid=0, jobs=1): err= 0: pid=3945215: Mon Jul 15 16:11:55 2024 00:30:27.880 read: IOPS=615, BW=2461KiB/s (2520kB/s)(24.1MiB/10014msec) 00:30:27.880 slat (usec): min=7, max=143, avg=42.17, stdev=14.97 00:30:27.880 clat (usec): min=14145, max=58062, avg=25662.09, stdev=1937.03 00:30:27.880 lat (usec): min=14185, max=58092, avg=25704.26, stdev=1936.05 00:30:27.880 clat percentiles (usec): 00:30:27.880 | 1.00th=[24511], 5.00th=[24773], 10.00th=[24773], 20.00th=[25035], 00:30:27.880 | 30.00th=[25035], 40.00th=[25297], 50.00th=[25297], 60.00th=[25560], 00:30:27.880 | 70.00th=[25822], 80.00th=[26084], 90.00th=[26870], 95.00th=[27657], 00:30:27.880 | 99.00th=[28181], 99.50th=[28443], 99.90th=[57934], 99.95th=[57934], 00:30:27.880 | 99.99th=[57934] 00:30:27.880 bw ( KiB/s): min= 2304, max= 2560, per=4.15%, avg=2457.05, stdev=78.49, samples=20 00:30:27.880 iops : min= 576, max= 640, avg=614.20, stdev=19.61, samples=20 00:30:27.880 lat (msec) : 20=0.36%, 50=99.38%, 100=0.26% 00:30:27.880 cpu : usr=98.93%, sys=0.65%, ctx=55, majf=0, minf=53 00:30:27.880 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:30:27.880 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:27.880 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:27.880 issued rwts: total=6160,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:27.880 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:27.880 filename2: (groupid=0, jobs=1): err= 0: pid=3945216: Mon Jul 15 16:11:55 2024 00:30:27.880 read: IOPS=615, BW=2460KiB/s (2519kB/s)(24.1MiB/10015msec) 00:30:27.880 slat (nsec): min=5860, max=95554, avg=52279.11, stdev=15375.80 00:30:27.880 clat (usec): min=22386, max=50706, avg=25582.64, stdev=1517.18 00:30:27.880 lat (usec): min=22436, max=50724, avg=25634.92, stdev=1514.98 00:30:27.880 clat percentiles (usec): 00:30:27.880 | 1.00th=[24511], 5.00th=[24511], 10.00th=[24773], 20.00th=[24773], 00:30:27.880 | 30.00th=[25035], 40.00th=[25297], 50.00th=[25297], 60.00th=[25560], 00:30:27.880 | 70.00th=[25560], 80.00th=[26084], 90.00th=[26870], 95.00th=[27395], 00:30:27.880 | 99.00th=[28181], 99.50th=[28443], 99.90th=[50594], 99.95th=[50594], 00:30:27.880 | 99.99th=[50594] 00:30:27.880 bw ( KiB/s): min= 2304, max= 2560, per=4.15%, avg=2457.30, stdev=78.40, samples=20 00:30:27.880 iops : min= 576, max= 640, avg=614.30, stdev=19.57, samples=20 00:30:27.880 lat (msec) : 50=99.74%, 100=0.26% 00:30:27.880 cpu : usr=97.76%, sys=1.21%, ctx=130, majf=0, minf=40 00:30:27.880 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:30:27.880 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:27.880 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:27.880 issued rwts: total=6160,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:27.880 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:27.880 filename2: (groupid=0, jobs=1): err= 0: pid=3945217: Mon Jul 15 16:11:55 2024 00:30:27.880 read: IOPS=623, BW=2492KiB/s (2552kB/s)(24.4MiB/10005msec) 00:30:27.880 slat (nsec): min=6110, max=94277, avg=19203.58, stdev=16092.90 00:30:27.880 clat (usec): min=4746, 
max=76539, avg=25596.78, stdev=3579.15 00:30:27.880 lat (usec): min=4753, max=76595, avg=25615.99, stdev=3577.86 00:30:27.880 clat percentiles (usec): 00:30:27.880 | 1.00th=[18220], 5.00th=[20055], 10.00th=[21627], 20.00th=[23987], 00:30:27.880 | 30.00th=[25297], 40.00th=[25297], 50.00th=[25560], 60.00th=[25560], 00:30:27.880 | 70.00th=[26084], 80.00th=[27132], 90.00th=[29230], 95.00th=[31327], 00:30:27.880 | 99.00th=[33817], 99.50th=[34866], 99.90th=[61080], 99.95th=[61080], 00:30:27.880 | 99.99th=[76022] 00:30:27.880 bw ( KiB/s): min= 2272, max= 2592, per=4.19%, avg=2484.21, stdev=69.71, samples=19 00:30:27.880 iops : min= 568, max= 648, avg=621.05, stdev=17.43, samples=19 00:30:27.880 lat (msec) : 10=0.19%, 20=4.33%, 50=95.22%, 100=0.26% 00:30:27.880 cpu : usr=99.10%, sys=0.52%, ctx=14, majf=0, minf=70 00:30:27.880 IO depths : 1=0.1%, 2=0.5%, 4=3.8%, 8=79.4%, 16=16.2%, 32=0.0%, >=64=0.0% 00:30:27.880 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:27.880 complete : 0=0.0%, 4=89.3%, 8=8.7%, 16=2.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:27.880 issued rwts: total=6234,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:27.880 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:27.880 00:30:27.880 Run status group 0 (all jobs): 00:30:27.880 READ: bw=57.9MiB/s (60.7MB/s), 2456KiB/s-2584KiB/s (2515kB/s-2646kB/s), io=580MiB (608MB), run=10002-10019msec 00:30:27.880 16:11:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:30:27.880 16:11:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:30:27.880 16:11:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:30:27.880 16:11:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:30:27.880 16:11:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:30:27.880 16:11:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:30:27.880 16:11:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:27.880 16:11:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:27.880 16:11:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:27.880 16:11:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:30:27.880 16:11:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:27.880 16:11:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:27.880 16:11:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:27.880 16:11:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:30:27.880 16:11:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:30:27.880 16:11:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:30:27.880 16:11:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:27.880 16:11:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:27.880 16:11:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:27.880 16:11:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:27.880 16:11:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:30:27.880 
16:11:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:27.880 16:11:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:27.880 16:11:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:27.880 16:11:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:30:27.880 16:11:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:30:27.880 16:11:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:30:27.880 16:11:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:30:27.880 16:11:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:27.880 16:11:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:27.880 16:11:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:27.880 16:11:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:30:27.880 16:11:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:27.880 16:11:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:27.880 16:11:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:27.880 16:11:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:30:27.880 16:11:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:30:27.880 16:11:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:30:27.880 16:11:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:30:27.880 16:11:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:30:27.880 16:11:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:30:27.880 16:11:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:30:27.880 16:11:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:30:27.880 16:11:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:30:27.880 16:11:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:30:27.880 16:11:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:30:27.880 16:11:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:30:27.880 16:11:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:27.880 16:11:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:27.880 bdev_null0 00:30:27.880 16:11:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:27.880 16:11:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:30:27.880 16:11:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:27.880 16:11:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:27.880 16:11:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:27.880 16:11:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:30:27.880 16:11:55 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:30:27.880 16:11:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:27.880 16:11:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:27.880 16:11:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:30:27.880 16:11:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:27.880 16:11:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:27.880 [2024-07-15 16:11:55.398918] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:27.880 16:11:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:27.880 16:11:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:30:27.880 16:11:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:30:27.880 16:11:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:30:27.880 16:11:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:30:27.881 16:11:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:27.881 16:11:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:27.881 bdev_null1 00:30:27.881 16:11:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:27.881 16:11:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:30:27.881 16:11:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:27.881 16:11:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:27.881 16:11:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:27.881 16:11:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:30:27.881 16:11:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:27.881 16:11:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:27.881 16:11:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:27.881 16:11:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:27.881 16:11:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:27.881 16:11:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:27.881 16:11:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:27.881 16:11:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:30:27.881 16:11:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:30:27.881 16:11:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:30:27.881 16:11:55 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:30:27.881 16:11:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:27.881 16:11:55 
nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:30:27.881 16:11:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:27.881 16:11:55 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:27.881 16:11:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:30:27.881 16:11:55 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:27.881 { 00:30:27.881 "params": { 00:30:27.881 "name": "Nvme$subsystem", 00:30:27.881 "trtype": "$TEST_TRANSPORT", 00:30:27.881 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:27.881 "adrfam": "ipv4", 00:30:27.881 "trsvcid": "$NVMF_PORT", 00:30:27.881 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:27.881 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:27.881 "hdgst": ${hdgst:-false}, 00:30:27.881 "ddgst": ${ddgst:-false} 00:30:27.881 }, 00:30:27.881 "method": "bdev_nvme_attach_controller" 00:30:27.881 } 00:30:27.881 EOF 00:30:27.881 )") 00:30:27.881 16:11:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:30:27.881 16:11:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:30:27.881 16:11:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:30:27.881 16:11:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:30:27.881 16:11:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:30:27.881 16:11:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:27.881 16:11:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:30:27.881 16:11:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:30:27.881 16:11:55 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:30:27.881 16:11:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:30:27.881 16:11:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:30:27.881 16:11:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:30:27.881 16:11:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:27.881 16:11:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:30:27.881 16:11:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:30:27.881 16:11:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:30:27.881 16:11:55 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:27.881 16:11:55 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:27.881 { 00:30:27.881 "params": { 00:30:27.881 "name": "Nvme$subsystem", 00:30:27.881 "trtype": "$TEST_TRANSPORT", 00:30:27.881 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:27.881 "adrfam": "ipv4", 00:30:27.881 "trsvcid": "$NVMF_PORT", 00:30:27.881 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:27.881 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:27.881 "hdgst": ${hdgst:-false}, 00:30:27.881 "ddgst": ${ddgst:-false} 
00:30:27.881 }, 00:30:27.881 "method": "bdev_nvme_attach_controller" 00:30:27.881 } 00:30:27.881 EOF 00:30:27.881 )") 00:30:27.881 16:11:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:30:27.881 16:11:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:30:27.881 16:11:55 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:30:27.881 16:11:55 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 00:30:27.881 16:11:55 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:30:27.881 16:11:55 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:30:27.881 "params": { 00:30:27.881 "name": "Nvme0", 00:30:27.881 "trtype": "tcp", 00:30:27.881 "traddr": "10.0.0.2", 00:30:27.881 "adrfam": "ipv4", 00:30:27.881 "trsvcid": "4420", 00:30:27.881 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:27.881 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:30:27.881 "hdgst": false, 00:30:27.881 "ddgst": false 00:30:27.881 }, 00:30:27.881 "method": "bdev_nvme_attach_controller" 00:30:27.881 },{ 00:30:27.881 "params": { 00:30:27.881 "name": "Nvme1", 00:30:27.881 "trtype": "tcp", 00:30:27.881 "traddr": "10.0.0.2", 00:30:27.881 "adrfam": "ipv4", 00:30:27.881 "trsvcid": "4420", 00:30:27.881 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:27.881 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:30:27.881 "hdgst": false, 00:30:27.881 "ddgst": false 00:30:27.881 }, 00:30:27.881 "method": "bdev_nvme_attach_controller" 00:30:27.881 }' 00:30:27.881 16:11:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:30:27.881 16:11:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:30:27.881 16:11:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:30:27.881 16:11:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:27.881 16:11:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:30:27.881 16:11:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:30:27.881 16:11:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:30:27.881 16:11:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:30:27.881 16:11:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:30:27.881 16:11:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:27.881 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:30:27.881 ... 00:30:27.881 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:30:27.881 ... 
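From here on fio talks to those subsystems through SPDK's external ioengine: the rendered bdev_nvme_attach_controller params above go in as /dev/fd/62 and the generated job file as /dev/fd/61. A standalone equivalent, assuming the plugin was built under a hypothetical $SPDK_DIR and that spdk.json and dif.fio are hand-written stand-ins for those two descriptors:

  # LD_PRELOAD mirrors the fio_plugin wrapper used by the harness; the job
  # file's filename= entries would name the attached bdevs (presumably
  # Nvme0n1 and Nvme1n1 for the two controllers attached here).
  LD_PRELOAD=$SPDK_DIR/build/fio/spdk_bdev \
      /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf spdk.json dif.fio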
00:30:27.881 fio-3.35 00:30:27.881 Starting 4 threads 00:30:27.881 EAL: No free 2048 kB hugepages reported on node 1 00:30:33.151 00:30:33.151 filename0: (groupid=0, jobs=1): err= 0: pid=3947250: Mon Jul 15 16:12:01 2024 00:30:33.151 read: IOPS=2617, BW=20.4MiB/s (21.4MB/s)(102MiB/5003msec) 00:30:33.151 slat (nsec): min=4233, max=71538, avg=12057.95, stdev=8476.79 00:30:33.151 clat (usec): min=951, max=49023, avg=3020.72, stdev=1246.52 00:30:33.151 lat (usec): min=966, max=49037, avg=3032.78, stdev=1246.37 00:30:33.151 clat percentiles (usec): 00:30:33.151 | 1.00th=[ 2008], 5.00th=[ 2311], 10.00th=[ 2507], 20.00th=[ 2671], 00:30:33.151 | 30.00th=[ 2769], 40.00th=[ 2835], 50.00th=[ 2933], 60.00th=[ 2999], 00:30:33.151 | 70.00th=[ 3064], 80.00th=[ 3228], 90.00th=[ 3720], 95.00th=[ 4113], 00:30:33.151 | 99.00th=[ 4621], 99.50th=[ 4883], 99.90th=[ 5538], 99.95th=[49021], 00:30:33.151 | 99.99th=[49021] 00:30:33.151 bw ( KiB/s): min=19401, max=21712, per=25.15%, avg=20939.67, stdev=652.27, samples=9 00:30:33.151 iops : min= 2425, max= 2714, avg=2617.44, stdev=81.57, samples=9 00:30:33.151 lat (usec) : 1000=0.01% 00:30:33.151 lat (msec) : 2=0.99%, 4=91.22%, 10=7.72%, 50=0.06% 00:30:33.151 cpu : usr=97.04%, sys=2.62%, ctx=10, majf=0, minf=0 00:30:33.151 IO depths : 1=0.2%, 2=3.1%, 4=68.9%, 8=27.8%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:33.151 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:33.151 complete : 0=0.0%, 4=92.7%, 8=7.3%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:33.151 issued rwts: total=13093,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:33.151 latency : target=0, window=0, percentile=100.00%, depth=8 00:30:33.151 filename0: (groupid=0, jobs=1): err= 0: pid=3947251: Mon Jul 15 16:12:01 2024 00:30:33.151 read: IOPS=2622, BW=20.5MiB/s (21.5MB/s)(102MiB/5001msec) 00:30:33.151 slat (usec): min=6, max=121, avg=11.99, stdev= 8.18 00:30:33.151 clat (usec): min=1179, max=5765, avg=3015.43, stdev=552.68 00:30:33.151 lat (usec): min=1188, max=5776, avg=3027.42, stdev=552.50 00:30:33.151 clat percentiles (usec): 00:30:33.151 | 1.00th=[ 1778], 5.00th=[ 2311], 10.00th=[ 2507], 20.00th=[ 2671], 00:30:33.151 | 30.00th=[ 2737], 40.00th=[ 2835], 50.00th=[ 2933], 60.00th=[ 2999], 00:30:33.151 | 70.00th=[ 3064], 80.00th=[ 3261], 90.00th=[ 3916], 95.00th=[ 4178], 00:30:33.151 | 99.00th=[ 4686], 99.50th=[ 4883], 99.90th=[ 5407], 99.95th=[ 5538], 00:30:33.151 | 99.99th=[ 5735] 00:30:33.151 bw ( KiB/s): min=20192, max=21984, per=25.23%, avg=20999.11, stdev=512.94, samples=9 00:30:33.151 iops : min= 2524, max= 2748, avg=2624.89, stdev=64.12, samples=9 00:30:33.151 lat (msec) : 2=1.78%, 4=88.76%, 10=9.46% 00:30:33.151 cpu : usr=97.20%, sys=2.48%, ctx=11, majf=0, minf=9 00:30:33.151 IO depths : 1=0.1%, 2=1.9%, 4=70.4%, 8=27.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:33.151 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:33.151 complete : 0=0.0%, 4=92.8%, 8=7.2%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:33.151 issued rwts: total=13115,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:33.151 latency : target=0, window=0, percentile=100.00%, depth=8 00:30:33.151 filename1: (groupid=0, jobs=1): err= 0: pid=3947252: Mon Jul 15 16:12:01 2024 00:30:33.151 read: IOPS=2545, BW=19.9MiB/s (20.9MB/s)(99.4MiB/5001msec) 00:30:33.151 slat (nsec): min=6162, max=69242, avg=12382.01, stdev=6414.77 00:30:33.151 clat (usec): min=647, max=44886, avg=3107.06, stdev=1174.00 00:30:33.151 lat (usec): min=654, max=44907, avg=3119.44, stdev=1173.65 00:30:33.151 clat percentiles (usec): 
00:30:33.151 | 1.00th=[ 2114], 5.00th=[ 2442], 10.00th=[ 2573], 20.00th=[ 2704], 00:30:33.151 | 30.00th=[ 2802], 40.00th=[ 2900], 50.00th=[ 2966], 60.00th=[ 3032], 00:30:33.151 | 70.00th=[ 3130], 80.00th=[ 3326], 90.00th=[ 3949], 95.00th=[ 4146], 00:30:33.151 | 99.00th=[ 4752], 99.50th=[ 5014], 99.90th=[ 5800], 99.95th=[44827], 00:30:33.152 | 99.99th=[44827] 00:30:33.152 bw ( KiB/s): min=18484, max=20800, per=24.34%, avg=20260.00, stdev=714.02, samples=9 00:30:33.152 iops : min= 2310, max= 2600, avg=2532.44, stdev=89.41, samples=9 00:30:33.152 lat (usec) : 750=0.02% 00:30:33.152 lat (msec) : 2=0.52%, 4=89.87%, 10=9.53%, 50=0.06% 00:30:33.152 cpu : usr=97.22%, sys=2.42%, ctx=12, majf=0, minf=9 00:30:33.152 IO depths : 1=0.3%, 2=3.3%, 4=68.7%, 8=27.8%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:33.152 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:33.152 complete : 0=0.0%, 4=92.9%, 8=7.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:33.152 issued rwts: total=12729,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:33.152 latency : target=0, window=0, percentile=100.00%, depth=8 00:30:33.152 filename1: (groupid=0, jobs=1): err= 0: pid=3947253: Mon Jul 15 16:12:01 2024 00:30:33.152 read: IOPS=2624, BW=20.5MiB/s (21.5MB/s)(103MiB/5001msec) 00:30:33.152 slat (nsec): min=6118, max=68780, avg=12073.99, stdev=8383.62 00:30:33.152 clat (usec): min=927, max=5977, avg=3013.36, stdev=532.94 00:30:33.152 lat (usec): min=939, max=5985, avg=3025.43, stdev=532.52 00:30:33.152 clat percentiles (usec): 00:30:33.152 | 1.00th=[ 1762], 5.00th=[ 2343], 10.00th=[ 2507], 20.00th=[ 2671], 00:30:33.152 | 30.00th=[ 2802], 40.00th=[ 2868], 50.00th=[ 2933], 60.00th=[ 2999], 00:30:33.152 | 70.00th=[ 3064], 80.00th=[ 3261], 90.00th=[ 3851], 95.00th=[ 4146], 00:30:33.152 | 99.00th=[ 4621], 99.50th=[ 4817], 99.90th=[ 5145], 99.95th=[ 5342], 00:30:33.152 | 99.99th=[ 5932] 00:30:33.152 bw ( KiB/s): min=20608, max=21904, per=25.21%, avg=20983.11, stdev=393.15, samples=9 00:30:33.152 iops : min= 2576, max= 2738, avg=2622.89, stdev=49.14, samples=9 00:30:33.152 lat (usec) : 1000=0.01% 00:30:33.152 lat (msec) : 2=1.64%, 4=89.77%, 10=8.59% 00:30:33.152 cpu : usr=96.90%, sys=2.78%, ctx=8, majf=0, minf=9 00:30:33.152 IO depths : 1=0.1%, 2=2.2%, 4=70.3%, 8=27.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:33.152 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:33.152 complete : 0=0.0%, 4=92.4%, 8=7.6%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:33.152 issued rwts: total=13123,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:33.152 latency : target=0, window=0, percentile=100.00%, depth=8 00:30:33.152 00:30:33.152 Run status group 0 (all jobs): 00:30:33.152 READ: bw=81.3MiB/s (85.2MB/s), 19.9MiB/s-20.5MiB/s (20.9MB/s-21.5MB/s), io=407MiB (426MB), run=5001-5003msec 00:30:33.152 16:12:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:30:33.152 16:12:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:30:33.152 16:12:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:30:33.152 16:12:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:30:33.152 16:12:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:30:33.152 16:12:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:30:33.152 16:12:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:33.152 16:12:01 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@10 -- # set +x 00:30:33.152 16:12:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:33.152 16:12:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:30:33.152 16:12:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:33.152 16:12:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:33.152 16:12:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:33.152 16:12:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:30:33.152 16:12:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:30:33.152 16:12:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:30:33.152 16:12:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:33.152 16:12:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:33.152 16:12:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:33.152 16:12:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:33.152 16:12:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:30:33.152 16:12:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:33.152 16:12:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:33.152 16:12:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:33.152 00:30:33.152 real 0m24.359s 00:30:33.152 user 4m51.253s 00:30:33.152 sys 0m4.396s 00:30:33.152 16:12:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1124 -- # xtrace_disable 00:30:33.152 16:12:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:33.152 ************************************ 00:30:33.152 END TEST fio_dif_rand_params 00:30:33.152 ************************************ 00:30:33.152 16:12:01 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:30:33.152 16:12:01 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:30:33.152 16:12:01 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:30:33.152 16:12:01 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:33.152 16:12:01 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:30:33.152 ************************************ 00:30:33.152 START TEST fio_dif_digest 00:30:33.152 ************************************ 00:30:33.152 16:12:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1123 -- # fio_dif_digest 00:30:33.152 16:12:01 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:30:33.152 16:12:01 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:30:33.152 16:12:01 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:30:33.152 16:12:01 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:30:33.152 16:12:01 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:30:33.152 16:12:01 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:30:33.152 16:12:01 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:30:33.152 16:12:01 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:30:33.152 16:12:01 nvmf_dif.fio_dif_digest -- target/dif.sh@128 
-- # hdgst=true 00:30:33.152 16:12:01 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:30:33.152 16:12:01 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:30:33.152 16:12:01 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:30:33.152 16:12:01 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:30:33.152 16:12:01 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:30:33.152 16:12:01 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:30:33.152 16:12:01 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:30:33.152 16:12:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:33.152 16:12:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:30:33.152 bdev_null0 00:30:33.152 16:12:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:33.152 16:12:01 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:30:33.152 16:12:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:33.152 16:12:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:30:33.152 16:12:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:33.152 16:12:01 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:30:33.152 16:12:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:33.152 16:12:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:30:33.152 16:12:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:33.152 16:12:01 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:30:33.152 16:12:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:33.152 16:12:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:30:33.152 [2024-07-15 16:12:01.905547] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:33.152 16:12:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:33.152 16:12:01 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:30:33.152 16:12:01 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:30:33.152 16:12:01 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:30:33.152 16:12:01 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # config=() 00:30:33.152 16:12:01 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:33.152 16:12:01 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # local subsystem config 00:30:33.152 16:12:01 nvmf_dif.fio_dif_digest -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:33.152 16:12:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:33.152 16:12:01 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:30:33.152 16:12:01 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 
00:30:33.152 { 00:30:33.152 "params": { 00:30:33.152 "name": "Nvme$subsystem", 00:30:33.152 "trtype": "$TEST_TRANSPORT", 00:30:33.152 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:33.152 "adrfam": "ipv4", 00:30:33.152 "trsvcid": "$NVMF_PORT", 00:30:33.152 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:33.152 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:33.152 "hdgst": ${hdgst:-false}, 00:30:33.152 "ddgst": ${ddgst:-false} 00:30:33.152 }, 00:30:33.152 "method": "bdev_nvme_attach_controller" 00:30:33.152 } 00:30:33.152 EOF 00:30:33.152 )") 00:30:33.152 16:12:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:30:33.152 16:12:01 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:30:33.152 16:12:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:30:33.152 16:12:01 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:30:33.152 16:12:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # local sanitizers 00:30:33.152 16:12:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:33.152 16:12:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # shift 00:30:33.152 16:12:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local asan_lib= 00:30:33.152 16:12:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:30:33.152 16:12:01 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # cat 00:30:33.153 16:12:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:33.153 16:12:01 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:30:33.153 16:12:01 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:30:33.153 16:12:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libasan 00:30:33.153 16:12:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:30:33.153 16:12:01 nvmf_dif.fio_dif_digest -- nvmf/common.sh@556 -- # jq . 
00:30:33.153 16:12:01 nvmf_dif.fio_dif_digest -- nvmf/common.sh@557 -- # IFS=, 00:30:33.153 16:12:01 nvmf_dif.fio_dif_digest -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:30:33.153 "params": { 00:30:33.153 "name": "Nvme0", 00:30:33.153 "trtype": "tcp", 00:30:33.153 "traddr": "10.0.0.2", 00:30:33.153 "adrfam": "ipv4", 00:30:33.153 "trsvcid": "4420", 00:30:33.153 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:33.153 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:30:33.153 "hdgst": true, 00:30:33.153 "ddgst": true 00:30:33.153 }, 00:30:33.153 "method": "bdev_nvme_attach_controller" 00:30:33.153 }' 00:30:33.153 16:12:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib= 00:30:33.153 16:12:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:30:33.153 16:12:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:30:33.153 16:12:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:33.153 16:12:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:30:33.153 16:12:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:30:33.153 16:12:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib= 00:30:33.153 16:12:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:30:33.153 16:12:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:30:33.153 16:12:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:33.410 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:30:33.410 ... 
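This fio_dif_digest pass has the same shape as fio_dif_rand_params above except for the two digest switches: "hdgst": true and "ddgst": true turn on the NVMe/TCP header and data digests on the attached controller, which is what the test exercises. The rendered method block, reproduced as a standalone file (nvme_attach.json is a hypothetical name; gen_nvmf_target_json wraps this block in the full structure that --spdk_json_conf expects, which the trace does not show):

  nvme_attach.json:
  {
    "params": {
      "name": "Nvme0",
      "trtype": "tcp",
      "traddr": "10.0.0.2",
      "adrfam": "ipv4",
      "trsvcid": "4420",
      "subnqn": "nqn.2016-06.io.spdk:cnode0",
      "hostnqn": "nqn.2016-06.io.spdk:host0",
      "hdgst": true,
      "ddgst": true
    },
    "method": "bdev_nvme_attach_controller"
  }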
00:30:33.411 fio-3.35 00:30:33.411 Starting 3 threads 00:30:33.411 EAL: No free 2048 kB hugepages reported on node 1 00:30:45.642 00:30:45.642 filename0: (groupid=0, jobs=1): err= 0: pid=3948467: Mon Jul 15 16:12:12 2024 00:30:45.642 read: IOPS=280, BW=35.1MiB/s (36.8MB/s)(352MiB/10046msec) 00:30:45.642 slat (nsec): min=4292, max=19463, avg=11558.02, stdev=2214.28 00:30:45.642 clat (usec): min=6243, max=50672, avg=10662.31, stdev=1369.83 00:30:45.642 lat (usec): min=6256, max=50680, avg=10673.86, stdev=1369.78 00:30:45.642 clat percentiles (usec): 00:30:45.642 | 1.00th=[ 8094], 5.00th=[ 9241], 10.00th=[ 9634], 20.00th=[10028], 00:30:45.642 | 30.00th=[10290], 40.00th=[10421], 50.00th=[10683], 60.00th=[10814], 00:30:45.642 | 70.00th=[11076], 80.00th=[11338], 90.00th=[11731], 95.00th=[11994], 00:30:45.642 | 99.00th=[12911], 99.50th=[13304], 99.90th=[16319], 99.95th=[49021], 00:30:45.642 | 99.99th=[50594] 00:30:45.642 bw ( KiB/s): min=35072, max=38144, per=34.56%, avg=36057.60, stdev=744.16, samples=20 00:30:45.642 iops : min= 274, max= 298, avg=281.70, stdev= 5.81, samples=20 00:30:45.642 lat (msec) : 10=20.18%, 20=79.74%, 50=0.04%, 100=0.04% 00:30:45.642 cpu : usr=94.51%, sys=5.16%, ctx=24, majf=0, minf=131 00:30:45.642 IO depths : 1=0.6%, 2=99.4%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:45.642 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:45.642 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:45.642 issued rwts: total=2819,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:45.642 latency : target=0, window=0, percentile=100.00%, depth=3 00:30:45.642 filename0: (groupid=0, jobs=1): err= 0: pid=3948468: Mon Jul 15 16:12:12 2024 00:30:45.642 read: IOPS=266, BW=33.3MiB/s (34.9MB/s)(335MiB/10043msec) 00:30:45.642 slat (nsec): min=6535, max=25301, avg=11554.75, stdev=2253.00 00:30:45.643 clat (usec): min=6183, max=46468, avg=11224.89, stdev=1331.07 00:30:45.643 lat (usec): min=6196, max=46483, avg=11236.45, stdev=1331.02 00:30:45.643 clat percentiles (usec): 00:30:45.643 | 1.00th=[ 8455], 5.00th=[ 9765], 10.00th=[10159], 20.00th=[10552], 00:30:45.643 | 30.00th=[10814], 40.00th=[10945], 50.00th=[11207], 60.00th=[11338], 00:30:45.643 | 70.00th=[11600], 80.00th=[11863], 90.00th=[12256], 95.00th=[12780], 00:30:45.643 | 99.00th=[13435], 99.50th=[13698], 99.90th=[14877], 99.95th=[46400], 00:30:45.643 | 99.99th=[46400] 00:30:45.643 bw ( KiB/s): min=33024, max=35328, per=32.82%, avg=34240.00, stdev=684.28, samples=20 00:30:45.643 iops : min= 258, max= 276, avg=267.50, stdev= 5.35, samples=20 00:30:45.643 lat (msec) : 10=6.95%, 20=92.98%, 50=0.07% 00:30:45.643 cpu : usr=94.64%, sys=5.03%, ctx=17, majf=0, minf=117 00:30:45.643 IO depths : 1=0.2%, 2=99.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:45.643 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:45.643 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:45.643 issued rwts: total=2677,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:45.643 latency : target=0, window=0, percentile=100.00%, depth=3 00:30:45.643 filename0: (groupid=0, jobs=1): err= 0: pid=3948469: Mon Jul 15 16:12:12 2024 00:30:45.643 read: IOPS=269, BW=33.6MiB/s (35.3MB/s)(337MiB/10004msec) 00:30:45.643 slat (nsec): min=6438, max=25765, avg=11638.72, stdev=2208.95 00:30:45.643 clat (usec): min=4085, max=52952, avg=11133.98, stdev=2119.73 00:30:45.643 lat (usec): min=4093, max=52964, avg=11145.62, stdev=2119.74 00:30:45.643 clat percentiles (usec): 00:30:45.643 | 
1.00th=[ 9110], 5.00th=[ 9634], 10.00th=[ 9896], 20.00th=[10421], 00:30:45.643 | 30.00th=[10552], 40.00th=[10814], 50.00th=[11076], 60.00th=[11207], 00:30:45.643 | 70.00th=[11469], 80.00th=[11731], 90.00th=[12125], 95.00th=[12518], 00:30:45.643 | 99.00th=[13435], 99.50th=[13960], 99.90th=[52167], 99.95th=[52167], 00:30:45.643 | 99.99th=[52691] 00:30:45.643 bw ( KiB/s): min=30976, max=36096, per=33.01%, avg=34435.35, stdev=1134.63, samples=20 00:30:45.643 iops : min= 242, max= 282, avg=269.00, stdev= 8.89, samples=20 00:30:45.643 lat (msec) : 10=11.22%, 20=88.56%, 100=0.22% 00:30:45.643 cpu : usr=94.49%, sys=5.19%, ctx=20, majf=0, minf=126 00:30:45.643 IO depths : 1=0.3%, 2=99.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:45.643 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:45.643 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:45.643 issued rwts: total=2692,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:45.643 latency : target=0, window=0, percentile=100.00%, depth=3 00:30:45.643 00:30:45.643 Run status group 0 (all jobs): 00:30:45.643 READ: bw=102MiB/s (107MB/s), 33.3MiB/s-35.1MiB/s (34.9MB/s-36.8MB/s), io=1024MiB (1073MB), run=10004-10046msec 00:30:45.643 16:12:12 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:30:45.643 16:12:12 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:30:45.643 16:12:12 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:30:45.643 16:12:12 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:30:45.643 16:12:12 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:30:45.643 16:12:12 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:30:45.643 16:12:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:45.643 16:12:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:30:45.643 16:12:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:45.643 16:12:12 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:30:45.643 16:12:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:45.643 16:12:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:30:45.643 16:12:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:45.643 00:30:45.643 real 0m11.061s 00:30:45.643 user 0m34.889s 00:30:45.643 sys 0m1.827s 00:30:45.643 16:12:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1124 -- # xtrace_disable 00:30:45.643 16:12:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:30:45.643 ************************************ 00:30:45.643 END TEST fio_dif_digest 00:30:45.643 ************************************ 00:30:45.643 16:12:12 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:30:45.643 16:12:12 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:30:45.643 16:12:12 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:30:45.643 16:12:12 nvmf_dif -- nvmf/common.sh@488 -- # nvmfcleanup 00:30:45.643 16:12:12 nvmf_dif -- nvmf/common.sh@117 -- # sync 00:30:45.643 16:12:12 nvmf_dif -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:30:45.643 16:12:12 nvmf_dif -- nvmf/common.sh@120 -- # set +e 00:30:45.643 16:12:12 nvmf_dif -- nvmf/common.sh@121 -- # for i in {1..20} 00:30:45.643 16:12:12 nvmf_dif -- nvmf/common.sh@122 -- # modprobe -v -r 
nvme-tcp 00:30:45.643 rmmod nvme_tcp 00:30:45.643 rmmod nvme_fabrics 00:30:45.643 rmmod nvme_keyring 00:30:45.643 16:12:13 nvmf_dif -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:30:45.643 16:12:13 nvmf_dif -- nvmf/common.sh@124 -- # set -e 00:30:45.643 16:12:13 nvmf_dif -- nvmf/common.sh@125 -- # return 0 00:30:45.643 16:12:13 nvmf_dif -- nvmf/common.sh@489 -- # '[' -n 3939740 ']' 00:30:45.643 16:12:13 nvmf_dif -- nvmf/common.sh@490 -- # killprocess 3939740 00:30:45.643 16:12:13 nvmf_dif -- common/autotest_common.sh@948 -- # '[' -z 3939740 ']' 00:30:45.643 16:12:13 nvmf_dif -- common/autotest_common.sh@952 -- # kill -0 3939740 00:30:45.643 16:12:13 nvmf_dif -- common/autotest_common.sh@953 -- # uname 00:30:45.643 16:12:13 nvmf_dif -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:30:45.643 16:12:13 nvmf_dif -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3939740 00:30:45.643 16:12:13 nvmf_dif -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:30:45.643 16:12:13 nvmf_dif -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:30:45.643 16:12:13 nvmf_dif -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3939740' 00:30:45.643 killing process with pid 3939740 00:30:45.643 16:12:13 nvmf_dif -- common/autotest_common.sh@967 -- # kill 3939740 00:30:45.643 16:12:13 nvmf_dif -- common/autotest_common.sh@972 -- # wait 3939740 00:30:45.643 16:12:13 nvmf_dif -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:30:45.643 16:12:13 nvmf_dif -- nvmf/common.sh@493 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:30:46.580 Waiting for block devices as requested 00:30:46.580 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:30:46.580 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:30:46.839 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:30:46.839 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:30:46.839 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:30:46.839 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:30:46.839 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:30:47.098 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:30:47.098 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:30:47.098 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:30:47.356 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:30:47.356 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:30:47.356 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:30:47.356 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:30:47.614 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:30:47.614 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:30:47.614 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:30:47.887 16:12:16 nvmf_dif -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:30:47.887 16:12:16 nvmf_dif -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:30:47.887 16:12:16 nvmf_dif -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:30:47.887 16:12:16 nvmf_dif -- nvmf/common.sh@278 -- # remove_spdk_ns 00:30:47.887 16:12:16 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:47.887 16:12:16 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:30:47.887 16:12:16 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:49.790 16:12:18 nvmf_dif -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:30:49.790 00:30:49.790 real 1m12.437s 00:30:49.790 user 7m8.589s 00:30:49.790 sys 0m17.789s 00:30:49.790 16:12:18 nvmf_dif -- common/autotest_common.sh@1124 -- # 
xtrace_disable 00:30:49.790 16:12:18 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:30:49.790 ************************************ 00:30:49.790 END TEST nvmf_dif 00:30:49.790 ************************************ 00:30:49.790 16:12:18 -- common/autotest_common.sh@1142 -- # return 0 00:30:49.790 16:12:18 -- spdk/autotest.sh@293 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:30:49.790 16:12:18 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:30:49.790 16:12:18 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:49.790 16:12:18 -- common/autotest_common.sh@10 -- # set +x 00:30:49.790 ************************************ 00:30:49.790 START TEST nvmf_abort_qd_sizes 00:30:49.790 ************************************ 00:30:49.790 16:12:18 nvmf_abort_qd_sizes -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:30:50.049 * Looking for test storage... 00:30:50.049 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:30:50.049 16:12:18 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:50.049 16:12:18 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:30:50.049 16:12:18 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:50.049 16:12:18 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:50.049 16:12:18 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:50.049 16:12:18 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:50.049 16:12:18 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:50.049 16:12:18 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:50.049 16:12:18 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:50.049 16:12:18 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:50.049 16:12:18 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:50.049 16:12:18 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:50.049 16:12:18 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:30:50.049 16:12:18 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:30:50.049 16:12:18 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:50.049 16:12:18 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:50.049 16:12:18 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:50.049 16:12:18 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:50.049 16:12:18 nvmf_abort_qd_sizes -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:50.049 16:12:18 nvmf_abort_qd_sizes -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:50.049 16:12:18 nvmf_abort_qd_sizes -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:50.049 16:12:18 nvmf_abort_qd_sizes -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:50.049 16:12:18 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:50.049 16:12:18 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:50.049 16:12:18 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:50.049 16:12:18 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:30:50.049 16:12:18 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:50.049 16:12:18 nvmf_abort_qd_sizes -- nvmf/common.sh@47 -- # : 0 00:30:50.049 16:12:18 nvmf_abort_qd_sizes -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:30:50.049 16:12:18 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:30:50.049 16:12:18 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:50.049 16:12:18 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:50.049 16:12:18 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:50.049 16:12:18 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:30:50.049 16:12:18 nvmf_abort_qd_sizes -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:30:50.049 16:12:18 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # have_pci_nics=0 00:30:50.049 16:12:18 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:30:50.049 16:12:18 nvmf_abort_qd_sizes -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:30:50.049 16:12:18 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:50.049 16:12:18 nvmf_abort_qd_sizes -- nvmf/common.sh@448 -- # prepare_net_devs 00:30:50.049 16:12:18 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # local -g is_hw=no 00:30:50.049 16:12:18 nvmf_abort_qd_sizes -- nvmf/common.sh@412 -- # remove_spdk_ns 00:30:50.049 16:12:18 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:50.049 16:12:18 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:30:50.049 16:12:18 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:50.049 16:12:18 
nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:30:50.049 16:12:18 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:30:50.049 16:12:18 nvmf_abort_qd_sizes -- nvmf/common.sh@285 -- # xtrace_disable 00:30:50.050 16:12:18 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:30:55.317 16:12:23 nvmf_abort_qd_sizes -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:55.317 16:12:23 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # pci_devs=() 00:30:55.317 16:12:23 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # local -a pci_devs 00:30:55.317 16:12:23 nvmf_abort_qd_sizes -- nvmf/common.sh@292 -- # pci_net_devs=() 00:30:55.317 16:12:23 nvmf_abort_qd_sizes -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:30:55.317 16:12:23 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # pci_drivers=() 00:30:55.317 16:12:23 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # local -A pci_drivers 00:30:55.317 16:12:23 nvmf_abort_qd_sizes -- nvmf/common.sh@295 -- # net_devs=() 00:30:55.317 16:12:23 nvmf_abort_qd_sizes -- nvmf/common.sh@295 -- # local -ga net_devs 00:30:55.317 16:12:23 nvmf_abort_qd_sizes -- nvmf/common.sh@296 -- # e810=() 00:30:55.317 16:12:23 nvmf_abort_qd_sizes -- nvmf/common.sh@296 -- # local -ga e810 00:30:55.317 16:12:23 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # x722=() 00:30:55.318 16:12:23 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # local -ga x722 00:30:55.318 16:12:23 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # mlx=() 00:30:55.318 16:12:23 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # local -ga mlx 00:30:55.318 16:12:23 nvmf_abort_qd_sizes -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:55.318 16:12:23 nvmf_abort_qd_sizes -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:55.318 16:12:23 nvmf_abort_qd_sizes -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:55.318 16:12:23 nvmf_abort_qd_sizes -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:55.318 16:12:23 nvmf_abort_qd_sizes -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:55.318 16:12:23 nvmf_abort_qd_sizes -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:55.318 16:12:23 nvmf_abort_qd_sizes -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:55.318 16:12:23 nvmf_abort_qd_sizes -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:55.318 16:12:23 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:55.318 16:12:23 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:55.318 16:12:23 nvmf_abort_qd_sizes -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:55.318 16:12:23 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:30:55.318 16:12:23 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:30:55.318 16:12:23 nvmf_abort_qd_sizes -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:30:55.318 16:12:23 nvmf_abort_qd_sizes -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:30:55.318 16:12:23 nvmf_abort_qd_sizes -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:30:55.318 16:12:23 nvmf_abort_qd_sizes -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:30:55.318 16:12:23 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:55.318 16:12:23 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # 
echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:30:55.318 Found 0000:86:00.0 (0x8086 - 0x159b) 00:30:55.318 16:12:23 nvmf_abort_qd_sizes -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:55.318 16:12:23 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:55.318 16:12:23 nvmf_abort_qd_sizes -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:55.318 16:12:23 nvmf_abort_qd_sizes -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:55.318 16:12:23 nvmf_abort_qd_sizes -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:55.318 16:12:23 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:55.318 16:12:23 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:30:55.318 Found 0000:86:00.1 (0x8086 - 0x159b) 00:30:55.318 16:12:23 nvmf_abort_qd_sizes -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:55.318 16:12:23 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:55.318 16:12:23 nvmf_abort_qd_sizes -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:55.318 16:12:23 nvmf_abort_qd_sizes -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:55.318 16:12:23 nvmf_abort_qd_sizes -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:55.318 16:12:23 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:30:55.318 16:12:23 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:30:55.318 16:12:23 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:30:55.318 16:12:23 nvmf_abort_qd_sizes -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:55.318 16:12:23 nvmf_abort_qd_sizes -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:55.318 16:12:23 nvmf_abort_qd_sizes -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:55.318 16:12:23 nvmf_abort_qd_sizes -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:55.318 16:12:23 nvmf_abort_qd_sizes -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:55.318 16:12:23 nvmf_abort_qd_sizes -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:55.318 16:12:23 nvmf_abort_qd_sizes -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:55.318 16:12:23 nvmf_abort_qd_sizes -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:30:55.318 Found net devices under 0000:86:00.0: cvl_0_0 00:30:55.318 16:12:23 nvmf_abort_qd_sizes -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:55.318 16:12:23 nvmf_abort_qd_sizes -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:55.318 16:12:23 nvmf_abort_qd_sizes -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:55.318 16:12:23 nvmf_abort_qd_sizes -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:55.318 16:12:23 nvmf_abort_qd_sizes -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:55.318 16:12:23 nvmf_abort_qd_sizes -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:55.318 16:12:23 nvmf_abort_qd_sizes -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:55.318 16:12:23 nvmf_abort_qd_sizes -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:55.318 16:12:23 nvmf_abort_qd_sizes -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:30:55.318 Found net devices under 0000:86:00.1: cvl_0_1 00:30:55.318 16:12:23 nvmf_abort_qd_sizes -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:55.318 16:12:23 nvmf_abort_qd_sizes -- nvmf/common.sh@404 -- # (( 2 == 0 )) 
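The port discovery above is pure sysfs: for each matching E810 function the script globs the device's net/ directory to find the bound kernel interface. A compact sketch of that lookup (pci_to_netdev is a hypothetical helper name; nvmf/common.sh open-codes this inside its loop):

  shopt -s nullglob   # an unbound function then yields an empty list
  pci_to_netdev() {
      local pci=$1
      local devs=("/sys/bus/pci/devices/$pci/net/"*)   # e.g. .../net/cvl_0_0
      printf '%s\n' "${devs[@]##*/}"
  }
  pci_to_netdev 0000:86:00.0   # -> cvl_0_0 in this trace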
00:30:55.318 16:12:23 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # is_hw=yes 00:30:55.318 16:12:23 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:30:55.318 16:12:23 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:30:55.318 16:12:23 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:30:55.318 16:12:23 nvmf_abort_qd_sizes -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:55.318 16:12:23 nvmf_abort_qd_sizes -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:55.318 16:12:23 nvmf_abort_qd_sizes -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:55.318 16:12:23 nvmf_abort_qd_sizes -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:30:55.318 16:12:23 nvmf_abort_qd_sizes -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:55.318 16:12:23 nvmf_abort_qd_sizes -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:55.318 16:12:23 nvmf_abort_qd_sizes -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:30:55.318 16:12:23 nvmf_abort_qd_sizes -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:55.318 16:12:23 nvmf_abort_qd_sizes -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:55.318 16:12:23 nvmf_abort_qd_sizes -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:30:55.318 16:12:23 nvmf_abort_qd_sizes -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:30:55.318 16:12:23 nvmf_abort_qd_sizes -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:30:55.318 16:12:23 nvmf_abort_qd_sizes -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:55.318 16:12:23 nvmf_abort_qd_sizes -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:55.318 16:12:23 nvmf_abort_qd_sizes -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:55.318 16:12:23 nvmf_abort_qd_sizes -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:30:55.318 16:12:23 nvmf_abort_qd_sizes -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:55.318 16:12:23 nvmf_abort_qd_sizes -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:55.318 16:12:23 nvmf_abort_qd_sizes -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:55.318 16:12:23 nvmf_abort_qd_sizes -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:30:55.318 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:55.318 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.170 ms 00:30:55.318 00:30:55.318 --- 10.0.0.2 ping statistics --- 00:30:55.318 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:55.318 rtt min/avg/max/mdev = 0.170/0.170/0.170/0.000 ms 00:30:55.318 16:12:23 nvmf_abort_qd_sizes -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:55.318 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:55.318 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.164 ms 00:30:55.318 00:30:55.318 --- 10.0.0.1 ping statistics --- 00:30:55.318 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:55.318 rtt min/avg/max/mdev = 0.164/0.164/0.164/0.000 ms 00:30:55.318 16:12:23 nvmf_abort_qd_sizes -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:55.318 16:12:23 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # return 0 00:30:55.318 16:12:23 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:30:55.318 16:12:23 nvmf_abort_qd_sizes -- nvmf/common.sh@451 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:30:57.847 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:30:57.847 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:30:57.847 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:30:57.847 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:30:57.847 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:30:57.847 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:30:57.847 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:30:57.847 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:30:57.847 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:30:57.847 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:30:57.847 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:30:57.847 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:30:57.847 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:30:57.847 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:30:57.847 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:30:57.847 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:30:58.417 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:30:58.676 16:12:27 nvmf_abort_qd_sizes -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:58.676 16:12:27 nvmf_abort_qd_sizes -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:30:58.676 16:12:27 nvmf_abort_qd_sizes -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:30:58.676 16:12:27 nvmf_abort_qd_sizes -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:58.676 16:12:27 nvmf_abort_qd_sizes -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:30:58.676 16:12:27 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:30:58.676 16:12:27 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:30:58.676 16:12:27 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:30:58.676 16:12:27 nvmf_abort_qd_sizes -- common/autotest_common.sh@722 -- # xtrace_disable 00:30:58.676 16:12:27 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:30:58.676 16:12:27 nvmf_abort_qd_sizes -- nvmf/common.sh@481 -- # nvmfpid=3956419 00:30:58.676 16:12:27 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # waitforlisten 3956419 00:30:58.676 16:12:27 nvmf_abort_qd_sizes -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:30:58.676 16:12:27 nvmf_abort_qd_sizes -- common/autotest_common.sh@829 -- # '[' -z 3956419 ']' 00:30:58.676 16:12:27 nvmf_abort_qd_sizes -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:58.676 16:12:27 nvmf_abort_qd_sizes -- common/autotest_common.sh@834 -- # local max_retries=100 00:30:58.676 16:12:27 nvmf_abort_qd_sizes -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
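With both 0000:86:00.x ports in the same host, nvmf_tcp_init splits them across a network namespace so the initiator and target stacks stay separate; the nvmf_tgt command line above then runs the target inside that namespace via ip netns exec. The sequence reduces to this sketch (interface names as discovered in this trace):

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk               # target-side port
  ip addr add 10.0.0.1/24 dev cvl_0_1                     # initiator side
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT

  # Sanity-check both directions, as the trace does:
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1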
00:30:58.676 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:58.676 16:12:27 nvmf_abort_qd_sizes -- common/autotest_common.sh@838 -- # xtrace_disable 00:30:58.676 16:12:27 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:30:58.676 [2024-07-15 16:12:27.475059] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 00:30:58.676 [2024-07-15 16:12:27.475105] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:58.676 EAL: No free 2048 kB hugepages reported on node 1 00:30:58.676 [2024-07-15 16:12:27.536625] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:58.936 [2024-07-15 16:12:27.624636] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:58.936 [2024-07-15 16:12:27.624670] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:58.936 [2024-07-15 16:12:27.624677] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:58.936 [2024-07-15 16:12:27.624683] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:58.936 [2024-07-15 16:12:27.624688] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:58.936 [2024-07-15 16:12:27.624728] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:30:58.936 [2024-07-15 16:12:27.624825] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:30:58.936 [2024-07-15 16:12:27.624840] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:30:58.936 [2024-07-15 16:12:27.624845] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:30:59.504 16:12:28 nvmf_abort_qd_sizes -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:30:59.504 16:12:28 nvmf_abort_qd_sizes -- common/autotest_common.sh@862 -- # return 0 00:30:59.504 16:12:28 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:30:59.504 16:12:28 nvmf_abort_qd_sizes -- common/autotest_common.sh@728 -- # xtrace_disable 00:30:59.504 16:12:28 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:30:59.504 16:12:28 nvmf_abort_qd_sizes -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:59.504 16:12:28 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:30:59.504 16:12:28 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:30:59.504 16:12:28 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:30:59.504 16:12:28 nvmf_abort_qd_sizes -- scripts/common.sh@309 -- # local bdf bdfs 00:30:59.504 16:12:28 nvmf_abort_qd_sizes -- scripts/common.sh@310 -- # local nvmes 00:30:59.504 16:12:28 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # [[ -n 0000:5e:00.0 ]] 00:30:59.504 16:12:28 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:30:59.504 16:12:28 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:30:59.504 16:12:28 nvmf_abort_qd_sizes -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:5e:00.0 ]] 00:30:59.504 16:12:28 
nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # uname -s 00:30:59.504 16:12:28 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:30:59.504 16:12:28 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:30:59.504 16:12:28 nvmf_abort_qd_sizes -- scripts/common.sh@325 -- # (( 1 )) 00:30:59.504 16:12:28 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # printf '%s\n' 0000:5e:00.0 00:30:59.504 16:12:28 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:30:59.504 16:12:28 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:5e:00.0 00:30:59.504 16:12:28 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:30:59.504 16:12:28 nvmf_abort_qd_sizes -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:30:59.504 16:12:28 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:59.504 16:12:28 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:30:59.504 ************************************ 00:30:59.504 START TEST spdk_target_abort 00:30:59.504 ************************************ 00:30:59.504 16:12:28 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1123 -- # spdk_target 00:30:59.504 16:12:28 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:30:59.504 16:12:28 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:5e:00.0 -b spdk_target 00:30:59.504 16:12:28 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:59.504 16:12:28 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:31:02.794 spdk_targetn1 00:31:02.794 16:12:31 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:02.794 16:12:31 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:31:02.794 16:12:31 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:02.794 16:12:31 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:31:02.794 [2024-07-15 16:12:31.195923] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:02.794 16:12:31 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:02.794 16:12:31 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:31:02.794 16:12:31 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:02.794 16:12:31 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:31:02.794 16:12:31 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:02.794 16:12:31 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:31:02.794 16:12:31 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:02.794 16:12:31 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:31:02.794 16:12:31 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:02.794 16:12:31 nvmf_abort_qd_sizes.spdk_target_abort -- 
target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:31:02.794 16:12:31 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:02.794 16:12:31 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:31:02.794 [2024-07-15 16:12:31.224852] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:02.794 16:12:31 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:02.794 16:12:31 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:31:02.794 16:12:31 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:31:02.794 16:12:31 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:31:02.794 16:12:31 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:31:02.794 16:12:31 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:31:02.794 16:12:31 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:31:02.794 16:12:31 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:31:02.794 16:12:31 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:31:02.794 16:12:31 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:31:02.794 16:12:31 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:31:02.794 16:12:31 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:31:02.794 16:12:31 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:31:02.794 16:12:31 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:31:02.794 16:12:31 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:31:02.794 16:12:31 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:31:02.794 16:12:31 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:31:02.794 16:12:31 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:31:02.794 16:12:31 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:31:02.794 16:12:31 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:31:02.794 16:12:31 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:31:02.795 16:12:31 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:31:02.795 EAL: No free 2048 kB hugepages 
reported on node 1 00:31:06.083 Initializing NVMe Controllers 00:31:06.084 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:31:06.084 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:31:06.084 Initialization complete. Launching workers. 00:31:06.084 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 14632, failed: 0 00:31:06.084 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1364, failed to submit 13268 00:31:06.084 success 776, unsuccess 588, failed 0 00:31:06.084 16:12:34 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:31:06.084 16:12:34 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:31:06.084 EAL: No free 2048 kB hugepages reported on node 1 00:31:09.444 Initializing NVMe Controllers 00:31:09.444 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:31:09.444 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:31:09.444 Initialization complete. Launching workers. 00:31:09.444 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8687, failed: 0 00:31:09.444 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1236, failed to submit 7451 00:31:09.444 success 356, unsuccess 880, failed 0 00:31:09.444 16:12:37 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:31:09.444 16:12:37 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:31:09.444 EAL: No free 2048 kB hugepages reported on node 1 00:31:11.979 Initializing NVMe Controllers 00:31:11.979 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:31:11.979 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:31:11.979 Initialization complete. Launching workers. 
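For orientation at this point in the trace: the target these abort passes hammer was assembled purely over JSON-RPC, as traced above (the test additionally runs it inside the cvl_0_0_ns_spdk network namespace, hence the ip netns exec wrapper earlier). Collapsed into a standalone sequence, with only the workspace prefix shortened; in the per-pass summaries, "success"/"unsuccess" count, roughly, aborts that did vs. did not catch their target I/O still in flight:

# Reconstruction of the spdk_target_abort bring-up from the rpc_cmd trace above.
rpc=scripts/rpc.py   # shortened; the log uses the full workspace path
$rpc bdev_nvme_attach_controller -t pcie -a 0000:5e:00.0 -b spdk_target
$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420

# Each pass then varies only the abort queue depth (-q 4 / 24 / 64):
build/examples/abort -q 64 -w rw -M 50 -o 4096 \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'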
00:31:11.979 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 37744, failed: 0 00:31:11.979 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2851, failed to submit 34893 00:31:11.979 success 626, unsuccess 2225, failed 0 00:31:11.979 16:12:40 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:31:11.979 16:12:40 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:11.979 16:12:40 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:31:12.237 16:12:40 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:12.237 16:12:40 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:31:12.237 16:12:40 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:12.237 16:12:40 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:31:13.613 16:12:42 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:13.613 16:12:42 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 3956419 00:31:13.613 16:12:42 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@948 -- # '[' -z 3956419 ']' 00:31:13.613 16:12:42 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@952 -- # kill -0 3956419 00:31:13.613 16:12:42 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@953 -- # uname 00:31:13.613 16:12:42 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:31:13.613 16:12:42 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3956419 00:31:13.613 16:12:42 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:31:13.613 16:12:42 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:31:13.613 16:12:42 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3956419' 00:31:13.613 killing process with pid 3956419 00:31:13.613 16:12:42 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@967 -- # kill 3956419 00:31:13.613 16:12:42 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@972 -- # wait 3956419 00:31:13.613 00:31:13.613 real 0m14.069s 00:31:13.613 user 0m56.086s 00:31:13.613 sys 0m2.251s 00:31:13.613 16:12:42 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1124 -- # xtrace_disable 00:31:13.613 16:12:42 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:31:13.613 ************************************ 00:31:13.613 END TEST spdk_target_abort 00:31:13.613 ************************************ 00:31:13.613 16:12:42 nvmf_abort_qd_sizes -- common/autotest_common.sh@1142 -- # return 0 00:31:13.613 16:12:42 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:31:13.613 16:12:42 nvmf_abort_qd_sizes -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:31:13.613 16:12:42 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:13.613 16:12:42 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:31:13.613 
************************************ 00:31:13.613 START TEST kernel_target_abort 00:31:13.613 ************************************ 00:31:13.613 16:12:42 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1123 -- # kernel_target 00:31:13.613 16:12:42 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:31:13.613 16:12:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@741 -- # local ip 00:31:13.613 16:12:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:13.613 16:12:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:13.613 16:12:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:13.613 16:12:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:13.613 16:12:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:13.613 16:12:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:13.613 16:12:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:13.613 16:12:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:13.613 16:12:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:13.613 16:12:42 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:31:13.613 16:12:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:31:13.613 16:12:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:31:13.613 16:12:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:31:13.613 16:12:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:31:13.613 16:12:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:31:13.613 16:12:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@639 -- # local block nvme 00:31:13.613 16:12:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@641 -- # [[ ! 
-e /sys/module/nvmet ]] 00:31:13.613 16:12:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@642 -- # modprobe nvmet 00:31:13.613 16:12:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:31:13.613 16:12:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:31:16.146 Waiting for block devices as requested 00:31:16.146 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:31:16.146 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:31:16.146 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:31:16.146 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:31:16.146 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:31:16.405 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:31:16.405 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:31:16.405 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:31:16.405 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:31:16.664 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:31:16.664 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:31:16.664 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:31:16.664 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:31:16.923 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:31:16.923 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:31:16.923 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:31:17.183 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:31:17.183 16:12:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:31:17.183 16:12:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:31:17.183 16:12:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:31:17.183 16:12:45 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:31:17.183 16:12:45 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:31:17.183 16:12:45 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:31:17.183 16:12:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:31:17.183 16:12:45 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:31:17.183 16:12:45 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:31:17.183 No valid GPT data, bailing 00:31:17.183 16:12:46 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:31:17.183 16:12:46 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:31:17.183 16:12:46 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:31:17.183 16:12:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:31:17.183 16:12:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:31:17.183 16:12:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:31:17.183 16:12:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:31:17.183 16:12:46 
nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:31:17.183 16:12:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:31:17.183 16:12:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # echo 1 00:31:17.183 16:12:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:31:17.183 16:12:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # echo 1 00:31:17.183 16:12:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:31:17.183 16:12:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@672 -- # echo tcp 00:31:17.183 16:12:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # echo 4420 00:31:17.183 16:12:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@674 -- # echo ipv4 00:31:17.183 16:12:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:31:17.183 16:12:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -a 10.0.0.1 -t tcp -s 4420 00:31:17.443 00:31:17.443 Discovery Log Number of Records 2, Generation counter 2 00:31:17.443 =====Discovery Log Entry 0====== 00:31:17.443 trtype: tcp 00:31:17.443 adrfam: ipv4 00:31:17.443 subtype: current discovery subsystem 00:31:17.443 treq: not specified, sq flow control disable supported 00:31:17.443 portid: 1 00:31:17.443 trsvcid: 4420 00:31:17.443 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:31:17.443 traddr: 10.0.0.1 00:31:17.443 eflags: none 00:31:17.443 sectype: none 00:31:17.443 =====Discovery Log Entry 1====== 00:31:17.444 trtype: tcp 00:31:17.444 adrfam: ipv4 00:31:17.444 subtype: nvme subsystem 00:31:17.444 treq: not specified, sq flow control disable supported 00:31:17.444 portid: 1 00:31:17.444 trsvcid: 4420 00:31:17.444 subnqn: nqn.2016-06.io.spdk:testnqn 00:31:17.444 traddr: 10.0.0.1 00:31:17.444 eflags: none 00:31:17.444 sectype: none 00:31:17.444 16:12:46 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:31:17.444 16:12:46 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:31:17.444 16:12:46 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:31:17.444 16:12:46 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:31:17.444 16:12:46 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:31:17.444 16:12:46 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:31:17.444 16:12:46 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:31:17.444 16:12:46 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:31:17.444 16:12:46 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:31:17.444 16:12:46 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:31:17.444 16:12:46 
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:31:17.444 16:12:46 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:31:17.444 16:12:46 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:31:17.444 16:12:46 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:31:17.444 16:12:46 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:31:17.444 16:12:46 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:31:17.444 16:12:46 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:31:17.444 16:12:46 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:31:17.444 16:12:46 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:31:17.444 16:12:46 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:31:17.444 16:12:46 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:31:17.444 EAL: No free 2048 kB hugepages reported on node 1 00:31:20.732 Initializing NVMe Controllers 00:31:20.732 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:31:20.732 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:31:20.732 Initialization complete. Launching workers. 00:31:20.732 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 81507, failed: 0 00:31:20.732 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 81507, failed to submit 0 00:31:20.732 success 0, unsuccess 81507, failed 0 00:31:20.732 16:12:49 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:31:20.732 16:12:49 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:31:20.732 EAL: No free 2048 kB hugepages reported on node 1 00:31:24.023 Initializing NVMe Controllers 00:31:24.023 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:31:24.023 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:31:24.023 Initialization complete. Launching workers. 
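The in-kernel target taking these aborts was built with nothing but configfs writes, traced above as bare mkdir/echo/ln -s records. Reconstructed with the attribute paths spelled out (device path and address exactly as in this run; attribute names are taken from the kernel nvmet configfs ABI, and the attr_model name in particular is an assumption):

# Kernel nvmet target over TCP, as configure_kernel_target ran it above.
nvmet=/sys/kernel/config/nvmet
subsys=$nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
port=$nvmet/ports/1

modprobe nvmet                        # nvmet_tcp should follow once the tcp port goes live
mkdir "$subsys" "$subsys/namespaces/1" "$port"
echo SPDK-nqn.2016-06.io.spdk:testnqn > "$subsys/attr_model"   # attribute name assumed
echo 1            > "$subsys/attr_allow_any_host"
echo /dev/nvme0n1 > "$subsys/namespaces/1/device_path"
echo 1            > "$subsys/namespaces/1/enable"
echo 10.0.0.1     > "$port/addr_traddr"
echo tcp          > "$port/addr_trtype"
echo 4420         > "$port/addr_trsvcid"
echo ipv4         > "$port/addr_adrfam"
ln -s "$subsys" "$port/subsystems/"

nvme discover -t tcp -a 10.0.0.1 -s 4420   # should list testnqn, as in the discovery log above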
00:31:24.023 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 133567, failed: 0 00:31:24.023 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 33358, failed to submit 100209 00:31:24.023 success 0, unsuccess 33358, failed 0 00:31:24.023 16:12:52 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:31:24.023 16:12:52 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:31:24.023 EAL: No free 2048 kB hugepages reported on node 1 00:31:26.555 Initializing NVMe Controllers 00:31:26.555 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:31:26.555 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:31:26.555 Initialization complete. Launching workers. 00:31:26.555 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 129055, failed: 0 00:31:26.555 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 32294, failed to submit 96761 00:31:26.555 success 0, unsuccess 32294, failed 0 00:31:26.555 16:12:55 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:31:26.555 16:12:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:31:26.555 16:12:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # echo 0 00:31:26.555 16:12:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:31:26.555 16:12:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:31:26.556 16:12:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:31:26.556 16:12:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:31:26.556 16:12:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:31:26.556 16:12:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:31:26.556 16:12:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:31:29.090 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:31:29.090 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:31:29.090 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:31:29.090 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:31:29.090 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:31:29.090 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:31:29.090 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:31:29.090 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:31:29.090 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:31:29.090 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:31:29.090 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:31:29.090 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:31:29.090 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:31:29.090 0000:80:04.2 (8086 2021): ioatdma 
-> vfio-pci 00:31:29.090 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:31:29.090 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:31:29.657 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:31:29.917 00:31:29.917 real 0m16.093s 00:31:29.917 user 0m7.729s 00:31:29.917 sys 0m4.334s 00:31:29.917 16:12:58 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1124 -- # xtrace_disable 00:31:29.917 16:12:58 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:31:29.917 ************************************ 00:31:29.917 END TEST kernel_target_abort 00:31:29.917 ************************************ 00:31:29.917 16:12:58 nvmf_abort_qd_sizes -- common/autotest_common.sh@1142 -- # return 0 00:31:29.917 16:12:58 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:31:29.917 16:12:58 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:31:29.917 16:12:58 nvmf_abort_qd_sizes -- nvmf/common.sh@488 -- # nvmfcleanup 00:31:29.917 16:12:58 nvmf_abort_qd_sizes -- nvmf/common.sh@117 -- # sync 00:31:29.917 16:12:58 nvmf_abort_qd_sizes -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:31:29.917 16:12:58 nvmf_abort_qd_sizes -- nvmf/common.sh@120 -- # set +e 00:31:29.917 16:12:58 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # for i in {1..20} 00:31:29.917 16:12:58 nvmf_abort_qd_sizes -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:31:29.917 rmmod nvme_tcp 00:31:29.917 rmmod nvme_fabrics 00:31:29.917 rmmod nvme_keyring 00:31:29.917 16:12:58 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:31:29.917 16:12:58 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set -e 00:31:29.917 16:12:58 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # return 0 00:31:29.917 16:12:58 nvmf_abort_qd_sizes -- nvmf/common.sh@489 -- # '[' -n 3956419 ']' 00:31:29.917 16:12:58 nvmf_abort_qd_sizes -- nvmf/common.sh@490 -- # killprocess 3956419 00:31:29.917 16:12:58 nvmf_abort_qd_sizes -- common/autotest_common.sh@948 -- # '[' -z 3956419 ']' 00:31:29.917 16:12:58 nvmf_abort_qd_sizes -- common/autotest_common.sh@952 -- # kill -0 3956419 00:31:29.917 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (3956419) - No such process 00:31:29.917 16:12:58 nvmf_abort_qd_sizes -- common/autotest_common.sh@975 -- # echo 'Process with pid 3956419 is not found' 00:31:29.917 Process with pid 3956419 is not found 00:31:29.917 16:12:58 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:31:29.917 16:12:58 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:31:32.456 Waiting for block devices as requested 00:31:32.456 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:31:32.456 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:31:32.456 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:31:32.716 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:31:32.716 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:31:32.716 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:31:32.716 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:31:32.975 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:31:32.975 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:31:32.975 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:31:32.975 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:31:33.234 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:31:33.234 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:31:33.234 0000:80:04.3 (8086 2021): vfio-pci -> 
ioatdma 00:31:33.234 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:31:33.494 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:31:33.494 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:31:33.494 16:13:02 nvmf_abort_qd_sizes -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:31:33.494 16:13:02 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:31:33.494 16:13:02 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:31:33.494 16:13:02 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # remove_spdk_ns 00:31:33.494 16:13:02 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:33.494 16:13:02 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:31:33.495 16:13:02 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:36.072 16:13:04 nvmf_abort_qd_sizes -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:31:36.072 00:31:36.072 real 0m45.745s 00:31:36.072 user 1m7.659s 00:31:36.073 sys 0m14.274s 00:31:36.073 16:13:04 nvmf_abort_qd_sizes -- common/autotest_common.sh@1124 -- # xtrace_disable 00:31:36.073 16:13:04 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:31:36.073 ************************************ 00:31:36.073 END TEST nvmf_abort_qd_sizes 00:31:36.073 ************************************ 00:31:36.073 16:13:04 -- common/autotest_common.sh@1142 -- # return 0 00:31:36.073 16:13:04 -- spdk/autotest.sh@295 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:31:36.073 16:13:04 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:31:36.073 16:13:04 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:36.073 16:13:04 -- common/autotest_common.sh@10 -- # set +x 00:31:36.073 ************************************ 00:31:36.073 START TEST keyring_file 00:31:36.073 ************************************ 00:31:36.073 16:13:04 keyring_file -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:31:36.073 * Looking for test storage... 
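Stepping back to the teardown just traced: clean_kernel_target dismantles the configfs tree in strict reverse order before unloading the transport modules. Collapsed from the records above:

# Reverse of the configfs bring-up, as clean_kernel_target ran it above.
nvmet=/sys/kernel/config/nvmet
subsys=$nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
echo 0 > "$subsys/namespaces/1/enable"
rm -f "$nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn"
rmdir "$subsys/namespaces/1" "$nvmet/ports/1" "$subsys"
modprobe -r nvmet_tcp nvmet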
00:31:36.073 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:31:36.073 16:13:04 keyring_file -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:31:36.073 16:13:04 keyring_file -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:36.073 16:13:04 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:31:36.073 16:13:04 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:36.073 16:13:04 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:36.073 16:13:04 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:36.073 16:13:04 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:36.073 16:13:04 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:36.073 16:13:04 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:36.073 16:13:04 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:36.073 16:13:04 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:36.073 16:13:04 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:36.073 16:13:04 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:36.073 16:13:04 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:31:36.073 16:13:04 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:31:36.073 16:13:04 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:36.073 16:13:04 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:36.073 16:13:04 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:36.073 16:13:04 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:36.073 16:13:04 keyring_file -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:36.073 16:13:04 keyring_file -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:36.073 16:13:04 keyring_file -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:36.073 16:13:04 keyring_file -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:36.073 16:13:04 keyring_file -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:36.073 16:13:04 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:36.073 16:13:04 keyring_file -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:36.073 16:13:04 keyring_file -- paths/export.sh@5 -- # export PATH 00:31:36.073 16:13:04 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:36.073 16:13:04 keyring_file -- nvmf/common.sh@47 -- # : 0 00:31:36.073 16:13:04 keyring_file -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:31:36.073 16:13:04 keyring_file -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:31:36.073 16:13:04 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:36.073 16:13:04 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:36.073 16:13:04 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:36.073 16:13:04 keyring_file -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:31:36.073 16:13:04 keyring_file -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:31:36.073 16:13:04 keyring_file -- nvmf/common.sh@51 -- # have_pci_nics=0 00:31:36.073 16:13:04 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:31:36.073 16:13:04 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:31:36.073 16:13:04 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:31:36.073 16:13:04 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:31:36.073 16:13:04 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:31:36.073 16:13:04 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:31:36.073 16:13:04 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:31:36.073 16:13:04 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:31:36.073 16:13:04 keyring_file -- keyring/common.sh@17 -- # name=key0 00:31:36.073 16:13:04 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:31:36.073 16:13:04 keyring_file -- keyring/common.sh@17 -- # digest=0 00:31:36.073 16:13:04 keyring_file -- keyring/common.sh@18 -- # mktemp 00:31:36.073 16:13:04 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.b8G55aTuIP 00:31:36.073 16:13:04 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:31:36.073 16:13:04 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:31:36.073 16:13:04 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:31:36.073 16:13:04 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:31:36.073 16:13:04 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:31:36.073 16:13:04 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:31:36.073 16:13:04 keyring_file -- nvmf/common.sh@705 -- # python - 00:31:36.073 16:13:04 keyring_file -- 
keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.b8G55aTuIP 00:31:36.073 16:13:04 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.b8G55aTuIP 00:31:36.073 16:13:04 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.b8G55aTuIP 00:31:36.073 16:13:04 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:31:36.073 16:13:04 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:31:36.073 16:13:04 keyring_file -- keyring/common.sh@17 -- # name=key1 00:31:36.073 16:13:04 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:31:36.073 16:13:04 keyring_file -- keyring/common.sh@17 -- # digest=0 00:31:36.073 16:13:04 keyring_file -- keyring/common.sh@18 -- # mktemp 00:31:36.073 16:13:04 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.z4Yg06nChE 00:31:36.073 16:13:04 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:31:36.073 16:13:04 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:31:36.073 16:13:04 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:31:36.073 16:13:04 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:31:36.073 16:13:04 keyring_file -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:31:36.073 16:13:04 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:31:36.073 16:13:04 keyring_file -- nvmf/common.sh@705 -- # python - 00:31:36.073 16:13:04 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.z4Yg06nChE 00:31:36.073 16:13:04 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.z4Yg06nChE 00:31:36.073 16:13:04 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.z4Yg06nChE 00:31:36.073 16:13:04 keyring_file -- keyring/file.sh@30 -- # tgtpid=3965013 00:31:36.073 16:13:04 keyring_file -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:31:36.073 16:13:04 keyring_file -- keyring/file.sh@32 -- # waitforlisten 3965013 00:31:36.073 16:13:04 keyring_file -- common/autotest_common.sh@829 -- # '[' -z 3965013 ']' 00:31:36.073 16:13:04 keyring_file -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:36.073 16:13:04 keyring_file -- common/autotest_common.sh@834 -- # local max_retries=100 00:31:36.073 16:13:04 keyring_file -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:36.073 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:36.073 16:13:04 keyring_file -- common/autotest_common.sh@838 -- # xtrace_disable 00:31:36.073 16:13:04 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:31:36.073 [2024-07-15 16:13:04.789032] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 
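The two key files just staged above were produced by format_interchange_psk and locked to 0600 (that mode matters: keyring_file refuses anything looser, as the 0660 negative test at the end of this trace shows). A sketch of what the derivation amounts to, assuming the TP8006-style interchange layout — base64 of the raw key followed by its little-endian CRC-32, with "00" as the hash indicator for digest 0; check format_interchange_psk in the tree before relying on the exact encoding:

# Sketch of the key prep traced above; the interchange encoding is an assumption.
key=00112233445566778899aabbccddeeff
path=$(mktemp)                    # e.g. /tmp/tmp.b8G55aTuIP in this run
python3 - "$key" > "$path" <<'EOF'
import base64, sys, zlib
key = bytes.fromhex(sys.argv[1])
crc = zlib.crc32(key).to_bytes(4, "little")   # CRC-32 appended, little-endian (assumed)
print("NVMeTLSkey-1:00:%s:" % base64.b64encode(key + crc).decode())
EOF
chmod 0600 "$path"                # anything group/world-accessible is rejected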
00:31:36.073 [2024-07-15 16:13:04.789081] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3965013 ] 00:31:36.073 EAL: No free 2048 kB hugepages reported on node 1 00:31:36.073 [2024-07-15 16:13:04.843503] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:36.074 [2024-07-15 16:13:04.916397] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:31:37.011 16:13:05 keyring_file -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:31:37.011 16:13:05 keyring_file -- common/autotest_common.sh@862 -- # return 0 00:31:37.011 16:13:05 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:31:37.011 16:13:05 keyring_file -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:37.011 16:13:05 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:31:37.011 [2024-07-15 16:13:05.590841] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:37.011 null0 00:31:37.011 [2024-07-15 16:13:05.622893] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:31:37.011 [2024-07-15 16:13:05.623089] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:31:37.011 [2024-07-15 16:13:05.630901] tcp.c:3693:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:31:37.011 16:13:05 keyring_file -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:37.011 16:13:05 keyring_file -- keyring/file.sh@43 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:31:37.011 16:13:05 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:31:37.011 16:13:05 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:31:37.011 16:13:05 keyring_file -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:31:37.011 16:13:05 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:31:37.011 16:13:05 keyring_file -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:31:37.011 16:13:05 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:31:37.011 16:13:05 keyring_file -- common/autotest_common.sh@651 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:31:37.011 16:13:05 keyring_file -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:37.011 16:13:05 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:31:37.011 [2024-07-15 16:13:05.638924] nvmf_rpc.c: 788:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:31:37.011 request: 00:31:37.011 { 00:31:37.011 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:31:37.011 "secure_channel": false, 00:31:37.011 "listen_address": { 00:31:37.011 "trtype": "tcp", 00:31:37.011 "traddr": "127.0.0.1", 00:31:37.011 "trsvcid": "4420" 00:31:37.011 }, 00:31:37.011 "method": "nvmf_subsystem_add_listener", 00:31:37.011 "req_id": 1 00:31:37.011 } 00:31:37.011 Got JSON-RPC error response 00:31:37.011 response: 00:31:37.011 { 00:31:37.011 "code": -32602, 00:31:37.011 "message": "Invalid parameters" 00:31:37.011 } 00:31:37.011 16:13:05 keyring_file -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:31:37.011 16:13:05 keyring_file -- common/autotest_common.sh@651 -- # es=1 
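With the duplicate-listener negative check satisfied (es=1 above), the flow below hands both keys to a bdevperf instance over its private RPC socket and attaches with TLS. Collapsed from the trace, with only the rpc.py path shortened:

# Register both PSK files with bdevperf, then attach over TLS using key0.
rpc="scripts/rpc.py -s /var/tmp/bperf.sock"
$rpc keyring_file_add_key key0 /tmp/tmp.b8G55aTuIP
$rpc keyring_file_add_key key1 /tmp/tmp.z4Yg06nChE
$rpc bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0
# Attaching with key1 instead must fail: the target only knows key0's PSK for
# this host, so the TLS handshake collapses (the -5 Input/output error below).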
00:31:37.011 16:13:05 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:31:37.011 16:13:05 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:31:37.011 16:13:05 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:31:37.011 16:13:05 keyring_file -- keyring/file.sh@46 -- # bperfpid=3965181 00:31:37.011 16:13:05 keyring_file -- keyring/file.sh@48 -- # waitforlisten 3965181 /var/tmp/bperf.sock 00:31:37.011 16:13:05 keyring_file -- common/autotest_common.sh@829 -- # '[' -z 3965181 ']' 00:31:37.011 16:13:05 keyring_file -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:31:37.011 16:13:05 keyring_file -- common/autotest_common.sh@834 -- # local max_retries=100 00:31:37.011 16:13:05 keyring_file -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:31:37.011 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:31:37.011 16:13:05 keyring_file -- common/autotest_common.sh@838 -- # xtrace_disable 00:31:37.011 16:13:05 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:31:37.011 16:13:05 keyring_file -- keyring/file.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:31:37.011 [2024-07-15 16:13:05.689281] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 00:31:37.011 [2024-07-15 16:13:05.689322] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3965181 ] 00:31:37.011 EAL: No free 2048 kB hugepages reported on node 1 00:31:37.011 [2024-07-15 16:13:05.742443] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:37.011 [2024-07-15 16:13:05.821565] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:31:37.578 16:13:06 keyring_file -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:31:37.578 16:13:06 keyring_file -- common/autotest_common.sh@862 -- # return 0 00:31:37.578 16:13:06 keyring_file -- keyring/file.sh@49 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.b8G55aTuIP 00:31:37.578 16:13:06 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.b8G55aTuIP 00:31:37.837 16:13:06 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.z4Yg06nChE 00:31:37.837 16:13:06 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.z4Yg06nChE 00:31:38.096 16:13:06 keyring_file -- keyring/file.sh@51 -- # get_key key0 00:31:38.096 16:13:06 keyring_file -- keyring/file.sh@51 -- # jq -r .path 00:31:38.096 16:13:06 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:31:38.096 16:13:06 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:31:38.096 16:13:06 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:31:38.096 16:13:07 keyring_file -- keyring/file.sh@51 -- # [[ /tmp/tmp.b8G55aTuIP == \/\t\m\p\/\t\m\p\.\b\8\G\5\5\a\T\u\I\P ]] 00:31:38.096 16:13:07 keyring_file -- 
keyring/file.sh@52 -- # get_key key1 00:31:38.096 16:13:07 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:31:38.096 16:13:07 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:31:38.096 16:13:07 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:31:38.096 16:13:07 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:31:38.354 16:13:07 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.z4Yg06nChE == \/\t\m\p\/\t\m\p\.\z\4\Y\g\0\6\n\C\h\E ]] 00:31:38.354 16:13:07 keyring_file -- keyring/file.sh@53 -- # get_refcnt key0 00:31:38.354 16:13:07 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:31:38.354 16:13:07 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:31:38.354 16:13:07 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:31:38.354 16:13:07 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:31:38.354 16:13:07 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:31:38.612 16:13:07 keyring_file -- keyring/file.sh@53 -- # (( 1 == 1 )) 00:31:38.612 16:13:07 keyring_file -- keyring/file.sh@54 -- # get_refcnt key1 00:31:38.612 16:13:07 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:31:38.612 16:13:07 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:31:38.612 16:13:07 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:31:38.612 16:13:07 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:31:38.612 16:13:07 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:31:38.612 16:13:07 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:31:38.612 16:13:07 keyring_file -- keyring/file.sh@57 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:31:38.612 16:13:07 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:31:38.871 [2024-07-15 16:13:07.694664] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:31:38.871 nvme0n1 00:31:38.871 16:13:07 keyring_file -- keyring/file.sh@59 -- # get_refcnt key0 00:31:38.871 16:13:07 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:31:38.871 16:13:07 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:31:38.871 16:13:07 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:31:38.871 16:13:07 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:31:38.871 16:13:07 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:31:39.129 16:13:07 keyring_file -- keyring/file.sh@59 -- # (( 2 == 2 )) 00:31:39.129 16:13:07 keyring_file -- keyring/file.sh@60 -- # get_refcnt key1 00:31:39.129 16:13:07 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:31:39.129 16:13:07 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:31:39.129 16:13:07 keyring_file -- 
keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:31:39.129 16:13:07 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:31:39.129 16:13:07 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:31:39.388 16:13:08 keyring_file -- keyring/file.sh@60 -- # (( 1 == 1 )) 00:31:39.388 16:13:08 keyring_file -- keyring/file.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:31:39.388 Running I/O for 1 seconds... 00:31:40.324 00:31:40.324 Latency(us) 00:31:40.324 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:40.324 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:31:40.324 nvme0n1 : 1.01 14662.72 57.28 0.00 0.00 8702.75 4530.53 14930.81 00:31:40.324 =================================================================================================================== 00:31:40.324 Total : 14662.72 57.28 0.00 0.00 8702.75 4530.53 14930.81 00:31:40.324 0 00:31:40.324 16:13:09 keyring_file -- keyring/file.sh@64 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:31:40.324 16:13:09 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:31:40.583 16:13:09 keyring_file -- keyring/file.sh@65 -- # get_refcnt key0 00:31:40.583 16:13:09 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:31:40.583 16:13:09 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:31:40.583 16:13:09 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:31:40.583 16:13:09 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:31:40.583 16:13:09 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:31:40.842 16:13:09 keyring_file -- keyring/file.sh@65 -- # (( 1 == 1 )) 00:31:40.842 16:13:09 keyring_file -- keyring/file.sh@66 -- # get_refcnt key1 00:31:40.842 16:13:09 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:31:40.842 16:13:09 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:31:40.842 16:13:09 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:31:40.842 16:13:09 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:31:40.842 16:13:09 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:31:41.102 16:13:09 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:31:41.102 16:13:09 keyring_file -- keyring/file.sh@69 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:31:41.102 16:13:09 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:31:41.102 16:13:09 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:31:41.102 16:13:09 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:31:41.102 16:13:09 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:31:41.102 16:13:09 keyring_file -- 
00:31:41.102 16:13:09 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:31:41.102 16:13:09 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1
00:31:41.102 16:13:09 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1
00:31:41.102 [2024-07-15 16:13:09.947425] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected
00:31:41.102 [2024-07-15 16:13:09.948254] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x154e770 (107): Transport endpoint is not connected
00:31:41.102 [2024-07-15 16:13:09.949247] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x154e770 (9): Bad file descriptor
00:31:41.102 [2024-07-15 16:13:09.950253] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state
00:31:41.102 [2024-07-15 16:13:09.950262] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1
00:31:41.102 [2024-07-15 16:13:09.950269] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state.
00:31:41.102 request:
00:31:41.102 {
00:31:41.102 "name": "nvme0",
00:31:41.102 "trtype": "tcp",
00:31:41.102 "traddr": "127.0.0.1",
00:31:41.102 "adrfam": "ipv4",
00:31:41.102 "trsvcid": "4420",
00:31:41.102 "subnqn": "nqn.2016-06.io.spdk:cnode0",
00:31:41.102 "hostnqn": "nqn.2016-06.io.spdk:host0",
00:31:41.102 "prchk_reftag": false,
00:31:41.102 "prchk_guard": false,
00:31:41.102 "hdgst": false,
00:31:41.102 "ddgst": false,
00:31:41.102 "psk": "key1",
00:31:41.102 "method": "bdev_nvme_attach_controller",
00:31:41.102 "req_id": 1
00:31:41.102 }
00:31:41.102 Got JSON-RPC error response
00:31:41.102 response:
00:31:41.102 {
00:31:41.102 "code": -5,
00:31:41.102 "message": "Input/output error"
00:31:41.102 }
00:31:41.102 16:13:09 keyring_file -- common/autotest_common.sh@651 -- # es=1
00:31:41.102 16:13:09 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 ))
00:31:41.102 16:13:09 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]]
00:31:41.102 16:13:09 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 ))
00:31:41.102 16:13:09 keyring_file -- keyring/file.sh@71 -- # get_refcnt key0
00:31:41.102 16:13:09 keyring_file -- keyring/common.sh@12 -- # get_key key0
00:31:41.102 16:13:09 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt
00:31:41.102 16:13:09 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys
00:31:41.102 16:13:09 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")'
00:31:41.102 16:13:09 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys
00:31:41.361 16:13:10 keyring_file -- keyring/file.sh@71 -- # (( 1 == 1 ))
00:31:41.361 16:13:10 keyring_file -- keyring/file.sh@72 -- # get_refcnt key1
00:31:41.361 16:13:10 keyring_file -- keyring/common.sh@12 -- # get_key key1
00:31:41.361 16:13:10 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt
00:31:41.361 16:13:10 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys
00:31:41.361 16:13:10 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys
00:31:41.361 16:13:10 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")'
00:31:41.620 16:13:10 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 ))
00:31:41.620 16:13:10 keyring_file -- keyring/file.sh@75 -- # bperf_cmd keyring_file_remove_key key0
00:31:41.620 16:13:10 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0
00:31:41.620 16:13:10 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key1
00:31:41.620 16:13:10 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1
00:31:41.879 16:13:10 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_get_keys
00:31:41.879 16:13:10 keyring_file -- keyring/file.sh@77 -- # jq length
00:31:41.879 16:13:10 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys
00:31:42.138 16:13:10 keyring_file -- keyring/file.sh@77 -- # (( 0 == 0 ))
00:31:42.138 16:13:10 keyring_file -- keyring/file.sh@80 -- # chmod 0660 /tmp/tmp.b8G55aTuIP
00:31:42.139 16:13:10 keyring_file -- keyring/file.sh@81 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.b8G55aTuIP
00:31:42.139 16:13:10 keyring_file -- common/autotest_common.sh@648 -- # local es=0
00:31:42.139 16:13:10 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.b8G55aTuIP
00:31:42.139 16:13:10 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd
00:31:42.139 16:13:10 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:31:42.139 16:13:10 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd
00:31:42.139 16:13:10 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:31:42.139 16:13:10 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.b8G55aTuIP
00:31:42.139 16:13:10 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.b8G55aTuIP
00:31:42.139 [2024-07-15 16:13:11.014347] keyring.c: 34:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.b8G55aTuIP': 0100660
00:31:42.139 [2024-07-15 16:13:11.014371] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring
00:31:42.139 request:
00:31:42.139 {
00:31:42.139 "name": "key0",
00:31:42.139 "path": "/tmp/tmp.b8G55aTuIP",
00:31:42.139 "method": "keyring_file_add_key",
00:31:42.139 "req_id": 1
00:31:42.139 }
00:31:42.139 Got JSON-RPC error response
00:31:42.139 response:
00:31:42.139 {
00:31:42.139 "code": -1,
00:31:42.139 "message": "Operation not permitted"
00:31:42.139 }
00:31:42.139 16:13:11 keyring_file -- common/autotest_common.sh@651 -- # es=1
00:31:42.139 16:13:11 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 ))
00:31:42.139 16:13:11 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]]
00:31:42.139 16:13:11 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 ))
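The keyring/file.sh@69 and @81 steps are negative tests: NOT from autotest_common.sh wraps a command that is expected to fail, captures its exit status into es, and succeeds only when the command failed — the es=1 and (( !es == 0 )) lines above are exactly that bookkeeping. Stripped of the valid_exec_arg argument checking, the helper amounts to this sketch:

    NOT() {
        # run the wrapped command; a non-zero exit status is the *expected* outcome
        local es=0
        "$@" || es=$?
        (( es != 0 ))
    }

So the "Operation not permitted" JSON-RPC response above does not fail the test run; the 0660 mode (logged as 0100660 by keyring_file_check_path) is precisely what keyring_file is required to reject.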
00:31:42.139 16:13:11 keyring_file -- keyring/file.sh@84 -- # chmod 0600 /tmp/tmp.b8G55aTuIP
00:31:42.139 16:13:11 keyring_file -- keyring/file.sh@85 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.b8G55aTuIP
00:31:42.139 16:13:11 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.b8G55aTuIP
00:31:42.398 16:13:11 keyring_file -- keyring/file.sh@86 -- # rm -f /tmp/tmp.b8G55aTuIP
00:31:42.398 16:13:11 keyring_file -- keyring/file.sh@88 -- # get_refcnt key0
00:31:42.398 16:13:11 keyring_file -- keyring/common.sh@12 -- # get_key key0
00:31:42.398 16:13:11 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt
00:31:42.398 16:13:11 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys
00:31:42.398 16:13:11 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")'
00:31:42.398 16:13:11 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys
00:31:42.658 16:13:11 keyring_file -- keyring/file.sh@88 -- # (( 1 == 1 ))
00:31:42.658 16:13:11 keyring_file -- keyring/file.sh@90 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0
00:31:42.658 16:13:11 keyring_file -- common/autotest_common.sh@648 -- # local es=0
00:31:42.658 16:13:11 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0
00:31:42.658 16:13:11 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd
00:31:42.658 16:13:11 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:31:42.658 16:13:11 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd
00:31:42.658 16:13:11 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:31:42.658 16:13:11 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0
00:31:42.658 16:13:11 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0
00:31:42.658 [2024-07-15 16:13:11.523704] keyring.c: 29:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.b8G55aTuIP': No such file or directory
00:31:42.658 [2024-07-15 16:13:11.523720] nvme_tcp.c:2582:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory
00:31:42.658 [2024-07-15 16:13:11.523739] nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1
00:31:42.658 [2024-07-15 16:13:11.523745] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed
00:31:42.658 [2024-07-15 16:13:11.523751] bdev_nvme.c:6268:bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1)
00:31:42.658 request:
00:31:42.658 {
00:31:42.658 "name": "nvme0",
00:31:42.658 "trtype": "tcp",
00:31:42.658 "traddr": "127.0.0.1",
00:31:42.658 "adrfam": "ipv4",
00:31:42.658 "trsvcid": "4420",
00:31:42.658 "subnqn": "nqn.2016-06.io.spdk:cnode0",
00:31:42.658 "hostnqn": "nqn.2016-06.io.spdk:host0",
00:31:42.658 "prchk_reftag": false,
00:31:42.658 "prchk_guard": false,
00:31:42.658 "hdgst": false,
00:31:42.658 "ddgst": false,
00:31:42.658 "psk": "key0",
00:31:42.658 "method": "bdev_nvme_attach_controller",
00:31:42.658 "req_id": 1
00:31:42.658 }
00:31:42.658 Got JSON-RPC error response
00:31:42.658 response:
00:31:42.658 {
00:31:42.658 "code": -19,
00:31:42.658 "message": "No such device"
00:31:42.658 }
00:31:42.658 16:13:11 keyring_file -- common/autotest_common.sh@651 -- # es=1
00:31:42.658 16:13:11 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 ))
00:31:42.658 16:13:11 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]]
00:31:42.658 16:13:11 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 ))
00:31:42.658 16:13:11 keyring_file -- keyring/file.sh@92 -- # bperf_cmd keyring_file_remove_key key0
00:31:42.658 16:13:11 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0
00:31:42.917 16:13:11 keyring_file -- keyring/file.sh@95 -- # prep_key key0 00112233445566778899aabbccddeeff 0
00:31:42.917 16:13:11 keyring_file -- keyring/common.sh@15 -- # local name key digest path
00:31:42.917 16:13:11 keyring_file -- keyring/common.sh@17 -- # name=key0
00:31:42.917 16:13:11 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff
00:31:42.917 16:13:11 keyring_file -- keyring/common.sh@17 -- # digest=0
00:31:42.917 16:13:11 keyring_file -- keyring/common.sh@18 -- # mktemp
00:31:42.917 16:13:11 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.BbacyUklRF
00:31:42.917 16:13:11 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0
00:31:42.917 16:13:11 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0
00:31:42.917 16:13:11 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest
00:31:42.917 16:13:11 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1
00:31:42.917 16:13:11 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff
00:31:42.917 16:13:11 keyring_file -- nvmf/common.sh@704 -- # digest=0
00:31:42.917 16:13:11 keyring_file -- nvmf/common.sh@705 -- # python -
00:31:42.917 16:13:11 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.BbacyUklRF
00:31:42.918 16:13:11 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.BbacyUklRF
00:31:42.918 16:13:11 keyring_file -- keyring/file.sh@95 -- # key0path=/tmp/tmp.BbacyUklRF
00:31:42.918 16:13:11 keyring_file -- keyring/file.sh@96 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.BbacyUklRF
00:31:42.918 16:13:11 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.BbacyUklRF
00:31:43.176 16:13:11 keyring_file -- keyring/file.sh@97 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0
00:31:43.176 16:13:11 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0
00:31:43.435 nvme0n1
00:31:43.435
16:13:12 keyring_file -- keyring/file.sh@99 -- # get_refcnt key0 00:31:43.435 16:13:12 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:31:43.435 16:13:12 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:31:43.435 16:13:12 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:31:43.435 16:13:12 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:31:43.435 16:13:12 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:31:43.435 16:13:12 keyring_file -- keyring/file.sh@99 -- # (( 2 == 2 )) 00:31:43.435 16:13:12 keyring_file -- keyring/file.sh@100 -- # bperf_cmd keyring_file_remove_key key0 00:31:43.435 16:13:12 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:31:43.694 16:13:12 keyring_file -- keyring/file.sh@101 -- # get_key key0 00:31:43.694 16:13:12 keyring_file -- keyring/file.sh@101 -- # jq -r .removed 00:31:43.694 16:13:12 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:31:43.694 16:13:12 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:31:43.694 16:13:12 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:31:43.953 16:13:12 keyring_file -- keyring/file.sh@101 -- # [[ true == \t\r\u\e ]] 00:31:43.953 16:13:12 keyring_file -- keyring/file.sh@102 -- # get_refcnt key0 00:31:43.953 16:13:12 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:31:43.953 16:13:12 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:31:43.953 16:13:12 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:31:43.953 16:13:12 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:31:43.953 16:13:12 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:31:44.211 16:13:12 keyring_file -- keyring/file.sh@102 -- # (( 1 == 1 )) 00:31:44.211 16:13:12 keyring_file -- keyring/file.sh@103 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:31:44.211 16:13:12 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:31:44.211 16:13:13 keyring_file -- keyring/file.sh@104 -- # bperf_cmd keyring_get_keys 00:31:44.211 16:13:13 keyring_file -- keyring/file.sh@104 -- # jq length 00:31:44.211 16:13:13 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:31:44.469 16:13:13 keyring_file -- keyring/file.sh@104 -- # (( 0 == 0 )) 00:31:44.469 16:13:13 keyring_file -- keyring/file.sh@107 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.BbacyUklRF 00:31:44.469 16:13:13 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.BbacyUklRF 00:31:44.727 16:13:13 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.z4Yg06nChE 00:31:44.727 16:13:13 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.z4Yg06nChE 00:31:44.727 16:13:13 
keyring_file -- keyring/file.sh@109 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:31:44.727 16:13:13 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:31:44.985 nvme0n1 00:31:44.985 16:13:13 keyring_file -- keyring/file.sh@112 -- # bperf_cmd save_config 00:31:44.985 16:13:13 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:31:45.244 16:13:14 keyring_file -- keyring/file.sh@112 -- # config='{ 00:31:45.244 "subsystems": [ 00:31:45.244 { 00:31:45.244 "subsystem": "keyring", 00:31:45.244 "config": [ 00:31:45.244 { 00:31:45.244 "method": "keyring_file_add_key", 00:31:45.244 "params": { 00:31:45.244 "name": "key0", 00:31:45.244 "path": "/tmp/tmp.BbacyUklRF" 00:31:45.244 } 00:31:45.244 }, 00:31:45.244 { 00:31:45.244 "method": "keyring_file_add_key", 00:31:45.244 "params": { 00:31:45.244 "name": "key1", 00:31:45.244 "path": "/tmp/tmp.z4Yg06nChE" 00:31:45.244 } 00:31:45.244 } 00:31:45.244 ] 00:31:45.244 }, 00:31:45.244 { 00:31:45.244 "subsystem": "iobuf", 00:31:45.244 "config": [ 00:31:45.244 { 00:31:45.244 "method": "iobuf_set_options", 00:31:45.244 "params": { 00:31:45.244 "small_pool_count": 8192, 00:31:45.244 "large_pool_count": 1024, 00:31:45.244 "small_bufsize": 8192, 00:31:45.244 "large_bufsize": 135168 00:31:45.244 } 00:31:45.244 } 00:31:45.244 ] 00:31:45.244 }, 00:31:45.244 { 00:31:45.244 "subsystem": "sock", 00:31:45.244 "config": [ 00:31:45.244 { 00:31:45.244 "method": "sock_set_default_impl", 00:31:45.244 "params": { 00:31:45.244 "impl_name": "posix" 00:31:45.244 } 00:31:45.244 }, 00:31:45.244 { 00:31:45.244 "method": "sock_impl_set_options", 00:31:45.244 "params": { 00:31:45.244 "impl_name": "ssl", 00:31:45.244 "recv_buf_size": 4096, 00:31:45.244 "send_buf_size": 4096, 00:31:45.244 "enable_recv_pipe": true, 00:31:45.244 "enable_quickack": false, 00:31:45.244 "enable_placement_id": 0, 00:31:45.244 "enable_zerocopy_send_server": true, 00:31:45.244 "enable_zerocopy_send_client": false, 00:31:45.244 "zerocopy_threshold": 0, 00:31:45.244 "tls_version": 0, 00:31:45.244 "enable_ktls": false 00:31:45.244 } 00:31:45.244 }, 00:31:45.244 { 00:31:45.244 "method": "sock_impl_set_options", 00:31:45.244 "params": { 00:31:45.244 "impl_name": "posix", 00:31:45.244 "recv_buf_size": 2097152, 00:31:45.244 "send_buf_size": 2097152, 00:31:45.244 "enable_recv_pipe": true, 00:31:45.244 "enable_quickack": false, 00:31:45.244 "enable_placement_id": 0, 00:31:45.244 "enable_zerocopy_send_server": true, 00:31:45.244 "enable_zerocopy_send_client": false, 00:31:45.244 "zerocopy_threshold": 0, 00:31:45.244 "tls_version": 0, 00:31:45.244 "enable_ktls": false 00:31:45.245 } 00:31:45.245 } 00:31:45.245 ] 00:31:45.245 }, 00:31:45.245 { 00:31:45.245 "subsystem": "vmd", 00:31:45.245 "config": [] 00:31:45.245 }, 00:31:45.245 { 00:31:45.245 "subsystem": "accel", 00:31:45.245 "config": [ 00:31:45.245 { 00:31:45.245 "method": "accel_set_options", 00:31:45.245 "params": { 00:31:45.245 "small_cache_size": 128, 00:31:45.245 "large_cache_size": 16, 00:31:45.245 "task_count": 2048, 00:31:45.245 "sequence_count": 2048, 00:31:45.245 "buf_count": 2048 00:31:45.245 } 00:31:45.245 } 00:31:45.245 ] 00:31:45.245 
}, 00:31:45.245 { 00:31:45.245 "subsystem": "bdev", 00:31:45.245 "config": [ 00:31:45.245 { 00:31:45.245 "method": "bdev_set_options", 00:31:45.245 "params": { 00:31:45.245 "bdev_io_pool_size": 65535, 00:31:45.245 "bdev_io_cache_size": 256, 00:31:45.245 "bdev_auto_examine": true, 00:31:45.245 "iobuf_small_cache_size": 128, 00:31:45.245 "iobuf_large_cache_size": 16 00:31:45.245 } 00:31:45.245 }, 00:31:45.245 { 00:31:45.245 "method": "bdev_raid_set_options", 00:31:45.245 "params": { 00:31:45.245 "process_window_size_kb": 1024 00:31:45.245 } 00:31:45.245 }, 00:31:45.245 { 00:31:45.245 "method": "bdev_iscsi_set_options", 00:31:45.245 "params": { 00:31:45.245 "timeout_sec": 30 00:31:45.245 } 00:31:45.245 }, 00:31:45.245 { 00:31:45.245 "method": "bdev_nvme_set_options", 00:31:45.245 "params": { 00:31:45.245 "action_on_timeout": "none", 00:31:45.245 "timeout_us": 0, 00:31:45.245 "timeout_admin_us": 0, 00:31:45.245 "keep_alive_timeout_ms": 10000, 00:31:45.245 "arbitration_burst": 0, 00:31:45.245 "low_priority_weight": 0, 00:31:45.245 "medium_priority_weight": 0, 00:31:45.245 "high_priority_weight": 0, 00:31:45.245 "nvme_adminq_poll_period_us": 10000, 00:31:45.245 "nvme_ioq_poll_period_us": 0, 00:31:45.245 "io_queue_requests": 512, 00:31:45.245 "delay_cmd_submit": true, 00:31:45.245 "transport_retry_count": 4, 00:31:45.245 "bdev_retry_count": 3, 00:31:45.245 "transport_ack_timeout": 0, 00:31:45.245 "ctrlr_loss_timeout_sec": 0, 00:31:45.245 "reconnect_delay_sec": 0, 00:31:45.245 "fast_io_fail_timeout_sec": 0, 00:31:45.245 "disable_auto_failback": false, 00:31:45.245 "generate_uuids": false, 00:31:45.245 "transport_tos": 0, 00:31:45.245 "nvme_error_stat": false, 00:31:45.245 "rdma_srq_size": 0, 00:31:45.245 "io_path_stat": false, 00:31:45.245 "allow_accel_sequence": false, 00:31:45.245 "rdma_max_cq_size": 0, 00:31:45.245 "rdma_cm_event_timeout_ms": 0, 00:31:45.245 "dhchap_digests": [ 00:31:45.245 "sha256", 00:31:45.245 "sha384", 00:31:45.245 "sha512" 00:31:45.245 ], 00:31:45.245 "dhchap_dhgroups": [ 00:31:45.245 "null", 00:31:45.245 "ffdhe2048", 00:31:45.245 "ffdhe3072", 00:31:45.245 "ffdhe4096", 00:31:45.245 "ffdhe6144", 00:31:45.245 "ffdhe8192" 00:31:45.245 ] 00:31:45.245 } 00:31:45.245 }, 00:31:45.245 { 00:31:45.245 "method": "bdev_nvme_attach_controller", 00:31:45.245 "params": { 00:31:45.245 "name": "nvme0", 00:31:45.245 "trtype": "TCP", 00:31:45.245 "adrfam": "IPv4", 00:31:45.245 "traddr": "127.0.0.1", 00:31:45.245 "trsvcid": "4420", 00:31:45.245 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:45.245 "prchk_reftag": false, 00:31:45.245 "prchk_guard": false, 00:31:45.245 "ctrlr_loss_timeout_sec": 0, 00:31:45.245 "reconnect_delay_sec": 0, 00:31:45.245 "fast_io_fail_timeout_sec": 0, 00:31:45.245 "psk": "key0", 00:31:45.245 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:31:45.245 "hdgst": false, 00:31:45.245 "ddgst": false 00:31:45.245 } 00:31:45.245 }, 00:31:45.245 { 00:31:45.245 "method": "bdev_nvme_set_hotplug", 00:31:45.245 "params": { 00:31:45.245 "period_us": 100000, 00:31:45.245 "enable": false 00:31:45.245 } 00:31:45.245 }, 00:31:45.245 { 00:31:45.245 "method": "bdev_wait_for_examine" 00:31:45.245 } 00:31:45.245 ] 00:31:45.245 }, 00:31:45.245 { 00:31:45.245 "subsystem": "nbd", 00:31:45.245 "config": [] 00:31:45.245 } 00:31:45.245 ] 00:31:45.245 }' 00:31:45.245 16:13:14 keyring_file -- keyring/file.sh@114 -- # killprocess 3965181 00:31:45.245 16:13:14 keyring_file -- common/autotest_common.sh@948 -- # '[' -z 3965181 ']' 00:31:45.245 16:13:14 keyring_file -- common/autotest_common.sh@952 -- # kill 
-0 3965181 00:31:45.245 16:13:14 keyring_file -- common/autotest_common.sh@953 -- # uname 00:31:45.245 16:13:14 keyring_file -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:31:45.245 16:13:14 keyring_file -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3965181 00:31:45.245 16:13:14 keyring_file -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:31:45.245 16:13:14 keyring_file -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:31:45.245 16:13:14 keyring_file -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3965181' 00:31:45.245 killing process with pid 3965181 00:31:45.245 16:13:14 keyring_file -- common/autotest_common.sh@967 -- # kill 3965181 00:31:45.245 Received shutdown signal, test time was about 1.000000 seconds 00:31:45.245 00:31:45.245 Latency(us) 00:31:45.245 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:45.245 =================================================================================================================== 00:31:45.245 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:31:45.245 16:13:14 keyring_file -- common/autotest_common.sh@972 -- # wait 3965181 00:31:45.504 16:13:14 keyring_file -- keyring/file.sh@117 -- # bperfpid=3966702 00:31:45.504 16:13:14 keyring_file -- keyring/file.sh@119 -- # waitforlisten 3966702 /var/tmp/bperf.sock 00:31:45.504 16:13:14 keyring_file -- common/autotest_common.sh@829 -- # '[' -z 3966702 ']' 00:31:45.504 16:13:14 keyring_file -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:31:45.504 16:13:14 keyring_file -- common/autotest_common.sh@834 -- # local max_retries=100 00:31:45.504 16:13:14 keyring_file -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:31:45.504 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
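keyring/file.sh@112 through @119 restart bdevperf with the configuration captured by save_config: the JSON snapshot (including the two keyring_file_add_key calls and the nvme0 attach) is handed to the new process as -c /dev/fd/63, i.e. through bash process substitution, so keys and controller are set up during startup rather than over RPC. Roughly, with $rootdir again standing in for the spdk checkout:

    config=$(bperf_cmd save_config)          # JSON dump of the running app's configuration
    killprocess "$bperfpid"
    "$rootdir/build/examples/bdevperf" -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 \
        -r /var/tmp/bperf.sock -z -c <(echo "$config") &
    bperfpid=$!
    waitforlisten "$bperfpid" /var/tmp/bperf.sock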
00:31:45.504 16:13:14 keyring_file -- common/autotest_common.sh@838 -- # xtrace_disable 00:31:45.504 16:13:14 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:31:45.504 16:13:14 keyring_file -- keyring/file.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:31:45.504 16:13:14 keyring_file -- keyring/file.sh@115 -- # echo '{ 00:31:45.504 "subsystems": [ 00:31:45.504 { 00:31:45.504 "subsystem": "keyring", 00:31:45.504 "config": [ 00:31:45.504 { 00:31:45.504 "method": "keyring_file_add_key", 00:31:45.504 "params": { 00:31:45.504 "name": "key0", 00:31:45.504 "path": "/tmp/tmp.BbacyUklRF" 00:31:45.504 } 00:31:45.504 }, 00:31:45.504 { 00:31:45.504 "method": "keyring_file_add_key", 00:31:45.504 "params": { 00:31:45.504 "name": "key1", 00:31:45.504 "path": "/tmp/tmp.z4Yg06nChE" 00:31:45.504 } 00:31:45.504 } 00:31:45.504 ] 00:31:45.504 }, 00:31:45.504 { 00:31:45.504 "subsystem": "iobuf", 00:31:45.504 "config": [ 00:31:45.504 { 00:31:45.504 "method": "iobuf_set_options", 00:31:45.504 "params": { 00:31:45.504 "small_pool_count": 8192, 00:31:45.504 "large_pool_count": 1024, 00:31:45.504 "small_bufsize": 8192, 00:31:45.504 "large_bufsize": 135168 00:31:45.504 } 00:31:45.504 } 00:31:45.504 ] 00:31:45.504 }, 00:31:45.504 { 00:31:45.504 "subsystem": "sock", 00:31:45.504 "config": [ 00:31:45.504 { 00:31:45.504 "method": "sock_set_default_impl", 00:31:45.504 "params": { 00:31:45.504 "impl_name": "posix" 00:31:45.504 } 00:31:45.504 }, 00:31:45.504 { 00:31:45.504 "method": "sock_impl_set_options", 00:31:45.504 "params": { 00:31:45.504 "impl_name": "ssl", 00:31:45.504 "recv_buf_size": 4096, 00:31:45.504 "send_buf_size": 4096, 00:31:45.504 "enable_recv_pipe": true, 00:31:45.504 "enable_quickack": false, 00:31:45.504 "enable_placement_id": 0, 00:31:45.504 "enable_zerocopy_send_server": true, 00:31:45.504 "enable_zerocopy_send_client": false, 00:31:45.504 "zerocopy_threshold": 0, 00:31:45.504 "tls_version": 0, 00:31:45.504 "enable_ktls": false 00:31:45.504 } 00:31:45.504 }, 00:31:45.504 { 00:31:45.504 "method": "sock_impl_set_options", 00:31:45.504 "params": { 00:31:45.504 "impl_name": "posix", 00:31:45.504 "recv_buf_size": 2097152, 00:31:45.504 "send_buf_size": 2097152, 00:31:45.504 "enable_recv_pipe": true, 00:31:45.504 "enable_quickack": false, 00:31:45.504 "enable_placement_id": 0, 00:31:45.504 "enable_zerocopy_send_server": true, 00:31:45.504 "enable_zerocopy_send_client": false, 00:31:45.504 "zerocopy_threshold": 0, 00:31:45.504 "tls_version": 0, 00:31:45.504 "enable_ktls": false 00:31:45.504 } 00:31:45.504 } 00:31:45.504 ] 00:31:45.504 }, 00:31:45.504 { 00:31:45.504 "subsystem": "vmd", 00:31:45.504 "config": [] 00:31:45.504 }, 00:31:45.504 { 00:31:45.504 "subsystem": "accel", 00:31:45.504 "config": [ 00:31:45.504 { 00:31:45.504 "method": "accel_set_options", 00:31:45.504 "params": { 00:31:45.504 "small_cache_size": 128, 00:31:45.504 "large_cache_size": 16, 00:31:45.504 "task_count": 2048, 00:31:45.504 "sequence_count": 2048, 00:31:45.504 "buf_count": 2048 00:31:45.504 } 00:31:45.504 } 00:31:45.504 ] 00:31:45.504 }, 00:31:45.504 { 00:31:45.504 "subsystem": "bdev", 00:31:45.504 "config": [ 00:31:45.504 { 00:31:45.504 "method": "bdev_set_options", 00:31:45.504 "params": { 00:31:45.504 "bdev_io_pool_size": 65535, 00:31:45.504 "bdev_io_cache_size": 256, 00:31:45.504 "bdev_auto_examine": true, 00:31:45.504 "iobuf_small_cache_size": 128, 00:31:45.504 "iobuf_large_cache_size": 16 
00:31:45.504 } 00:31:45.504 }, 00:31:45.504 { 00:31:45.504 "method": "bdev_raid_set_options", 00:31:45.504 "params": { 00:31:45.504 "process_window_size_kb": 1024 00:31:45.504 } 00:31:45.504 }, 00:31:45.504 { 00:31:45.504 "method": "bdev_iscsi_set_options", 00:31:45.504 "params": { 00:31:45.504 "timeout_sec": 30 00:31:45.504 } 00:31:45.504 }, 00:31:45.504 { 00:31:45.504 "method": "bdev_nvme_set_options", 00:31:45.504 "params": { 00:31:45.504 "action_on_timeout": "none", 00:31:45.504 "timeout_us": 0, 00:31:45.504 "timeout_admin_us": 0, 00:31:45.504 "keep_alive_timeout_ms": 10000, 00:31:45.504 "arbitration_burst": 0, 00:31:45.504 "low_priority_weight": 0, 00:31:45.504 "medium_priority_weight": 0, 00:31:45.504 "high_priority_weight": 0, 00:31:45.504 "nvme_adminq_poll_period_us": 10000, 00:31:45.504 "nvme_ioq_poll_period_us": 0, 00:31:45.504 "io_queue_requests": 512, 00:31:45.504 "delay_cmd_submit": true, 00:31:45.504 "transport_retry_count": 4, 00:31:45.504 "bdev_retry_count": 3, 00:31:45.504 "transport_ack_timeout": 0, 00:31:45.504 "ctrlr_loss_timeout_sec": 0, 00:31:45.504 "reconnect_delay_sec": 0, 00:31:45.504 "fast_io_fail_timeout_sec": 0, 00:31:45.504 "disable_auto_failback": false, 00:31:45.504 "generate_uuids": false, 00:31:45.504 "transport_tos": 0, 00:31:45.504 "nvme_error_stat": false, 00:31:45.504 "rdma_srq_size": 0, 00:31:45.504 "io_path_stat": false, 00:31:45.504 "allow_accel_sequence": false, 00:31:45.504 "rdma_max_cq_size": 0, 00:31:45.504 "rdma_cm_event_timeout_ms": 0, 00:31:45.504 "dhchap_digests": [ 00:31:45.504 "sha256", 00:31:45.504 "sha384", 00:31:45.504 "sha512" 00:31:45.504 ], 00:31:45.504 "dhchap_dhgroups": [ 00:31:45.504 "null", 00:31:45.504 "ffdhe2048", 00:31:45.504 "ffdhe3072", 00:31:45.504 "ffdhe4096", 00:31:45.504 "ffdhe6144", 00:31:45.504 "ffdhe8192" 00:31:45.504 ] 00:31:45.504 } 00:31:45.504 }, 00:31:45.504 { 00:31:45.504 "method": "bdev_nvme_attach_controller", 00:31:45.504 "params": { 00:31:45.504 "name": "nvme0", 00:31:45.504 "trtype": "TCP", 00:31:45.504 "adrfam": "IPv4", 00:31:45.504 "traddr": "127.0.0.1", 00:31:45.504 "trsvcid": "4420", 00:31:45.504 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:45.504 "prchk_reftag": false, 00:31:45.504 "prchk_guard": false, 00:31:45.504 "ctrlr_loss_timeout_sec": 0, 00:31:45.504 "reconnect_delay_sec": 0, 00:31:45.504 "fast_io_fail_timeout_sec": 0, 00:31:45.504 "psk": "key0", 00:31:45.504 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:31:45.504 "hdgst": false, 00:31:45.504 "ddgst": false 00:31:45.504 } 00:31:45.504 }, 00:31:45.504 { 00:31:45.504 "method": "bdev_nvme_set_hotplug", 00:31:45.504 "params": { 00:31:45.504 "period_us": 100000, 00:31:45.504 "enable": false 00:31:45.504 } 00:31:45.504 }, 00:31:45.504 { 00:31:45.504 "method": "bdev_wait_for_examine" 00:31:45.504 } 00:31:45.504 ] 00:31:45.504 }, 00:31:45.504 { 00:31:45.504 "subsystem": "nbd", 00:31:45.504 "config": [] 00:31:45.504 } 00:31:45.504 ] 00:31:45.504 }' 00:31:45.504 [2024-07-15 16:13:14.353688] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 
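killprocess and waitforlisten are the autotest_common.sh process helpers whose expansions recur throughout this log. Condensed sketches under the assumption that the retry loop probes the socket with rpc_get_methods (the real versions carry extra sudo and error handling):

    killprocess() {
        local pid=$1
        kill -0 "$pid"                      # fails fast if the process is already gone
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"       # reap it; bdevperf prints its shutdown latency table here
    }

    waitforlisten() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} max_retries=100 i
        echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
        for ((i = 0; i < max_retries; i++)); do
            "$rootdir/scripts/rpc.py" -t 1 -s "$rpc_addr" rpc_get_methods &> /dev/null && return 0
            kill -0 "$pid"                  # abort if the app died while we were waiting
            sleep 0.5
        done
        return 1
    }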
00:31:45.504 [2024-07-15 16:13:14.353737] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3966702 ] 00:31:45.504 EAL: No free 2048 kB hugepages reported on node 1 00:31:45.504 [2024-07-15 16:13:14.407424] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:45.762 [2024-07-15 16:13:14.480306] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:31:45.762 [2024-07-15 16:13:14.638745] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:31:46.329 16:13:15 keyring_file -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:31:46.329 16:13:15 keyring_file -- common/autotest_common.sh@862 -- # return 0 00:31:46.329 16:13:15 keyring_file -- keyring/file.sh@120 -- # jq length 00:31:46.329 16:13:15 keyring_file -- keyring/file.sh@120 -- # bperf_cmd keyring_get_keys 00:31:46.329 16:13:15 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:31:46.587 16:13:15 keyring_file -- keyring/file.sh@120 -- # (( 2 == 2 )) 00:31:46.587 16:13:15 keyring_file -- keyring/file.sh@121 -- # get_refcnt key0 00:31:46.587 16:13:15 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:31:46.587 16:13:15 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:31:46.587 16:13:15 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:31:46.587 16:13:15 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:31:46.587 16:13:15 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:31:46.587 16:13:15 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:31:46.587 16:13:15 keyring_file -- keyring/file.sh@122 -- # get_refcnt key1 00:31:46.587 16:13:15 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:31:46.587 16:13:15 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:31:46.587 16:13:15 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:31:46.587 16:13:15 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:31:46.587 16:13:15 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:31:46.845 16:13:15 keyring_file -- keyring/file.sh@122 -- # (( 1 == 1 )) 00:31:46.845 16:13:15 keyring_file -- keyring/file.sh@123 -- # bperf_cmd bdev_nvme_get_controllers 00:31:46.845 16:13:15 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:31:46.845 16:13:15 keyring_file -- keyring/file.sh@123 -- # jq -r '.[].name' 00:31:47.103 16:13:15 keyring_file -- keyring/file.sh@123 -- # [[ nvme0 == nvme0 ]] 00:31:47.103 16:13:15 keyring_file -- keyring/file.sh@1 -- # cleanup 00:31:47.103 16:13:15 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.BbacyUklRF /tmp/tmp.z4Yg06nChE 00:31:47.103 16:13:15 keyring_file -- keyring/file.sh@20 -- # killprocess 3966702 00:31:47.103 16:13:15 keyring_file -- common/autotest_common.sh@948 -- # '[' -z 3966702 ']' 00:31:47.103 16:13:15 keyring_file -- common/autotest_common.sh@952 -- # kill -0 3966702 00:31:47.103 16:13:15 keyring_file -- 
common/autotest_common.sh@953 -- # uname 00:31:47.103 16:13:15 keyring_file -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:31:47.103 16:13:15 keyring_file -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3966702 00:31:47.103 16:13:15 keyring_file -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:31:47.103 16:13:15 keyring_file -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:31:47.103 16:13:15 keyring_file -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3966702' 00:31:47.103 killing process with pid 3966702 00:31:47.103 16:13:15 keyring_file -- common/autotest_common.sh@967 -- # kill 3966702 00:31:47.103 Received shutdown signal, test time was about 1.000000 seconds 00:31:47.103 00:31:47.103 Latency(us) 00:31:47.103 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:47.103 =================================================================================================================== 00:31:47.103 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:31:47.103 16:13:15 keyring_file -- common/autotest_common.sh@972 -- # wait 3966702 00:31:47.362 16:13:16 keyring_file -- keyring/file.sh@21 -- # killprocess 3965013 00:31:47.362 16:13:16 keyring_file -- common/autotest_common.sh@948 -- # '[' -z 3965013 ']' 00:31:47.362 16:13:16 keyring_file -- common/autotest_common.sh@952 -- # kill -0 3965013 00:31:47.362 16:13:16 keyring_file -- common/autotest_common.sh@953 -- # uname 00:31:47.362 16:13:16 keyring_file -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:31:47.362 16:13:16 keyring_file -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3965013 00:31:47.362 16:13:16 keyring_file -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:31:47.362 16:13:16 keyring_file -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:31:47.362 16:13:16 keyring_file -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3965013' 00:31:47.362 killing process with pid 3965013 00:31:47.362 16:13:16 keyring_file -- common/autotest_common.sh@967 -- # kill 3965013 00:31:47.362 [2024-07-15 16:13:16.104904] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:31:47.362 16:13:16 keyring_file -- common/autotest_common.sh@972 -- # wait 3965013 00:31:47.620 00:31:47.620 real 0m11.903s 00:31:47.620 user 0m28.167s 00:31:47.620 sys 0m2.691s 00:31:47.620 16:13:16 keyring_file -- common/autotest_common.sh@1124 -- # xtrace_disable 00:31:47.620 16:13:16 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:31:47.620 ************************************ 00:31:47.620 END TEST keyring_file 00:31:47.620 ************************************ 00:31:47.620 16:13:16 -- common/autotest_common.sh@1142 -- # return 0 00:31:47.620 16:13:16 -- spdk/autotest.sh@296 -- # [[ y == y ]] 00:31:47.620 16:13:16 -- spdk/autotest.sh@297 -- # run_test keyring_linux /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:31:47.621 16:13:16 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:31:47.621 16:13:16 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:47.621 16:13:16 -- common/autotest_common.sh@10 -- # set +x 00:31:47.621 ************************************ 00:31:47.621 START TEST keyring_linux 00:31:47.621 ************************************ 00:31:47.621 16:13:16 keyring_linux -- common/autotest_common.sh@1123 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:31:47.621 * Looking for test storage... 00:31:47.621 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:31:47.621 16:13:16 keyring_linux -- keyring/linux.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:31:47.621 16:13:16 keyring_linux -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:47.621 16:13:16 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:31:47.885 16:13:16 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:47.885 16:13:16 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:47.885 16:13:16 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:47.885 16:13:16 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:47.885 16:13:16 keyring_linux -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:47.885 16:13:16 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:47.885 16:13:16 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:47.885 16:13:16 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:47.885 16:13:16 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:47.885 16:13:16 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:47.885 16:13:16 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:31:47.885 16:13:16 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:31:47.885 16:13:16 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:47.885 16:13:16 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:47.885 16:13:16 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:47.885 16:13:16 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:47.885 16:13:16 keyring_linux -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:47.885 16:13:16 keyring_linux -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:47.885 16:13:16 keyring_linux -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:47.885 16:13:16 keyring_linux -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:47.885 16:13:16 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:47.885 16:13:16 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:47.885 16:13:16 keyring_linux -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:47.885 16:13:16 keyring_linux -- paths/export.sh@5 -- # export PATH 00:31:47.885 16:13:16 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:47.885 16:13:16 keyring_linux -- nvmf/common.sh@47 -- # : 0 00:31:47.885 16:13:16 keyring_linux -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:31:47.885 16:13:16 keyring_linux -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:31:47.885 16:13:16 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:47.885 16:13:16 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:47.885 16:13:16 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:47.885 16:13:16 keyring_linux -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:31:47.885 16:13:16 keyring_linux -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:31:47.885 16:13:16 keyring_linux -- nvmf/common.sh@51 -- # have_pci_nics=0 00:31:47.885 16:13:16 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:31:47.885 16:13:16 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:31:47.885 16:13:16 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:31:47.885 16:13:16 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:31:47.885 16:13:16 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:31:47.885 16:13:16 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:31:47.885 16:13:16 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:31:47.885 16:13:16 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:31:47.885 16:13:16 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:31:47.885 16:13:16 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:31:47.885 16:13:16 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:31:47.885 16:13:16 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:31:47.885 16:13:16 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:31:47.885 16:13:16 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:31:47.885 16:13:16 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:31:47.885 16:13:16 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:31:47.885 16:13:16 keyring_linux -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:31:47.885 16:13:16 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:31:47.885 16:13:16 keyring_linux -- nvmf/common.sh@705 -- # python - 00:31:47.885 16:13:16 keyring_linux -- 
keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:31:47.885 16:13:16 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:31:47.885 /tmp/:spdk-test:key0 00:31:47.885 16:13:16 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:31:47.885 16:13:16 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:31:47.885 16:13:16 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:31:47.885 16:13:16 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:31:47.885 16:13:16 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:31:47.885 16:13:16 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:31:47.885 16:13:16 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:31:47.885 16:13:16 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:31:47.885 16:13:16 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:31:47.885 16:13:16 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:31:47.885 16:13:16 keyring_linux -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:31:47.885 16:13:16 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:31:47.885 16:13:16 keyring_linux -- nvmf/common.sh@705 -- # python - 00:31:47.885 16:13:16 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:31:47.885 16:13:16 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:31:47.885 /tmp/:spdk-test:key1 00:31:47.885 16:13:16 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=3967242 00:31:47.885 16:13:16 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 3967242 00:31:47.885 16:13:16 keyring_linux -- keyring/linux.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:31:47.885 16:13:16 keyring_linux -- common/autotest_common.sh@829 -- # '[' -z 3967242 ']' 00:31:47.885 16:13:16 keyring_linux -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:47.885 16:13:16 keyring_linux -- common/autotest_common.sh@834 -- # local max_retries=100 00:31:47.885 16:13:16 keyring_linux -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:47.885 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:47.885 16:13:16 keyring_linux -- common/autotest_common.sh@838 -- # xtrace_disable 00:31:47.885 16:13:16 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:31:47.885 [2024-07-15 16:13:16.715496] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 
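The keyring_linux setup writes two interchange-format PSKs to the fixed paths /tmp/:spdk-test:key0 and /tmp/:spdk-test:key1 (key1 holds different key material, 112233...00, so the later attach with it is expected to fail), arms the cleanup trap from linux.sh@45, and starts spdk_tgt as the NVMe-oF/TCP target. A minimal sketch of that launch sequence:

    trap cleanup EXIT                      # always unlink the test keys on exit
    "$rootdir/build/bin/spdk_tgt" &
    tgtpid=$!
    waitforlisten "$tgtpid"                # default RPC socket /var/tmp/spdk.sock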
00:31:47.885 [2024-07-15 16:13:16.715547] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3967242 ] 00:31:47.885 EAL: No free 2048 kB hugepages reported on node 1 00:31:47.885 [2024-07-15 16:13:16.767748] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:48.183 [2024-07-15 16:13:16.848882] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:31:48.748 16:13:17 keyring_linux -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:31:48.748 16:13:17 keyring_linux -- common/autotest_common.sh@862 -- # return 0 00:31:48.748 16:13:17 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:31:48.748 16:13:17 keyring_linux -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:48.748 16:13:17 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:31:48.748 [2024-07-15 16:13:17.515894] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:48.748 null0 00:31:48.748 [2024-07-15 16:13:17.547951] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:31:48.748 [2024-07-15 16:13:17.548281] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:31:48.748 16:13:17 keyring_linux -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:48.748 16:13:17 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:31:48.748 985163867 00:31:48.748 16:13:17 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:31:48.748 770783149 00:31:48.748 16:13:17 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=3967325 00:31:48.748 16:13:17 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 3967325 /var/tmp/bperf.sock 00:31:48.748 16:13:17 keyring_linux -- keyring/linux.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:31:48.748 16:13:17 keyring_linux -- common/autotest_common.sh@829 -- # '[' -z 3967325 ']' 00:31:48.748 16:13:17 keyring_linux -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:31:48.748 16:13:17 keyring_linux -- common/autotest_common.sh@834 -- # local max_retries=100 00:31:48.748 16:13:17 keyring_linux -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:31:48.748 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:31:48.748 16:13:17 keyring_linux -- common/autotest_common.sh@838 -- # xtrace_disable 00:31:48.748 16:13:17 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:31:48.748 [2024-07-15 16:13:17.619729] Starting SPDK v24.09-pre git sha1 a95bbf233 / DPDK 24.03.0 initialization... 
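The two keyctl add user calls above load the interchange-format PSKs into the session keyring (@s) and print the serial numbers (985163867 and 770783149) that SPDK will later resolve by name. bdevperf itself is launched with --wait-for-rpc so the Linux keyring provider can be switched on before subsystem initialization. The sequence, in sketch form:

    keyctl add user :spdk-test:key0 "$(< /tmp/:spdk-test:key0)" @s   # -> 985163867
    keyctl add user :spdk-test:key1 "$(< /tmp/:spdk-test:key1)" @s   # -> 770783149
    bperf_cmd keyring_linux_set_options --enable
    bperf_cmd framework_start_init
    bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0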
00:31:48.748 [2024-07-15 16:13:17.619771] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3967325 ] 00:31:48.748 EAL: No free 2048 kB hugepages reported on node 1 00:31:48.748 [2024-07-15 16:13:17.674379] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:49.006 [2024-07-15 16:13:17.754655] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:31:49.572 16:13:18 keyring_linux -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:31:49.572 16:13:18 keyring_linux -- common/autotest_common.sh@862 -- # return 0 00:31:49.572 16:13:18 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:31:49.572 16:13:18 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:31:49.831 16:13:18 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:31:49.831 16:13:18 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:31:50.090 16:13:18 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:31:50.090 16:13:18 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:31:50.090 [2024-07-15 16:13:18.986382] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:31:50.349 nvme0n1 00:31:50.349 16:13:19 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 00:31:50.349 16:13:19 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:31:50.349 16:13:19 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:31:50.349 16:13:19 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:31:50.349 16:13:19 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:31:50.349 16:13:19 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:31:50.349 16:13:19 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:31:50.349 16:13:19 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:31:50.349 16:13:19 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:31:50.349 16:13:19 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:31:50.349 16:13:19 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:31:50.349 16:13:19 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:31:50.349 16:13:19 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:31:50.608 16:13:19 keyring_linux -- keyring/linux.sh@25 -- # sn=985163867 00:31:50.608 16:13:19 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:31:50.608 16:13:19 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user 
:spdk-test:key0 00:31:50.608 16:13:19 keyring_linux -- keyring/linux.sh@26 -- # [[ 985163867 == \9\8\5\1\6\3\8\6\7 ]] 00:31:50.608 16:13:19 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 985163867 00:31:50.608 16:13:19 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:31:50.608 16:13:19 keyring_linux -- keyring/linux.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:31:50.608 Running I/O for 1 seconds... 00:31:51.985 00:31:51.985 Latency(us) 00:31:51.985 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:51.985 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:31:51.985 nvme0n1 : 1.01 15535.66 60.69 0.00 0.00 8203.75 4957.94 13392.14 00:31:51.985 =================================================================================================================== 00:31:51.985 Total : 15535.66 60.69 0.00 0.00 8203.75 4957.94 13392.14 00:31:51.985 0 00:31:51.985 16:13:20 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:31:51.985 16:13:20 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:31:51.985 16:13:20 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:31:51.985 16:13:20 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:31:51.985 16:13:20 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:31:51.985 16:13:20 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:31:51.985 16:13:20 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:31:51.985 16:13:20 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:31:51.985 16:13:20 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:31:51.985 16:13:20 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:31:51.985 16:13:20 keyring_linux -- keyring/linux.sh@23 -- # return 00:31:51.985 16:13:20 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:31:51.985 16:13:20 keyring_linux -- common/autotest_common.sh@648 -- # local es=0 00:31:51.985 16:13:20 keyring_linux -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:31:51.985 16:13:20 keyring_linux -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:31:51.985 16:13:20 keyring_linux -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:31:51.985 16:13:20 keyring_linux -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:31:51.985 16:13:20 keyring_linux -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:31:51.985 16:13:20 keyring_linux -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:31:51.985 16:13:20 keyring_linux -- 
keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:31:52.244 [2024-07-15 16:13:21.058765] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:31:52.244 [2024-07-15 16:13:21.058802] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2021c30 (107): Transport endpoint is not connected 00:31:52.244 [2024-07-15 16:13:21.059797] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2021c30 (9): Bad file descriptor 00:31:52.244 [2024-07-15 16:13:21.060798] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:31:52.244 [2024-07-15 16:13:21.060807] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:31:52.244 [2024-07-15 16:13:21.060814] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:31:52.244 request: 00:31:52.244 { 00:31:52.244 "name": "nvme0", 00:31:52.244 "trtype": "tcp", 00:31:52.244 "traddr": "127.0.0.1", 00:31:52.244 "adrfam": "ipv4", 00:31:52.244 "trsvcid": "4420", 00:31:52.244 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:52.244 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:31:52.244 "prchk_reftag": false, 00:31:52.244 "prchk_guard": false, 00:31:52.244 "hdgst": false, 00:31:52.244 "ddgst": false, 00:31:52.244 "psk": ":spdk-test:key1", 00:31:52.244 "method": "bdev_nvme_attach_controller", 00:31:52.244 "req_id": 1 00:31:52.244 } 00:31:52.244 Got JSON-RPC error response 00:31:52.244 response: 00:31:52.244 { 00:31:52.244 "code": -5, 00:31:52.244 "message": "Input/output error" 00:31:52.244 } 00:31:52.244 16:13:21 keyring_linux -- common/autotest_common.sh@651 -- # es=1 00:31:52.244 16:13:21 keyring_linux -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:31:52.244 16:13:21 keyring_linux -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:31:52.244 16:13:21 keyring_linux -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:31:52.244 16:13:21 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:31:52.244 16:13:21 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:31:52.244 16:13:21 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:31:52.244 16:13:21 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:31:52.244 16:13:21 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:31:52.244 16:13:21 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:31:52.244 16:13:21 keyring_linux -- keyring/linux.sh@33 -- # sn=985163867 00:31:52.244 16:13:21 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 985163867 00:31:52.244 1 links removed 00:31:52.244 16:13:21 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:31:52.244 16:13:21 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:31:52.244 16:13:21 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:31:52.244 16:13:21 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:31:52.244 16:13:21 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:31:52.244 16:13:21 keyring_linux -- keyring/linux.sh@33 -- # sn=770783149 00:31:52.244 
16:13:21 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 770783149 00:31:52.244 1 links removed 00:31:52.244 16:13:21 keyring_linux -- keyring/linux.sh@41 -- # killprocess 3967325 00:31:52.244 16:13:21 keyring_linux -- common/autotest_common.sh@948 -- # '[' -z 3967325 ']' 00:31:52.244 16:13:21 keyring_linux -- common/autotest_common.sh@952 -- # kill -0 3967325 00:31:52.244 16:13:21 keyring_linux -- common/autotest_common.sh@953 -- # uname 00:31:52.244 16:13:21 keyring_linux -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:31:52.244 16:13:21 keyring_linux -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3967325 00:31:52.244 16:13:21 keyring_linux -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:31:52.244 16:13:21 keyring_linux -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:31:52.244 16:13:21 keyring_linux -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3967325' 00:31:52.244 killing process with pid 3967325 00:31:52.244 16:13:21 keyring_linux -- common/autotest_common.sh@967 -- # kill 3967325 00:31:52.244 Received shutdown signal, test time was about 1.000000 seconds 00:31:52.244 00:31:52.244 Latency(us) 00:31:52.244 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:52.244 =================================================================================================================== 00:31:52.244 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:31:52.244 16:13:21 keyring_linux -- common/autotest_common.sh@972 -- # wait 3967325 00:31:52.503 16:13:21 keyring_linux -- keyring/linux.sh@42 -- # killprocess 3967242 00:31:52.503 16:13:21 keyring_linux -- common/autotest_common.sh@948 -- # '[' -z 3967242 ']' 00:31:52.503 16:13:21 keyring_linux -- common/autotest_common.sh@952 -- # kill -0 3967242 00:31:52.503 16:13:21 keyring_linux -- common/autotest_common.sh@953 -- # uname 00:31:52.503 16:13:21 keyring_linux -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:31:52.503 16:13:21 keyring_linux -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3967242 00:31:52.503 16:13:21 keyring_linux -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:31:52.503 16:13:21 keyring_linux -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:31:52.503 16:13:21 keyring_linux -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3967242' 00:31:52.503 killing process with pid 3967242 00:31:52.503 16:13:21 keyring_linux -- common/autotest_common.sh@967 -- # kill 3967242 00:31:52.503 16:13:21 keyring_linux -- common/autotest_common.sh@972 -- # wait 3967242 00:31:52.762 00:31:52.762 real 0m5.187s 00:31:52.762 user 0m9.051s 00:31:52.762 sys 0m1.538s 00:31:52.762 16:13:21 keyring_linux -- common/autotest_common.sh@1124 -- # xtrace_disable 00:31:52.762 16:13:21 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:31:52.762 ************************************ 00:31:52.762 END TEST keyring_linux 00:31:52.762 ************************************ 00:31:52.762 16:13:21 -- common/autotest_common.sh@1142 -- # return 0 00:31:52.762 16:13:21 -- spdk/autotest.sh@308 -- # '[' 0 -eq 1 ']' 00:31:52.762 16:13:21 -- spdk/autotest.sh@312 -- # '[' 0 -eq 1 ']' 00:31:52.762 16:13:21 -- spdk/autotest.sh@316 -- # '[' 0 -eq 1 ']' 00:31:52.762 16:13:21 -- spdk/autotest.sh@321 -- # '[' 0 -eq 1 ']' 00:31:52.762 16:13:21 -- spdk/autotest.sh@330 -- # '[' 0 -eq 1 ']' 00:31:52.762 16:13:21 -- spdk/autotest.sh@335 -- # '[' 0 -eq 1 ']' 00:31:52.762 16:13:21 -- 
spdk/autotest.sh@339 -- # '[' 0 -eq 1 ']' 00:31:52.762 16:13:21 -- spdk/autotest.sh@343 -- # '[' 0 -eq 1 ']' 00:31:52.762 16:13:21 -- spdk/autotest.sh@347 -- # '[' 0 -eq 1 ']' 00:31:52.762 16:13:21 -- spdk/autotest.sh@352 -- # '[' 0 -eq 1 ']' 00:31:52.762 16:13:21 -- spdk/autotest.sh@356 -- # '[' 0 -eq 1 ']' 00:31:52.762 16:13:21 -- spdk/autotest.sh@363 -- # [[ 0 -eq 1 ]] 00:31:52.762 16:13:21 -- spdk/autotest.sh@367 -- # [[ 0 -eq 1 ]] 00:31:52.762 16:13:21 -- spdk/autotest.sh@371 -- # [[ 0 -eq 1 ]] 00:31:52.762 16:13:21 -- spdk/autotest.sh@375 -- # [[ 0 -eq 1 ]] 00:31:52.762 16:13:21 -- spdk/autotest.sh@380 -- # trap - SIGINT SIGTERM EXIT 00:31:52.762 16:13:21 -- spdk/autotest.sh@382 -- # timing_enter post_cleanup 00:31:52.762 16:13:21 -- common/autotest_common.sh@722 -- # xtrace_disable 00:31:52.762 16:13:21 -- common/autotest_common.sh@10 -- # set +x 00:31:52.762 16:13:21 -- spdk/autotest.sh@383 -- # autotest_cleanup 00:31:52.762 16:13:21 -- common/autotest_common.sh@1392 -- # local autotest_es=0 00:31:52.762 16:13:21 -- common/autotest_common.sh@1393 -- # xtrace_disable 00:31:53.021 16:13:21 -- common/autotest_common.sh@10 -- # set +x 00:31:57.210 INFO: APP EXITING 00:31:57.210 INFO: killing all VMs 00:31:57.210 INFO: killing vhost app 00:31:57.210 INFO: EXIT DONE 00:31:59.743 0000:5e:00.0 (8086 0a54): Already using the nvme driver 00:31:59.743 0000:00:04.7 (8086 2021): Already using the ioatdma driver 00:31:59.743 0000:00:04.6 (8086 2021): Already using the ioatdma driver 00:31:59.743 0000:00:04.5 (8086 2021): Already using the ioatdma driver 00:31:59.743 0000:00:04.4 (8086 2021): Already using the ioatdma driver 00:31:59.743 0000:00:04.3 (8086 2021): Already using the ioatdma driver 00:31:59.743 0000:00:04.2 (8086 2021): Already using the ioatdma driver 00:31:59.743 0000:00:04.1 (8086 2021): Already using the ioatdma driver 00:31:59.743 0000:00:04.0 (8086 2021): Already using the ioatdma driver 00:31:59.743 0000:80:04.7 (8086 2021): Already using the ioatdma driver 00:31:59.743 0000:80:04.6 (8086 2021): Already using the ioatdma driver 00:31:59.743 0000:80:04.5 (8086 2021): Already using the ioatdma driver 00:31:59.743 0000:80:04.4 (8086 2021): Already using the ioatdma driver 00:31:59.743 0000:80:04.3 (8086 2021): Already using the ioatdma driver 00:31:59.743 0000:80:04.2 (8086 2021): Already using the ioatdma driver 00:31:59.743 0000:80:04.1 (8086 2021): Already using the ioatdma driver 00:31:59.743 0000:80:04.0 (8086 2021): Already using the ioatdma driver 00:32:03.074 Cleaning 00:32:03.074 Removing: /var/run/dpdk/spdk0/config 00:32:03.074 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:32:03.074 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:32:03.074 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:32:03.074 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:32:03.074 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:32:03.074 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:32:03.074 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:32:03.074 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:32:03.074 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:32:03.074 Removing: /var/run/dpdk/spdk0/hugepage_info 00:32:03.074 Removing: /var/run/dpdk/spdk1/config 00:32:03.074 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:32:03.074 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:32:03.074 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:32:03.074 Removing: 
/var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:32:03.074 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:32:03.074 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:32:03.074 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:32:03.074 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:32:03.074 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:32:03.074 Removing: /var/run/dpdk/spdk1/hugepage_info 00:32:03.074 Removing: /var/run/dpdk/spdk1/mp_socket 00:32:03.074 Removing: /var/run/dpdk/spdk2/config 00:32:03.074 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:32:03.074 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:32:03.074 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:32:03.074 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:32:03.074 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:32:03.074 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:32:03.074 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:32:03.074 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:32:03.074 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:32:03.074 Removing: /var/run/dpdk/spdk2/hugepage_info 00:32:03.074 Removing: /var/run/dpdk/spdk3/config 00:32:03.074 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:32:03.074 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:32:03.074 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:32:03.074 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:32:03.074 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:32:03.074 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:32:03.074 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:32:03.074 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:32:03.074 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:32:03.074 Removing: /var/run/dpdk/spdk3/hugepage_info 00:32:03.074 Removing: /var/run/dpdk/spdk4/config 00:32:03.074 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:32:03.074 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:32:03.074 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:32:03.074 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:32:03.074 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:32:03.074 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:32:03.075 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:32:03.075 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:32:03.075 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:32:03.075 Removing: /var/run/dpdk/spdk4/hugepage_info 00:32:03.075 Removing: /dev/shm/bdev_svc_trace.1 00:32:03.075 Removing: /dev/shm/nvmf_trace.0 00:32:03.075 Removing: /dev/shm/spdk_tgt_trace.pid3581798 00:32:03.075 Removing: /var/run/dpdk/spdk0 00:32:03.075 Removing: /var/run/dpdk/spdk1 00:32:03.075 Removing: /var/run/dpdk/spdk2 00:32:03.075 Removing: /var/run/dpdk/spdk3 00:32:03.075 Removing: /var/run/dpdk/spdk4 00:32:03.075 Removing: /var/run/dpdk/spdk_pid3579520 00:32:03.075 Removing: /var/run/dpdk/spdk_pid3580722 00:32:03.075 Removing: /var/run/dpdk/spdk_pid3581798 00:32:03.075 Removing: /var/run/dpdk/spdk_pid3582433 00:32:03.075 Removing: /var/run/dpdk/spdk_pid3583379 00:32:03.075 Removing: /var/run/dpdk/spdk_pid3583617 00:32:03.075 Removing: /var/run/dpdk/spdk_pid3584595 00:32:03.075 Removing: /var/run/dpdk/spdk_pid3584814 00:32:03.075 Removing: /var/run/dpdk/spdk_pid3584944 00:32:03.075 Removing: /var/run/dpdk/spdk_pid3586544 00:32:03.075 Removing: 
/var/run/dpdk/spdk_pid3588001 00:32:03.075 Removing: /var/run/dpdk/spdk_pid3588337 00:32:03.075 Removing: /var/run/dpdk/spdk_pid3588625 00:32:03.075 Removing: /var/run/dpdk/spdk_pid3589226 00:32:03.075 Removing: /var/run/dpdk/spdk_pid3589611 00:32:03.075 Removing: /var/run/dpdk/spdk_pid3589866 00:32:03.075 Removing: /var/run/dpdk/spdk_pid3590114 00:32:03.075 Removing: /var/run/dpdk/spdk_pid3590393 00:32:03.075 Removing: /var/run/dpdk/spdk_pid3591155 00:32:03.075 Removing: /var/run/dpdk/spdk_pid3594129 00:32:03.075 Removing: /var/run/dpdk/spdk_pid3594406 00:32:03.075 Removing: /var/run/dpdk/spdk_pid3594775 00:32:03.075 Removing: /var/run/dpdk/spdk_pid3594884 00:32:03.075 Removing: /var/run/dpdk/spdk_pid3595372 00:32:03.075 Removing: /var/run/dpdk/spdk_pid3595485 00:32:03.075 Removing: /var/run/dpdk/spdk_pid3595880 00:32:03.075 Removing: /var/run/dpdk/spdk_pid3596106 00:32:03.075 Removing: /var/run/dpdk/spdk_pid3596374 00:32:03.075 Removing: /var/run/dpdk/spdk_pid3596489 00:32:03.075 Removing: /var/run/dpdk/spdk_pid3596643 00:32:03.075 Removing: /var/run/dpdk/spdk_pid3596875 00:32:03.075 Removing: /var/run/dpdk/spdk_pid3597344 00:32:03.075 Removing: /var/run/dpdk/spdk_pid3597563 00:32:03.075 Removing: /var/run/dpdk/spdk_pid3597873 00:32:03.075 Removing: /var/run/dpdk/spdk_pid3598173 00:32:03.075 Removing: /var/run/dpdk/spdk_pid3598262 00:32:03.075 Removing: /var/run/dpdk/spdk_pid3598325 00:32:03.075 Removing: /var/run/dpdk/spdk_pid3598584 00:32:03.075 Removing: /var/run/dpdk/spdk_pid3598829 00:32:03.075 Removing: /var/run/dpdk/spdk_pid3599082 00:32:03.075 Removing: /var/run/dpdk/spdk_pid3599328 00:32:03.075 Removing: /var/run/dpdk/spdk_pid3599577 00:32:03.075 Removing: /var/run/dpdk/spdk_pid3599828 00:32:03.075 Removing: /var/run/dpdk/spdk_pid3600077 00:32:03.075 Removing: /var/run/dpdk/spdk_pid3600327 00:32:03.075 Removing: /var/run/dpdk/spdk_pid3600579 00:32:03.075 Removing: /var/run/dpdk/spdk_pid3600828 00:32:03.075 Removing: /var/run/dpdk/spdk_pid3601080 00:32:03.075 Removing: /var/run/dpdk/spdk_pid3601344 00:32:03.075 Removing: /var/run/dpdk/spdk_pid3601609 00:32:03.075 Removing: /var/run/dpdk/spdk_pid3601868 00:32:03.075 Removing: /var/run/dpdk/spdk_pid3602133 00:32:03.075 Removing: /var/run/dpdk/spdk_pid3602393 00:32:03.075 Removing: /var/run/dpdk/spdk_pid3602667 00:32:03.075 Removing: /var/run/dpdk/spdk_pid3602943 00:32:03.075 Removing: /var/run/dpdk/spdk_pid3603209 00:32:03.075 Removing: /var/run/dpdk/spdk_pid3603512 00:32:03.075 Removing: /var/run/dpdk/spdk_pid3603614 00:32:03.075 Removing: /var/run/dpdk/spdk_pid3603921 00:32:03.075 Removing: /var/run/dpdk/spdk_pid3607563 00:32:03.075 Removing: /var/run/dpdk/spdk_pid3651913 00:32:03.075 Removing: /var/run/dpdk/spdk_pid3656158 00:32:03.075 Removing: /var/run/dpdk/spdk_pid3666203 00:32:03.075 Removing: /var/run/dpdk/spdk_pid3671593 00:32:03.075 Removing: /var/run/dpdk/spdk_pid3675378 00:32:03.075 Removing: /var/run/dpdk/spdk_pid3676059 00:32:03.075 Removing: /var/run/dpdk/spdk_pid3682178 00:32:03.075 Removing: /var/run/dpdk/spdk_pid3688557 00:32:03.075 Removing: /var/run/dpdk/spdk_pid3688573 00:32:03.075 Removing: /var/run/dpdk/spdk_pid3689355 00:32:03.075 Removing: /var/run/dpdk/spdk_pid3690193 00:32:03.075 Removing: /var/run/dpdk/spdk_pid3691107 00:32:03.075 Removing: /var/run/dpdk/spdk_pid3691615 00:32:03.075 Removing: /var/run/dpdk/spdk_pid3691795 00:32:03.075 Removing: /var/run/dpdk/spdk_pid3692023 00:32:03.075 Removing: /var/run/dpdk/spdk_pid3692041 00:32:03.075 Removing: /var/run/dpdk/spdk_pid3692047 00:32:03.075 Removing: 
/var/run/dpdk/spdk_pid3692958 00:32:03.075 Removing: /var/run/dpdk/spdk_pid3693874 00:32:03.075 Removing: /var/run/dpdk/spdk_pid3694785 00:32:03.075 Removing: /var/run/dpdk/spdk_pid3695261 00:32:03.075 Removing: /var/run/dpdk/spdk_pid3695266 00:32:03.075 Removing: /var/run/dpdk/spdk_pid3695514 00:32:03.075 Removing: /var/run/dpdk/spdk_pid3696735 00:32:03.075 Removing: /var/run/dpdk/spdk_pid3697933 00:32:03.075 Removing: /var/run/dpdk/spdk_pid3706276 00:32:03.075 Removing: /var/run/dpdk/spdk_pid3706525 00:32:03.075 Removing: /var/run/dpdk/spdk_pid3710549 00:32:03.075 Removing: /var/run/dpdk/spdk_pid3716406 00:32:03.075 Removing: /var/run/dpdk/spdk_pid3719006 00:32:03.075 Removing: /var/run/dpdk/spdk_pid3729919 00:32:03.075 Removing: /var/run/dpdk/spdk_pid3738805 00:32:03.075 Removing: /var/run/dpdk/spdk_pid3740437 00:32:03.075 Removing: /var/run/dpdk/spdk_pid3741374 00:32:03.075 Removing: /var/run/dpdk/spdk_pid3758162 00:32:03.075 Removing: /var/run/dpdk/spdk_pid3761935 00:32:03.075 Removing: /var/run/dpdk/spdk_pid3787357 00:32:03.075 Removing: /var/run/dpdk/spdk_pid3791816 00:32:03.075 Removing: /var/run/dpdk/spdk_pid3793590 00:32:03.075 Removing: /var/run/dpdk/spdk_pid3795460 00:32:03.075 Removing: /var/run/dpdk/spdk_pid3795698 00:32:03.075 Removing: /var/run/dpdk/spdk_pid3795885 00:32:03.075 Removing: /var/run/dpdk/spdk_pid3796037 00:32:03.075 Removing: /var/run/dpdk/spdk_pid3796686 00:32:03.075 Removing: /var/run/dpdk/spdk_pid3798514 00:32:03.075 Removing: /var/run/dpdk/spdk_pid3799507 00:32:03.075 Removing: /var/run/dpdk/spdk_pid3800006 00:32:03.075 Removing: /var/run/dpdk/spdk_pid3802110 00:32:03.075 Removing: /var/run/dpdk/spdk_pid3802829 00:32:03.075 Removing: /var/run/dpdk/spdk_pid3803555 00:32:03.075 Removing: /var/run/dpdk/spdk_pid3807702 00:32:03.075 Removing: /var/run/dpdk/spdk_pid3818058 00:32:03.075 Removing: /var/run/dpdk/spdk_pid3822090 00:32:03.075 Removing: /var/run/dpdk/spdk_pid3827841 00:32:03.075 Removing: /var/run/dpdk/spdk_pid3829218 00:32:03.075 Removing: /var/run/dpdk/spdk_pid3830693 00:32:03.075 Removing: /var/run/dpdk/spdk_pid3835179 00:32:03.075 Removing: /var/run/dpdk/spdk_pid3839227 00:32:03.075 Removing: /var/run/dpdk/spdk_pid3846578 00:32:03.075 Removing: /var/run/dpdk/spdk_pid3846581 00:32:03.075 Removing: /var/run/dpdk/spdk_pid3851283 00:32:03.075 Removing: /var/run/dpdk/spdk_pid3851511 00:32:03.075 Removing: /var/run/dpdk/spdk_pid3851743 00:32:03.075 Removing: /var/run/dpdk/spdk_pid3852091 00:32:03.075 Removing: /var/run/dpdk/spdk_pid3852203 00:32:03.335 Removing: /var/run/dpdk/spdk_pid3856693 00:32:03.335 Removing: /var/run/dpdk/spdk_pid3857185 00:32:03.335 Removing: /var/run/dpdk/spdk_pid3861885 00:32:03.335 Removing: /var/run/dpdk/spdk_pid3864879 00:32:03.335 Removing: /var/run/dpdk/spdk_pid3870253 00:32:03.335 Removing: /var/run/dpdk/spdk_pid3875590 00:32:03.335 Removing: /var/run/dpdk/spdk_pid3884158 00:32:03.335 Removing: /var/run/dpdk/spdk_pid3891231 00:32:03.335 Removing: /var/run/dpdk/spdk_pid3891276 00:32:03.335 Removing: /var/run/dpdk/spdk_pid3909280 00:32:03.335 Removing: /var/run/dpdk/spdk_pid3910207 00:32:03.335 Removing: /var/run/dpdk/spdk_pid3910888 00:32:03.335 Removing: /var/run/dpdk/spdk_pid3911582 00:32:03.335 Removing: /var/run/dpdk/spdk_pid3912435 00:32:03.335 Removing: /var/run/dpdk/spdk_pid3913041 00:32:03.335 Removing: /var/run/dpdk/spdk_pid3913731 00:32:03.335 Removing: /var/run/dpdk/spdk_pid3914424 00:32:03.335 Removing: /var/run/dpdk/spdk_pid3918680 00:32:03.335 Removing: /var/run/dpdk/spdk_pid3918911 00:32:03.335 Removing: 
/var/run/dpdk/spdk_pid3924906 00:32:03.335 Removing: /var/run/dpdk/spdk_pid3925037 00:32:03.335 Removing: /var/run/dpdk/spdk_pid3927264 00:32:03.335 Removing: /var/run/dpdk/spdk_pid3934906 00:32:03.335 Removing: /var/run/dpdk/spdk_pid3934975 00:32:03.335 Removing: /var/run/dpdk/spdk_pid3939864 00:32:03.335 Removing: /var/run/dpdk/spdk_pid3941840 00:32:03.335 Removing: /var/run/dpdk/spdk_pid3943748 00:32:03.335 Removing: /var/run/dpdk/spdk_pid3944988 00:32:03.335 Removing: /var/run/dpdk/spdk_pid3946961 00:32:03.335 Removing: /var/run/dpdk/spdk_pid3948196 00:32:03.335 Removing: /var/run/dpdk/spdk_pid3957042 00:32:03.335 Removing: /var/run/dpdk/spdk_pid3957504 00:32:03.335 Removing: /var/run/dpdk/spdk_pid3958161 00:32:03.335 Removing: /var/run/dpdk/spdk_pid3960342 00:32:03.335 Removing: /var/run/dpdk/spdk_pid3960898 00:32:03.335 Removing: /var/run/dpdk/spdk_pid3961366 00:32:03.335 Removing: /var/run/dpdk/spdk_pid3965013 00:32:03.335 Removing: /var/run/dpdk/spdk_pid3965181 00:32:03.335 Removing: /var/run/dpdk/spdk_pid3966702 00:32:03.335 Removing: /var/run/dpdk/spdk_pid3967242 00:32:03.335 Removing: /var/run/dpdk/spdk_pid3967325 00:32:03.335 Clean 00:32:03.335 16:13:32 -- common/autotest_common.sh@1451 -- # return 0 00:32:03.335 16:13:32 -- spdk/autotest.sh@384 -- # timing_exit post_cleanup 00:32:03.335 16:13:32 -- common/autotest_common.sh@728 -- # xtrace_disable 00:32:03.335 16:13:32 -- common/autotest_common.sh@10 -- # set +x 00:32:03.335 16:13:32 -- spdk/autotest.sh@386 -- # timing_exit autotest 00:32:03.335 16:13:32 -- common/autotest_common.sh@728 -- # xtrace_disable 00:32:03.335 16:13:32 -- common/autotest_common.sh@10 -- # set +x 00:32:03.595 16:13:32 -- spdk/autotest.sh@387 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:32:03.595 16:13:32 -- spdk/autotest.sh@389 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]] 00:32:03.595 16:13:32 -- spdk/autotest.sh@389 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log 00:32:03.595 16:13:32 -- spdk/autotest.sh@391 -- # hash lcov 00:32:03.595 16:13:32 -- spdk/autotest.sh@391 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:32:03.595 16:13:32 -- spdk/autotest.sh@393 -- # hostname 00:32:03.595 16:13:32 -- spdk/autotest.sh@393 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-wfp-08 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info 00:32:03.595 geninfo: WARNING: invalid characters removed from testname! 
00:32:25.555 16:13:52 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:32:26.123 16:13:54 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:32:28.030 16:13:56 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:32:29.936 16:13:58 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:32:31.838 16:14:00 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:32:33.217 16:14:02 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:32:35.119 16:14:03 -- spdk/autotest.sh@400 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:32:35.119 16:14:03 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:35.119 16:14:03 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:32:35.119 16:14:03 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:35.119 16:14:03 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:35.119 16:14:03 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:35.119 16:14:03 -- paths/export.sh@3 -- $ 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:35.119 16:14:03 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:35.119 16:14:03 -- paths/export.sh@5 -- $ export PATH 00:32:35.119 16:14:03 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:35.119 16:14:03 -- common/autobuild_common.sh@443 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:32:35.119 16:14:03 -- common/autobuild_common.sh@444 -- $ date +%s 00:32:35.119 16:14:03 -- common/autobuild_common.sh@444 -- $ mktemp -dt spdk_1721052843.XXXXXX 00:32:35.119 16:14:03 -- common/autobuild_common.sh@444 -- $ SPDK_WORKSPACE=/tmp/spdk_1721052843.jHD9kS 00:32:35.119 16:14:03 -- common/autobuild_common.sh@446 -- $ [[ -n '' ]] 00:32:35.119 16:14:03 -- common/autobuild_common.sh@450 -- $ '[' -n '' ']' 00:32:35.119 16:14:03 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/' 00:32:35.119 16:14:03 -- common/autobuild_common.sh@457 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:32:35.119 16:14:03 -- common/autobuild_common.sh@459 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:32:35.119 16:14:03 -- common/autobuild_common.sh@460 -- $ get_config_params 00:32:35.119 16:14:03 -- common/autotest_common.sh@396 -- $ xtrace_disable 00:32:35.119 16:14:03 -- common/autotest_common.sh@10 -- $ set +x 00:32:35.119 16:14:03 -- common/autobuild_common.sh@460 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user' 00:32:35.119 16:14:03 -- common/autobuild_common.sh@462 -- $ start_monitor_resources 00:32:35.119 16:14:03 -- pm/common@17 -- $ local monitor 00:32:35.119 16:14:03 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:32:35.119 16:14:03 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:32:35.119 16:14:03 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:32:35.119 16:14:03 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:32:35.119 16:14:03 -- pm/common@21 -- $ date +%s 00:32:35.119 16:14:03 -- pm/common@25 -- $ sleep 1 00:32:35.119 
16:14:03 -- pm/common@21 -- $ date +%s 00:32:35.119 16:14:03 -- pm/common@21 -- $ date +%s 00:32:35.119 16:14:03 -- pm/common@21 -- $ date +%s 00:32:35.119 16:14:03 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721052843 00:32:35.119 16:14:03 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721052843 00:32:35.119 16:14:03 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721052843 00:32:35.119 16:14:03 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721052843 00:32:35.119 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721052843_collect-vmstat.pm.log 00:32:35.119 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721052843_collect-cpu-load.pm.log 00:32:35.119 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721052843_collect-cpu-temp.pm.log 00:32:35.119 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721052843_collect-bmc-pm.bmc.pm.log 00:32:36.055 16:14:04 -- common/autobuild_common.sh@463 -- $ trap stop_monitor_resources EXIT 00:32:36.055 16:14:04 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j96 00:32:36.055 16:14:04 -- spdk/autopackage.sh@11 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:32:36.055 16:14:04 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]] 00:32:36.055 16:14:04 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]] 00:32:36.055 16:14:04 -- spdk/autopackage.sh@19 -- $ timing_finish 00:32:36.055 16:14:04 -- common/autotest_common.sh@734 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:32:36.055 16:14:04 -- common/autotest_common.sh@735 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']' 00:32:36.055 16:14:04 -- common/autotest_common.sh@737 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:32:36.055 16:14:04 -- spdk/autopackage.sh@20 -- $ exit 0 00:32:36.055 16:14:04 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources 00:32:36.055 16:14:04 -- pm/common@29 -- $ signal_monitor_resources TERM 00:32:36.055 16:14:04 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:32:36.055 16:14:04 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:32:36.055 16:14:04 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:32:36.055 16:14:04 -- pm/common@44 -- $ pid=3977439 00:32:36.055 16:14:04 -- pm/common@50 -- $ kill -TERM 3977439 00:32:36.055 16:14:04 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:32:36.055 16:14:04 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:32:36.055 16:14:04 -- pm/common@44 -- $ pid=3977440 00:32:36.055 16:14:04 -- pm/common@50 -- $ kill 
-TERM 3977440 00:32:36.055 16:14:04 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:32:36.055 16:14:04 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:32:36.055 16:14:04 -- pm/common@44 -- $ pid=3977442 00:32:36.055 16:14:04 -- pm/common@50 -- $ kill -TERM 3977442 00:32:36.055 16:14:04 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:32:36.055 16:14:04 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:32:36.055 16:14:04 -- pm/common@44 -- $ pid=3977467 00:32:36.055 16:14:04 -- pm/common@50 -- $ sudo -E kill -TERM 3977467 00:32:36.055 + [[ -n 3476154 ]] 00:32:36.055 + sudo kill 3476154 00:32:36.324 [Pipeline] } 00:32:36.341 [Pipeline] // stage 00:32:36.346 [Pipeline] } 00:32:36.365 [Pipeline] // timeout 00:32:36.370 [Pipeline] } 00:32:36.388 [Pipeline] // catchError 00:32:36.393 [Pipeline] } 00:32:36.409 [Pipeline] // wrap 00:32:36.415 [Pipeline] } 00:32:36.431 [Pipeline] // catchError 00:32:36.439 [Pipeline] stage 00:32:36.441 [Pipeline] { (Epilogue) 00:32:36.455 [Pipeline] catchError 00:32:36.457 [Pipeline] { 00:32:36.472 [Pipeline] echo 00:32:36.473 Cleanup processes 00:32:36.480 [Pipeline] sh 00:32:36.765 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:32:36.765 3977553 /usr/bin/ipmitool sdr dump /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/sdr.cache 00:32:36.765 3977838 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:32:36.836 [Pipeline] sh 00:32:37.118 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:32:37.118 ++ grep -v 'sudo pgrep' 00:32:37.118 ++ awk '{print $1}' 00:32:37.118 + sudo kill -9 3977553 00:32:37.129 [Pipeline] sh 00:32:37.410 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:32:47.446 [Pipeline] sh 00:32:47.730 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:32:47.730 Artifacts sizes are good 00:32:47.746 [Pipeline] archiveArtifacts 00:32:47.754 Archiving artifacts 00:32:47.912 [Pipeline] sh 00:32:48.197 + sudo chown -R sys_sgci /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:32:48.213 [Pipeline] cleanWs 00:32:48.224 [WS-CLEANUP] Deleting project workspace... 00:32:48.224 [WS-CLEANUP] Deferred wipeout is used... 00:32:48.231 [WS-CLEANUP] done 00:32:48.233 [Pipeline] } 00:32:48.256 [Pipeline] // catchError 00:32:48.274 [Pipeline] sh 00:32:48.562 + logger -p user.info -t JENKINS-CI 00:32:48.572 [Pipeline] } 00:32:48.592 [Pipeline] // stage 00:32:48.599 [Pipeline] } 00:32:48.619 [Pipeline] // node 00:32:48.625 [Pipeline] End of Pipeline 00:32:48.663 Finished: SUCCESS